CN112887582A - Image color processing method and device and related equipment - Google Patents


Info

Publication number
CN112887582A
CN112887582A
Authority
CN
China
Prior art keywords
image
face
target
parameter
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911207388.1A
Other languages
Chinese (zh)
Inventor
刘国祥
蔡金
汪舟杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HiSilicon Technologies Co Ltd
Original Assignee
HiSilicon Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HiSilicon Technologies Co Ltd filed Critical HiSilicon Technologies Co Ltd
Priority to CN201911207388.1A priority Critical patent/CN112887582A/en
Publication of CN112887582A publication Critical patent/CN112887582A/en
Withdrawn legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H04N9/73 Colour balance circuits, e.g. white balance circuits or colour temperature control

Abstract

The embodiment of the invention discloses an image color processing method, an image color processing apparatus, and related equipment, which can be applied in particular to cameras, smartphones, and the like to improve the quality of real-time straight-out-of-camera images during shooting. The method comprises the following steps: acquiring a target image, and determining a target face region and a background region in the target image; determining, from a preset image parameter set, a first image parameter matched with the face feature information of the target face region, based on that face feature information; determining a second image parameter based on the first image parameter and the image characteristic information of the target image; and adjusting the image parameters of the target image based on the second image parameter. The method and apparatus can be applied to many technical fields, such as intelligent image processing in the field of artificial intelligence (AI), can beautify the face skin color in the target image more intelligently and accurately, and can adjust the overall picture of the target image to be coordinated with the face skin color.

Description

Image color processing method and device and related equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image color processing method, an image color processing device, and a related apparatus.
Background
In portrait photography, people pay particular attention to the face region of an image. Because face skin color is a memory color, a skin-color cast caused by factors such as the shooting environment or shooting technique is easily perceived, and people's requirements on skin color are correspondingly strict. People generally want images shot in various environments to reproduce face skin color with high fidelity. In most cases, however, the goal of skin-color reproduction is not merely to reproduce the person's original skin color, but to approach the person's ideal skin color as closely as possible, so that the reproduced face skin color meets people's aesthetic expectations.
Therefore, when the straight-out-of-camera image of a shooting device such as a camera or a mobile phone does not meet quality requirements, people usually use image processing software to post-process the captured image, adjusting the face skin color, improving the overall image quality, and so on, which is time-consuming and labor-intensive. A high-quality straight-out image can greatly reduce the need for post-processing and improve user convenience; at the same time, it can reduce the artifacts caused by heavy-handed post-processing, avoiding irreversible problems that post-processing cannot recover from, such as overexposure and color clipping, when a camera, mobile phone, or other shooting device processes the captured raw data in real time.
Disclosure of Invention
The embodiment of the invention provides an image color processing method, an image color processing device and related equipment, which are used for more intelligently and more reasonably beautifying the complexion of a human face in a target image and improving the quality of the target image.
In a first aspect, an embodiment of the present invention provides an image color processing method, which may include: acquiring a target image, and determining a target face area and a background area in the target image; determining a first image parameter matched with the face feature information in the target face region from a preset image parameter set based on the face feature information in the target face region, wherein the face feature information comprises one or more of gender information, age information and race information corresponding to a face; determining a second image parameter according to the image characteristic information of the target image and based on the first image parameter; and adjusting the image parameter of the target image based on the second image parameter.
In the embodiment of the invention, a first image parameter matched with the target face region (for example, an image parameter corresponding to the face skin color that is common to, or preferred by, people of the given gender, age, and race) is determined from a preset image parameter set according to the face feature information of the target face region in the target image (for example, the gender information, age information, and race information corresponding to the face). A second image parameter of the face region is then determined by combining the first image parameter with the image characteristic information of the target image (for example, the light source color temperature, the illumination intensity, and the illumination ratio of the background region to the target face region), and the image parameters of the target image are adjusted based on the second image parameter. In this way, the image parameters of the face region are adjusted according to the gender, age, and race of the photographic subject and the actual shooting environment, and the image parameters of the entire target image are then adjusted with the image parameters of the face region given priority. The face is beautified while the overall picture is kept coordinated with it, improving the quality of the target image. In a specific application scenario, the embodiment of the invention can process, in real time, the original image captured by a camera or mobile phone in everyday shooting: beautifying the face, coordinating the overall picture, improving the quality of the real-time image of the shooting device, and meeting the aesthetic expectations of the user.
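The flow described above can be sketched as follows. This is a minimal, illustrative sketch only: the preset table, helper names, and the simple brightness-damping rule are all hypothetical stand-ins for whatever lookup and refinement logic an actual implementation would use.

```python
# Hypothetical preset table: (gender, age group, ethnicity) -> target (H, S, V).
# The values here are placeholders, not from the patent.
PRESET_PARAMS = {
    ("female", "young", "asian"): (20.0, 0.35, 0.85),
    ("male", "adult", "african"): (18.0, 0.45, 0.55),
}

def first_image_parameter(face_features):
    """Step 1: look up the preset HSV parameter matched to the face features."""
    return PRESET_PARAMS[face_features]

def second_image_parameter(first_param, scene_brightness):
    """Step 2: refine the preset toward the actual scene. In a dark scene the
    target brightness is lowered so the face stays coordinated with the picture
    (a toy refinement rule; scene_brightness is assumed normalized to [0, 1])."""
    h, s, v = first_param
    v = v * min(1.0, 0.5 + scene_brightness)  # damp V in dark scenes
    return (h, s, v)

# Example: a young Asian woman photographed at night (dark scene).
p1 = first_image_parameter(("female", "young", "asian"))
p2 = second_image_parameter(p1, scene_brightness=0.2)
```

Step 3, adjusting the whole target image based on the refined parameter, is discussed with the HSV adjustment implementation later in the description.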
Optionally, the target object in the present application is not limited to the target face region; that is, the photographic subject need not be a person, and may instead be another subject such as an animal, a plant, a building, or food.
In one possible implementation, the method further includes: acquiring an image set, wherein the image set comprises a plurality of face images, and the face images are obtained by shooting in a preset shooting environment; determining a plurality of kinds of face feature information corresponding to the plurality of face images in the image set; determining image parameters respectively matched with the various human face characteristic information based on hue parameters, saturation parameters and brightness parameters of each human face image in the multiple human face images in an HSV color space; and generating the preset image parameter set according to the image parameters respectively matched with the various human face feature information.
In the embodiment of the invention, differences in face feature information, that is, differences among photographic subjects, mean that the subjects' original or preferred skin colors differ. For example, the skin tone common to, or preferred by, young Asian women is a whitish one, while the skin tone of Black men or women is darker. According to the embodiment of the invention, a large number of face images obtained by shooting many different subjects in a preset shooting environment (for example, against a solid-color background under a uniform artificial light source) can be summarized and sorted based on the subjects' different face feature information and on the hue, saturation, and brightness parameters of the images in the HSV color space, so as to obtain image parameters respectively matched with the different kinds of face feature information and generate the preset image parameter set. In this way, the first image parameter determined from the face feature information of the target face region can be made as reasonable as possible according to the preset image parameter set, conforming to the actual conditions and preferences of most people.
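Building such a preset set amounts to grouping face crops by their feature information and aggregating HSV statistics per group. A minimal sketch, assuming RGB face crops normalized to [0, 1] and hypothetical feature keys:

```python
import colorsys
import numpy as np

def face_hsv_stats(face_rgb):
    """Mean hue/saturation/value of one face crop (RGB floats in [0, 1])."""
    flat = face_rgb.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    return hsv.mean(axis=0)

def build_preset_set(samples):
    """samples: list of (face_feature_key, face_rgb_array) pairs.
    Groups crops by feature key and averages the per-image HSV statistics."""
    groups = {}
    for key, img in samples:
        groups.setdefault(key, []).append(face_hsv_stats(img))
    return {key: np.mean(stats, axis=0) for key, stats in groups.items()}

# Toy data: one uniform colour patch per demographic group.
light = np.full((4, 4, 3), [0.9, 0.75, 0.65])  # lighter skin patch
dark = np.full((4, 4, 3), [0.45, 0.3, 0.2])    # darker skin patch
preset = build_preset_set([
    (("female", "young", "asian"), light),
    (("male", "adult", "african"), dark),
])
```

A real pipeline would of course use many images per group and may store richer statistics than a mean, but the grouping structure is the same.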
In one possible implementation, the method further includes: determining face color gamuts respectively matched with the various kinds of face feature information based on red-green components and yellow-blue components of each face image in the plurality of face images in an LAB color space; and generating a preset human face color gamut set according to the human face color gamuts respectively matched with the various human face characteristic information.
In the embodiment of the invention, differences in face feature information, that is, differences among photographic subjects, mean that the subjects' original or preferred skin colors differ. For example, the skin tone common to, or preferred by, young Asian women is a whitish one, while the skin tone of Black men or women is darker. For a large number of face images obtained by shooting many different subjects in the preset shooting environment (for example, against a solid-color background under a uniform artificial light source), the embodiment of the invention can obtain the face color gamuts respectively matched with the different kinds of face feature information, and generate the preset face color gamut set, based on the subjects' different face feature information and the red-green and yellow-blue components of the images in the LAB color space. By combining the HSV color space with the LAB color space, the skin color information matched with each kind of face feature information can be further refined: it may include not only the hue, saturation, and brightness parameters of the HSV color space but also the red-green and yellow-blue components of the LAB color space. This beautifies the face skin color more reasonably and accurately, coordinates the overall picture, and improves the quality of the target image.
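Collecting a per-group color gamut can be sketched as recording the range of the a (red-green) and b (yellow-blue) components over each group's face crops. The sketch below uses a crude opponent-color approximation in place of a true sRGB-to-CIELAB conversion (a deliberate simplification), and all names are illustrative:

```python
import numpy as np

def opponent_ab(rgb):
    """Crude red-green (a) and yellow-blue (b) opponent components; a
    stand-in for a full sRGB -> CIELAB conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    a = r - g                   # red-green axis
    yb = (r + g) / 2.0 - b      # yellow-blue axis
    return a, yb

def face_gamut(face_rgb):
    """Bounding ranges of the a/b components over one face crop."""
    a, yb = opponent_ab(face_rgb)
    return (a.min(), a.max()), (yb.min(), yb.max())

def build_gamut_set(samples):
    """samples: list of (feature_key, face_rgb). Unions per-image ranges
    so each key maps to the gamut covering all its sample crops."""
    gamuts = {}
    for key, img in samples:
        (a_lo, a_hi), (b_lo, b_hi) = face_gamut(img)
        if key in gamuts:
            (A_lo, A_hi), (B_lo, B_hi) = gamuts[key]
            gamuts[key] = ((min(A_lo, a_lo), max(A_hi, a_hi)),
                           (min(B_lo, b_lo), max(B_hi, b_hi)))
        else:
            gamuts[key] = ((a_lo, a_hi), (b_lo, b_hi))
    return gamuts

light = np.full((2, 2, 3), [0.9, 0.75, 0.65])  # toy uniform skin patch
gamuts = build_gamut_set([(("female", "young", "asian"), light)])
```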
In one possible implementation manner, a first face color gamut matched with the face feature information in the target face region is determined from the preset face color gamut set based on the face feature information in the target face region; and determining a second face color gamut according to the image characteristic information of the target image and based on the first face color gamut.
In the embodiment of the invention, the first face color gamut determined from the face feature information of the target face region can be made as reasonable as possible according to the preset face color gamut set, conforming to the actual conditions and preferences of most people. At the same time, the first face color gamut is adjusted in combination with the image characteristic information of the target image, that is, the actual shooting environment, to obtain the second face color gamut, so that the face skin color can be beautified more reasonably and accurately.
In one possible implementation, after the adjusting the image parameter of the target image based on the second image parameter, the method further includes: determining a third face color gamut of the target face region based on a red-green component and a yellow-blue component of the target face region in a current LAB color space; and adjusting the color gamut of the third face based on the color gamut of the second face.
In the embodiment of the invention, after the image parameters of the target image are adjusted, the color gamut of the current target face region in the LAB color space (the third face color gamut) is further adjusted based on the second face color gamut, so that the face skin color better matches the user's preferences and sense of aesthetics.
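One simple way to realize this gamut adjustment is to pull a/b values that fall outside the second face color gamut part-way toward its nearest bound. This is only one plausible scheme, not the patent's specified method; the function name and the `strength` parameter are hypothetical:

```python
import numpy as np

def pull_into_gamut(ab, target_lo, target_hi, strength=0.5):
    """Move component values lying outside [target_lo, target_hi] part-way
    toward the nearest bound (strength in [0, 1]); values already inside
    the target gamut are left untouched."""
    clamped = np.clip(ab, target_lo, target_hi)
    return ab + strength * (clamped - ab)
```

Applied separately to the face region's red-green and yellow-blue components, this nudges the third face color gamut toward the second without hard-clipping skin tones.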
In one possible implementation manner, the image feature information of the target image includes a brightness contrast of the target face region; the method further comprises the following steps: determining a first brightness statistic value higher than the brightness threshold value in the brightness histogram of the target face area and a second brightness statistic value lower than the brightness threshold value in the brightness histogram of the target face area based on the brightness histogram of the target face area and a preset brightness threshold value; and determining the brightness contrast of the target face area according to the ratio of the first brightness statistic value to the second brightness statistic value.
In the embodiment of the invention, because of factors such as the shooting angle and the illumination intensity, the brightness distribution of the target face region in the target image can vary widely, which often affects the face skin color in the target image and even the overall picture. According to the embodiment of the invention, the contrast between the highlights and shadows of the face region is calculated from the brightness histogram of the face region and a preset brightness threshold. This brightness contrast can serve as important feature information within the image characteristic information of the target image, so that the overall picture remains well coordinated while the face skin color is beautified.
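Concretely, the first and second brightness statistics can be read off a 256-bin luminance histogram split at the threshold. A minimal sketch, assuming 8-bit luma values and an illustrative default threshold of 128:

```python
import numpy as np

def brightness_contrast(face_luma, threshold=128):
    """Ratio of highlight pixel count to shadow pixel count in the face
    region, computed from a 256-bin luminance histogram of 8-bit luma."""
    hist, _ = np.histogram(face_luma, bins=256, range=(0, 256))
    high = hist[threshold:].sum()   # first brightness statistic (>= threshold)
    low = hist[:threshold].sum()    # second brightness statistic (< threshold)
    return high / max(low, 1)       # guard against an all-bright face
```

A value near 1 indicates evenly lit skin, while large or small values flag strong side lighting or shadowing on the face.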
In one possible implementation, the image characteristic information of the target image includes an illumination ratio of the target image; the method further comprises the following steps: determining a third brightness statistic value higher than the brightness threshold in the brightness histogram of the target image based on the brightness histogram of the target image and the preset brightness threshold; and determining the illumination ratio of the target image according to the ratio of the third brightness statistic value to the first brightness statistic value.
In the embodiment of the invention, because of factors such as the shooting angle and the light source, the brightness distribution of the target face region differs from that of the background region, and under backlit shooting the difference is usually large. According to the embodiment of the invention, the ratio of the highlights of the whole target image to the highlights of the target face region can be calculated from the brightness histogram of the whole target image and the brightness histogram of the target face region, giving the illumination ratio of the target image. This illumination ratio can serve as important feature information within the image characteristic information of the target image, so that the overall picture remains well coordinated while the face skin color is beautified.
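The illumination ratio follows the same histogram-splitting pattern: the third brightness statistic (whole-image highlight count) divided by the first (face highlight count). A sketch under the same 8-bit-luma assumption, with an illustrative threshold:

```python
import numpy as np

def illumination_ratio(image_luma, face_luma, threshold=128):
    """Ratio of the whole-image highlight count (third brightness statistic)
    to the face-region highlight count (first brightness statistic)."""
    img_hist, _ = np.histogram(image_luma, bins=256, range=(0, 256))
    face_hist, _ = np.histogram(face_luma, bins=256, range=(0, 256))
    img_high = img_hist[threshold:].sum()
    face_high = face_hist[threshold:].sum()
    return img_high / max(face_high, 1)  # guard: face may have no highlights
```

A large ratio (bright background, comparatively dark face) is characteristic of backlit shots, which is exactly the case the patent singles out.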
In one possible implementation, the image feature information of the target image includes a partition brightness statistic of the background area; the method further comprises the following steps: performing semantic segmentation on the background area to determine at least one semantic segmentation area; and determining a partition brightness statistic value of the background area according to the brightness histogram of each semantic segmentation area in the at least one semantic segmentation area.
In the embodiment of the invention, at least one semantic segmentation region is obtained by performing semantic segmentation on the background region (for example, the background may be segmented into a uniform background region, a sky region, a vegetation region, and so on), and brightness statistics are computed from the brightness histogram of each region to obtain a partition brightness statistic value for the background region, yielding a more accurate and more informative description of the background's brightness distribution. This partition brightness statistic can serve as important feature information within the image characteristic information of the target image, so that the overall picture remains well coordinated while the face skin color is beautified.
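Given a semantic label map for the background (produced by whatever segmentation model the implementation uses), the partition statistics reduce to a per-label histogram summary. The sketch below takes the histogram-weighted mean brightness as the statistic; the patent does not fix the exact statistic, so this choice is illustrative:

```python
import numpy as np

def partition_brightness(luma, label_map):
    """Per-segment brightness statistic from an 8-bit luma image and a
    semantic label map (e.g. 0 = uniform background, 1 = sky, 2 = vegetation).
    Returns {label: mean brightness computed from the segment's histogram}."""
    stats = {}
    for label in np.unique(label_map):
        region = luma[label_map == label]
        hist, _ = np.histogram(region, bins=256, range=(0, 256))
        stats[int(label)] = float((hist * np.arange(256)).sum() / hist.sum())
    return stats
```

Keeping one statistic per segment (rather than one for the whole background) lets a bright sky and a dark vegetation area each influence the global adjustment appropriately.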
In one possible implementation, the image feature information of the target image includes: one or more of the brightness contrast of the target face area, the illumination ratio of the target image, the partition brightness statistic value of the background area, the illumination intensity corresponding to the target image and the light source color temperature corresponding to the target image; the first image parameters comprise a first hue parameter, a first saturation parameter and a first brightness parameter which are matched with the face feature information in the target face region; the second image parameter includes a second hue parameter, a second saturation parameter, and a second brightness parameter obtained by adjusting one or more of the first hue parameter, the first saturation parameter, and the first brightness parameter based on image feature information of the target image.
In the embodiment of the present invention, a second image parameter coordinated with the target image is obtained by adjusting the first image parameter in combination with the image characteristic information of the target image (which may include, for example, the brightness contrast of the target face region, the illumination ratio of the target image, the partition brightness statistic value of the background region, the illumination intensity corresponding to the target image, and the light source color temperature corresponding to the target image). For example, when a young Asian woman is photographed at night, the original first image parameter may correspond to a whitish skin color; if the target image as a whole is a darker scene, one or more of the first hue parameter, the first saturation parameter, and the first brightness parameter included in the first image parameter can be adjusted (for example, by decreasing the brightness) to obtain a second image parameter coordinated with the night environment. In the end, the overall picture is coordinated with the face skin color while the face skin color is beautified, and the quality of the target image is improved.
In one possible implementation, the adjusting the image parameter of the target image based on the second image parameter includes: and adjusting one or more of a hue parameter, a saturation parameter and a brightness parameter of the target image in the HSV color space based on the second image parameter.
In the embodiment of the invention, the second image parameter of the target face region serves as the reference standard; that is, the face skin color obtained after the preferred skin color originally corresponding to the face feature information has been adjusted for the actual shooting conditions of the target image is given priority, and the hue, saturation, and brightness parameters of the target image in the HSV color space are adjusted accordingly. In this way, the overall picture is coordinated with the face skin color while the face skin color is beautified, and the target image is adjusted more reasonably and accurately to yield a higher-quality result.
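A simple realization of "adjusting the target image toward the second image parameter" is a per-channel blend of each pixel's HSV values toward the face-derived target. This is a toy scheme, not the patent's specified algorithm; hue wrap-around is ignored for simplicity, and the `weight` parameter (how strongly the global picture follows the face) is hypothetical:

```python
import numpy as np

def adjust_toward(hsv_image, target_hsv, weight=0.3):
    """Blend every pixel's H/S/V toward the second image parameter.
    hsv_image: float array of shape (H, W, 3); target_hsv: 3-tuple.
    Note: hue wrap-around at the 0/1 boundary is deliberately ignored here."""
    target = np.asarray(target_hsv, dtype=float)
    return (1.0 - weight) * hsv_image + weight * target
```

In practice one would adjust hue, saturation, and brightness with separate (and possibly spatially varying) strengths, but the face-priority idea is the same: the target comes from the face region, and the whole frame moves toward it.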
In one possible implementation, before the adjusting the image parameter of the target image based on the second image parameter, the method further includes: adjusting a color balance of the target image based on the second image parameter.
In the embodiment of the invention, by adjusting the color balance of the target image before adjusting its image parameters, the skin color of the target face region is made to match the second image parameter and the second face color gamut, further improving the beautification of the face skin color and the quality of the target image.
In a second aspect, an embodiment of the present invention provides an image color processing apparatus, which may include:
a first acquisition unit, configured to acquire a target image and determine a target face region and a background region in the target image;
the first image parameter determining unit is used for determining a first image parameter matched with the face feature information in the target face region from a preset image parameter set based on the face feature information in the target face region, wherein the face feature information comprises one or more of gender information, age information and race information corresponding to a face;
the second image parameter determining unit is used for determining a second image parameter according to the image characteristic information of the target image and based on the first image parameter;
and the first adjusting unit is used for adjusting the image parameters of the target image based on the second image parameters.
In one possible implementation, the apparatus further includes:
the second acquisition unit is used for acquiring an image set, wherein the image set comprises a plurality of face images, and the face images are obtained by shooting in a preset shooting environment;
the first determining unit is used for determining a plurality of kinds of face feature information corresponding to the plurality of face images in the image set;
the second determining unit is used for determining image parameters respectively matched with the various human face feature information based on the hue parameter, the saturation parameter and the brightness parameter of each human face image in the multiple human face images in the HSV color space;
and the first generating unit is used for generating the preset image parameter set according to the image parameters respectively matched with the various human face feature information.
In one possible implementation, the apparatus further includes:
the third determining unit is further configured to determine, based on a red-green component and a yellow-blue component of each of the plurality of face images in an LAB color space, a face color gamut to which the plurality of kinds of face feature information are respectively matched;
and the second generating unit is further used for generating a preset human face color gamut set according to the human face color gamuts respectively matched with the multiple kinds of human face feature information.
In one possible implementation, the apparatus further includes:
a first face color gamut determining unit, configured to determine, based on the face feature information in the target face region, a first face color gamut matching the face feature information in the target face region from the preset face color gamut set;
and the second face color gamut determining unit is used for determining a second face color gamut according to the image characteristic information of the target image and based on the first face color gamut.
In a possible implementation manner, after the adjusting the image parameter of the target image based on the second image parameter, the apparatus further includes:
a third face color gamut determining unit, configured to determine a third face color gamut of the target face region based on a red-green component and a yellow-blue component of the target face region in a current LAB color space;
and the second adjusting unit is used for adjusting the color gamut of the third face based on the color gamut of the second face.
In one possible implementation manner, the image feature information of the target image includes a brightness contrast of the target face region; the device further comprises:
a fourth determining unit, configured to determine, based on the luminance histogram of the target face region and a preset luminance threshold, a first luminance statistic value in the luminance histogram of the target face region that is higher than the luminance threshold, and a second luminance statistic value in the luminance histogram of the target face region that is lower than the luminance threshold;
and the fifth determining unit is used for determining the brightness contrast of the target face area according to the ratio of the first brightness statistic value to the second brightness statistic value.
In one possible implementation, the image characteristic information of the target image includes an illumination ratio of the target image; the device further comprises:
a sixth determining unit, configured to determine, based on the luminance histogram of the target image and the preset luminance threshold, a third luminance statistic value higher than the luminance threshold in the luminance histogram of the target image;
and the seventh determining unit is used for determining the illumination ratio of the target image according to the ratio of the third brightness statistic value to the first brightness statistic value.
In one possible implementation, the image feature information of the target image includes a partition brightness statistic of the background area; the device further comprises:
the semantic segmentation unit is used for performing semantic segmentation on the background area and determining at least one semantic segmentation area;
an eighth determining unit, configured to determine a partition luminance statistic of the background region according to a luminance histogram of each of the at least one semantic dividing region.
In one possible implementation, the image feature information of the target image includes: one or more of the brightness contrast of the target face area, the illumination ratio of the target image, the partition brightness statistic value of the background area, the illumination intensity corresponding to the target image and the light source color temperature corresponding to the target image;
the first image parameters comprise a first hue parameter, a first saturation parameter and a first brightness parameter which are matched with the face feature information in the target face region;
the second image parameter includes a second hue parameter, a second saturation parameter, and a second brightness parameter obtained by adjusting one or more of the first hue parameter, the first saturation parameter, and the first brightness parameter based on image feature information of the target image.
In a possible implementation manner, the first adjusting unit is specifically configured to:
and adjusting one or more of a hue parameter, a saturation parameter and a brightness parameter of the target image in the HSV color space based on the second image parameter.
In one possible implementation, the apparatus further includes:
and the third adjusting unit is used for adjusting the color balance of the target image based on the second image parameter.
in a third aspect, an embodiment of the present invention provides a shooting device, which may include: a processor, a photographing module coupled to the processor;
the camera module is used for acquiring a target image;
the processor is configured to:
acquiring a target image, and determining a target face area and a background area in the target image;
determining a first image parameter matched with the face feature information in the target face region from a preset image parameter set based on the face feature information in the target face region, wherein the face feature information comprises one or more of gender information, age information and race information corresponding to a face;
determining a second image parameter according to the image characteristic information of the target image and based on the first image parameter;
adjusting image parameters of the target image based on the second image parameters;
in a possible implementation manner, the shooting device may further include a display;
the display is used for displaying the target image after the image parameters of the target image are adjusted.
In one possible implementation, the processor is further configured to:
acquiring an image set, wherein the image set comprises a plurality of face images, and the face images are obtained by shooting in a preset shooting environment;
determining a plurality of kinds of face feature information corresponding to the plurality of face images in the image set;
determining image parameters respectively matched with the various kinds of face feature information based on the hue parameter, saturation parameter and brightness parameter of each of the plurality of face images in the HSV color space;
and generating the preset image parameter set according to the image parameters respectively matched with the various kinds of face feature information.
In one possible implementation, the processor is further configured to:
determining face color gamuts respectively matched with the various kinds of face feature information based on the red-green component and yellow-blue component of each of the plurality of face images in the LAB color space;
and generating a preset face color gamut set according to the face color gamuts respectively matched with the various kinds of face feature information.
In one possible implementation, the processor is further configured to:
determining a first face color gamut matched with the face feature information in the target face region from the preset face color gamut set based on the face feature information in the target face region;
and determining a second face color gamut according to the image characteristic information of the target image and based on the first face color gamut.
In a possible implementation manner, after the adjusting the image parameter of the target image based on the second image parameter, the processor is further configured to:
determining a third face color gamut of the target face region based on a red-green component and a yellow-blue component of the target face region in a current LAB color space;
and adjusting the third face color gamut based on the second face color gamut.
In one possible implementation manner, the image feature information of the target image includes a luminance contrast of the target face region, and the processor is further configured to:
determining a first brightness statistic value higher than the brightness threshold value in the brightness histogram of the target face area and a second brightness statistic value lower than the brightness threshold value in the brightness histogram of the target face area based on the brightness histogram of the target face area and a preset brightness threshold value;
and determining the brightness contrast of the target face area according to the ratio of the first brightness statistic value to the second brightness statistic value.
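The two-statistic contrast described above can be sketched in a few lines of Python. This is an illustrative example only, not the patented implementation; the threshold value of 128 and the helper name `luminance_contrast` are assumptions.

```python
def luminance_contrast(face_luma, threshold=128):
    """Brightness contrast of a face region: the ratio of the pixel count
    above a preset luminance threshold (the first brightness statistic) to
    the count at or below it (the second brightness statistic). Counting
    pixels directly is equivalent to summing the histogram bins on either
    side of the threshold. Threshold 128 is an assumed value."""
    high = sum(1 for v in face_luma if v > threshold)   # first statistic
    low = sum(1 for v in face_luma if v <= threshold)   # second statistic
    return high / max(low, 1)  # guard against an all-bright region

# Synthetic face patch: 8 bright pixels and 8 dark pixels -> contrast 1.0
patch = [200, 210, 220, 230, 205, 215, 225, 235,
         50, 40, 60, 30, 55, 45, 65, 35]
contrast = luminance_contrast(patch)
```

A mostly bright face region yields a contrast above 1, a mostly dark one below 1, matching the ratio described in the text.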
In one possible implementation, the image feature information of the target image includes an illumination ratio of the target image, and the processor is further configured to:
determining a third luminance statistic value higher than the luminance threshold value in the luminance histogram of the target image based on the luminance histogram of the target image and the preset luminance threshold value;
and determining the illumination ratio of the target image according to the ratio of the third brightness statistic value to the first brightness statistic value.
In one possible implementation, the image feature information of the target image includes a partition luminance statistic of the background area, and the processor is further configured to:
performing semantic segmentation on the background area to determine at least one semantic segmentation area;
and determining a partition brightness statistic value of the background area according to the brightness histogram of each semantic segmentation area in the at least one semantic segmentation area.
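One hedged way to picture the partition statistic: a per-segment mean luminance over a labelled background. The segment label names and the helper below are hypothetical, not taken from the application.

```python
def partition_luminance_stats(luma, labels):
    """Mean luminance per semantic-segmentation region. `luma` and
    `labels` are parallel flat sequences; each label names the segment a
    pixel belongs to (the label names here are illustrative only)."""
    totals, counts = {}, {}
    for v, lab in zip(luma, labels):
        totals[lab] = totals.get(lab, 0) + v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: totals[lab] / counts[lab] for lab in totals}

# Two dark "sky" pixels and two bright "building" pixels
stats = partition_luminance_stats(
    [10, 20, 200, 220],
    ["sky", "sky", "building", "building"])
```

A real pipeline would compute these statistics from per-segment luminance histograms; the mean is used here only to keep the sketch short.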
In one possible implementation, the image feature information of the target image includes: one or more of the brightness contrast of the target face area, the illumination ratio of the target image, the partition brightness statistic value of the background area, the illumination intensity corresponding to the target image and the light source color temperature corresponding to the target image;
the first image parameters comprise a first hue parameter, a first saturation parameter and a first brightness parameter which are matched with the face feature information in the target face region;
the second image parameter includes a second hue parameter, a second saturation parameter, and a second brightness parameter obtained by adjusting one or more of the first hue parameter, the first saturation parameter, and the first brightness parameter based on image feature information of the target image.
In one possible implementation, the processor is specifically configured to:
and adjusting one or more of a hue parameter, a saturation parameter and a brightness parameter of the target image in the HSV color space based on the second image parameter.
In one possible implementation, before the adjusting the image parameter of the target image based on the second image parameter, the processor is further configured to:
adjusting a color balance of the target image based on the second image parameter.
In a fourth aspect, the present application provides an image color processing apparatus having the function of implementing any one of the image color processing methods provided in the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function described above.
In a fifth aspect, the present application provides an image color processing apparatus, including a processor configured to support the corresponding functions in any one of the image color processing methods provided in the first aspect. The image color processing apparatus may further include a memory coupled to the processor, which stores program instructions and data necessary for the image color processing apparatus. The image color processing apparatus may further include a communication interface for communicating with other devices or a communication network.
In a sixth aspect, the present application provides a shooting device, where the shooting device includes a processor configured to support the shooting device to execute corresponding functions in any one of the image color processing methods provided in the first aspect. The camera device may also include a memory, coupled to the processor, that stores program instructions and data necessary for the camera device. The photographing apparatus may further include a communication interface for the photographing apparatus to communicate with other apparatuses or a communication network.
In a seventh aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the image color processing method flow of any one of the above first aspects.
In an eighth aspect, an embodiment of the present invention provides a computer program, where the computer program includes instructions that, when executed by a computer, enable the computer to perform the flow of the image color processing method according to any one of the first aspect.
In a ninth aspect, the present application provides a chip system, which includes a processor, and is configured to implement the functions related to the image color processing method flow in any one of the above first aspects. In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the image color processing method. The chip system may be constituted by a chip, or may include a chip and other discrete devices.
Drawings
FIG. 1 is a schematic diagram of an application scenario of image color processing according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an application scenario of another image color processing provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of an application scenario of another image color processing provided by an embodiment of the present invention;
fig. 4 is a functional block diagram of a photographing apparatus according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating an image color processing method according to an embodiment of the present invention;
FIG. 6 is a flow chart illustrating image color processing in a specific application scenario according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating brightness parameters of an HSV color space according to an embodiment of the present invention;
FIG. 8 is a schematic illustration of a gamut distribution in an LAB color space provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of a color balancing process according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating brightness contrast adjustment according to an embodiment of the present invention;
fig. 11 is a schematic diagram of a color gamut adjustment provided by an embodiment of the present invention;
FIG. 12 is a flowchart illustrating the overall steps of an image color process according to an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of an image color processing apparatus according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of a shooting device according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described below with reference to the drawings.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, product, or device that comprises a list of steps or elements is not limited to only those steps or elements listed, but may optionally include other steps or elements not listed, or steps or elements inherent to such a process, method, system, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein can be combined with other embodiments.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity: hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a processor and the processor itself can be components. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer-readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes, such as according to a signal having one or more data packets (e.g., data from one component interacting with another component in a local system or distributed system, and/or interacting with other systems across a network such as the Internet by means of the signal).
First, some terms in the present application are explained to facilitate understanding by those skilled in the art.
(1) Hue Saturation Value (HSV) is a color space created according to the intuitive characteristics of color, also called the hexagonal pyramid model (Hexcone Model). The HSV color space comprises the following parameters: the hue parameter (H), the saturation parameter (S) and the brightness parameter (V). The hue parameter H is measured as an angle in the range 0-360 degrees, with red, green and blue separated from each other by 120 degrees and complementary colors differing by 180 degrees. The saturation parameter S represents the purity of the color, with a value range of 0.0-1.0; when S = 0, only gray levels exist. The brightness parameter V represents the brightness of the color, with a value range of 0.0 (black) to 1.0 (white). The Red Green Blue (RGB) and Cyan Magenta Yellow (CMY) color models are hardware-oriented, while the HSV color model is user-oriented.
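As a quick sanity check of these conventions, Python's standard `colorsys` module converts RGB to HSV. Note that it returns H as a fraction of a full turn, so it must be multiplied by 360 to obtain the degree convention described above.

```python
import colorsys

# Pure red: hue 0 degrees, full saturation, full brightness
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
red_hue_deg = h * 360

# Pure green sits 120 degrees away from red, as described above
g_h, g_s, g_v = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)
green_hue_deg = g_h * 360
```

`colorsys` operates on floats in [0, 1] for all components, so 8-bit channel values must be divided by 255 first.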
(2) The LAB color model is a color space created based on human perception of color; the numerical values in LAB describe all the colors that a person with normal vision can see. Because LAB describes how a color looks, rather than the amount of a particular colorant a device (e.g., a display, desktop printer, or digital camera) needs to produce it, LAB is considered a device-independent color model. The LAB color space consists of three components: the lightness L, the red-green component a, and the yellow-blue component b. L represents lightness; the red-green component a spans the range from red to green, and the yellow-blue component b spans the range from yellow to blue. L ranges from 0 to 100, where L = 50 corresponds to a mid-gray (50% black); a and b range from +127 to -128, where +127a is red, gradually transitioning toward -128a, which is green; similarly, +127b is yellow and -128b is blue. All colors are composed by varying these three values. For example, a color block with an LAB value of L = 100, a = 30, b = 0 is pink. (Note: the colors along the a-axis and b-axis in the LAB color space differ from RGB: its magenta is more reddish, its green more cyan, its yellow slightly reddish, and its blue somewhat cyan.)
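To make the L, a, b ranges concrete, here is a rough self-contained sRGB-to-CIELAB conversion using the D65 white point. This is a sketch for illustration; a production system would use a tested color library, and the matrix coefficients below are the commonly published sRGB values, rounded.

```python
def srgb_to_lab(r, g, b):
    """Convert sRGB components in [0, 1] to approximate CIELAB (D65).
    Simplified, rounded-coefficient sketch; not the patent's method."""
    def linearize(c):
        # Undo the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linearize(r), linearize(g), linearize(b)
    # Linear sRGB -> CIE XYZ (rounded standard matrix)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.9505, 1.0, 1.0890  # D65 reference white (matrix row sums)
    def f(t):
        # CIELAB nonlinearity with its linear toe near zero
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# Reference white maps to approximately L = 100, a = 0, b = 0
white = srgb_to_lab(1.0, 1.0, 1.0)
```

Black maps to L = 0 with a = b = 0, matching the L range of 0 to 100 described above.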
(3) The red, green and blue (RGB) color scheme is a color standard in the industry, which obtains various colors by changing three color channels of red (R), green (G) and blue (B) and superimposing them with each other, where RGB represents the colors of the three channels of red, green and blue, and the standard almost includes all colors that can be perceived by human vision, and is one of the most widely used color systems.
(4) Histograms are a color feature widely used in many image retrieval systems. A color histogram describes the proportions of different colors in the whole image without regard to the spatial location of each color, i.e., it does not describe the objects in the image. Color histograms are particularly suitable for describing images that are difficult to segment automatically. A color histogram may be based on different color spaces and coordinate systems; the more common ones are the RGB, HSV, LUV and LAB color spaces, of which the HSV color space is the most commonly used for histograms.
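A position-independent color histogram of the kind described above can be sketched as a coarse hue histogram in HSV space. The bin count of 12 and the helper name are assumptions for illustration.

```python
import colorsys

def hsv_hue_histogram(rgb_pixels, bins=12):
    """Coarse hue histogram over an iterable of (r, g, b) tuples in
    [0, 1], ignoring pixel positions as the text describes. Hue is
    rounded to whole degrees before binning to avoid float boundary
    errors at exact bin edges like 120 degrees."""
    hist = [0] * bins
    for r, g, b in rgb_pixels:
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)
        deg = round(h * 360)
        hist[(deg * bins // 360) % bins] += 1
    return hist

pixels = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
hist = hsv_hue_histogram(pixels)
# two reds fall in bin 0 (0 deg), green in bin 4 (120 deg), blue in bin 8 (240 deg)
```

Because only counts per color are kept, two images with the same colors in different arrangements produce identical histograms, which is exactly the spatial-location blindness noted above.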
First, in order to facilitate understanding of the embodiments of the present invention, the technical problems to be specifically solved by the present application are further analyzed and presented. In the prior art, image processing technology includes various technical solutions; the following two general solutions are taken as examples.
the first scheme is as follows: automatic Exposure (AE) technique and Automatic White Balance (AWB) technique. The technology is a common self-contained function in camera or mobile phone shooting software, and can perform color statistics on a face region in an original image acquired by a camera according to a face detection result, wherein the color statistics includes statistics of histograms of various color channels (for example, Red (R) channels, Green (G) channels, Blue (B) channels and Yellow (Y) channels) in the face region. Then, according to the color statistical result and a preset target value, the exposure parameters are controlled, so that the face area obtains proper exposure. And meanwhile, compensating the color shift of the face area on the basis of the integral white balance of the original image according to the color statistical result and a preset target value. Therefore, the brightness and the color of the face area in the image finally projected by the camera or the mobile phone are close to the preset target.
The first scheme has the following disadvantages: it focuses on bringing the brightness and color of the face region close to the preset target values while rarely considering the background environment, so the face and the background picture easily become inconsistent. Meanwhile, due to a series of other subsequent processing steps, the brightness and color of the face region in the finally obtained image may differ considerably from the preset target values.
The second scheme: the beautification technique, a post-processing technique applied to the straight-out image obtained by shooting. This technique usually achieves micro-reshaping of the human face through a series of functions such as whitening, skin smoothing and face slimming, so as to beautify multiple aspects of the face region.
The second scheme has the following defects: it usually only beautifies the face or human-body region and does not consider the rest of the picture, so the background often becomes inconsistent with the processed face or body. For example, when the whole picture has a blue color cast, beautification can adjust the face region into the normal skin-color range, but the background remains bluish and color-shifted, so the overall picture is inharmonious and looks poor. Moreover, face processing such as face slimming can distort the background picture and reduce image quality.
In summary, neither of the above two schemes enables an existing general-purpose shooting device, such as a camera or mobile phone, to process the raw image data it acquires in real time so that the face in its real-time straight-out image meets the user's aesthetic requirements while the whole picture remains coordinated with the face. Therefore, to solve the problem that current image color processing technology does not meet actual business requirements, the technical problems actually to be solved by the present application include the following: based on the raw image data acquired by an existing shooting device such as a camera or mobile phone, beautify the face region in the raw image data reasonably, efficiently and accurately, coordinate the brightness, color and the like of the face region with the whole picture, and improve the quality of the shooting device's real-time straight-out image during shooting.
To facilitate understanding of the embodiments of the present invention, the following exemplifies scenarios of an image color processing system to which the image color processing method of the present application is applicable, including the following three scenarios.
In a first scene, a shooting device performs real-time image color processing on acquired original image data:
referring to fig. 1, fig. 1 is a schematic view of an application scenario of image color processing according to an embodiment of the present invention, where the application scenario includes a shooting device (in fig. 1, a camera is taken as an example). And the photographing apparatus may include an associated photographing module, a display, a processor, and the like. The shooting module, the display and the processor can perform data transmission through a system bus. The shooting module can convert the captured light source signal into a digital signal to finish the acquisition of original image data, and then the acquired original image data is transmitted to the processor through the system bus; the processor beautifies and adjusts the face area in the original image data by using the image color processing method in the application according to the acquired original image data (target image), and coordinates the whole picture. Furthermore, the processor can also transmit the adjusted original image data to the display, and control the display to display the straight image obtained by the shooting equipment at this time according to the adjusted original image data, so that the user can intuitively and timely master the shooting effect of the shooting equipment after image color processing at this time according to the straight image. Further, the processor may also save raw image data and a straight-out image, etc., which are taken each time.
For example, when the shooting environment is a night environment and the subject is a young European woman, the brightness of the face region and the background region in the raw image data collected by the shooting module may be low, as shown in fig. 1, and the resulting image may be dark and unclear. The processor may determine, from the raw image data collected by the shooting module, a standard skin color matching the subject (for example, the fair skin tone typical of a young European or American woman), adjust the standard skin color according to the information in the raw image data corresponding to the shooting environment (for example, in a night environment the brightness of the background region is low, so the brightness value of the standard skin color may be appropriately lowered), and adjust the image parameters of the raw image data according to the adjusted standard skin color. The face skin color in the finally obtained straight-out image is thus close to the adjusted standard skin color, and the whole picture of the straight-out image is coordinated with the face skin color (for example, if the brightness value of the adjusted standard skin color is higher than that of the original night environment, the brightness value of the whole picture is appropriately increased with it as the reference, so that the face skin color is coordinated with the whole picture). As described above, the shooting device may be a camera, smart phone, tablet computer, or the like having the image capturing, image processing and image displaying functions, which is not specifically limited in this application.
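The night-scene adjustment described in this scenario might be sketched as follows. The 0.85 damping factor and the 0.25 darkness threshold are invented for illustration and are not values from this application.

```python
def adapt_standard_skin(std_hsv, scene_luma_mean, dark_threshold=0.25):
    """Illustrative sketch of the adjustment described above: in a dark
    scene (low mean background luminance, all values in [0, 1]), lower
    the V (brightness) component of the matched standard skin color so
    the face stays consistent with the scene. The 0.85 factor and the
    0.25 threshold are assumptions."""
    h, s, v = std_hsv
    if scene_luma_mean < dark_threshold:
        v *= 0.85  # dim the standard skin color for night scenes
    return h, s, min(v, 1.0)

# A warm, fairly bright standard skin tone in a dark night scene
skin = adapt_standard_skin((0.05, 0.4, 0.9), scene_luma_mean=0.1)
```

In a bright scene the standard skin color passes through unchanged; only the darkness branch fires, mirroring the "appropriately lower the brightness value" step in the text.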
And in a second scene, the shooting equipment is connected with the computing equipment, and the original image data collected by the shooting equipment and sent to the computing equipment is subjected to real-time image color processing through the computing equipment:
referring to fig. 2, fig. 2 is a schematic view of another application scenario of image color processing according to an embodiment of the present invention, where the application scenario includes a shooting device (for example, a camera in fig. 2) and a computing device (for example, a notebook computer in fig. 2). The camera device and the computing device can perform data transmission through a wireless communication mode such as Bluetooth, Wi-Fi or a mobile network or a wired communication mode such as a data line. The shooting module in the shooting device can convert the captured light source signal into a digital signal to complete the acquisition of original image data, and then the shooting device can send the acquired original image data to the computing device in the wireless/wired communication mode. According to the obtained original image data, the computer equipment beautifies and adjusts the face area in the center by using the image color processing method in the application, and coordinates the whole picture. Further, the computing device can also store the image obtained after the original image data is adjusted to the local computing device and send the image to the shooting device. Further, the shooting device may also select to send the shot unprocessed straight-out images (for example, the straight-out images with poor shooting effect) to the computing device, and the computing device may process the straight-out images by using the image color processing method in the present application to obtain the processed images.
For example, when the shooting environment is a night environment and the subject is a young European woman, the brightness of the face region and the background region in the raw image data collected by the shooting module may be low, as shown in fig. 2, and the resulting image may be dark and unclear. During real-time shooting, the shooting device can send the raw image data collected in real time by the shooting module to the computing device. The computing device determines, from the received raw image data, a standard skin color matching the subject (for example, the fair skin tone typical of a young European or American woman), adjusts the standard skin color according to the information in the raw image data corresponding to the shooting environment (for example, in a night environment the brightness of the background region is low, so the brightness value of the standard skin color may be appropriately lowered), and adjusts the image parameters of the raw image data according to the adjusted standard skin color. The face skin color in the finally obtained image is thus close to the adjusted standard skin color, and the whole picture is coordinated with the face skin color (for example, if the brightness value of the adjusted standard skin color is higher than that of the original night environment, the brightness value of the whole picture is appropriately increased with it as the reference, so that the face skin color is coordinated with the whole picture).
As described above, the shooting device may be a camera, a smart phone, a tablet computer, or the like having the above functions; the computing device may be a smart phone, a smart wearable device, a tablet computer, a notebook computer, a desktop computer, and the like, which have the above functions, and this is not particularly limited in this application.
And a third scene, uploading the image to be processed through the image color processing client, and obtaining the image after image color processing:
referring to fig. 3, fig. 3 is a schematic view of an application scenario of image color processing according to another embodiment of the present invention, where the application scenario includes a shooting device (for example, a smart phone in fig. 3) and a server. The shooting device and the server can be in communication connection in a wired/wireless communication mode. As shown in fig. 3, since the images captured by the smart phone have common effects, there are often cases of overexposure and lack of exposure. The image to be processed with poor shooting effect can be selected on the shooting device through the image color processing client, and uploaded to the server. For example, two to-be-processed pictures as shown in fig. 3, namely, a to-be-processed image 1 with a lack of exposure obtained by shooting in a dark environment and an to-be-processed image 2 with an excessive exposure obtained by shooting in an outdoor environment with strong sunlight can be selected and uploaded simultaneously by the image color processing client. After the server receives the two images, the face area in the images can be beautified and adjusted by using the image color processing method in the application, the whole picture is coordinated, the processed images are obtained and sent to the shooting equipment, and a user can view and store the processed images on the shooting equipment through the image color processing client. The shooting device can operate the image color processing client, and the terminal device can be a camera, a smart phone, an intelligent wearable device, a tablet computer and the like with the functions. The server may provide background services for the shooting device, and the server may be one server, a server cluster composed of multiple servers, or a cloud computing service center, which is not specifically limited in this application.
It is understood that the application scenarios and system architectures in fig. 1, fig. 2 and fig. 3 are only some exemplary implementations of the embodiments of the present invention; the application scenarios of the embodiments of the present invention include, but are not limited to, the above. The image color processing method of the present application can beautify face scenes and can also be specifically optimized for non-face scenes such as food, plants, animals and buildings to obtain images that match the user's aesthetics and preferences; other scenarios and examples are not enumerated here.
Referring to fig. 4, fig. 4 is a functional block diagram of an interior of a shooting device according to an embodiment of the present invention. Alternatively, in one embodiment, the photographing apparatus 100 may be configured to fully or partially automatically photograph the image. For example, the photographing apparatus 100 may be in a timed continuous automatic photographing mode, or an automatic photographing mode in which photographing is performed when a human face is detected within a photographing range according to a computer instruction, or the like. When the photographing apparatus 100 is in the automatic photographing mode, the photographing apparatus 100 may be set to operate without interaction with a person.
The embodiment will be specifically described below taking the photographing apparatus 100 as an example. It should be understood that the photographing apparatus 100 may have more or less components than those shown in the drawings, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The photographing apparatus 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated configuration of the embodiment of the present invention does not constitute a specific limitation to the photographing apparatus 100. In other embodiments of the present application, the capture device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may serve as the neural center and command center of the photographing apparatus 100. The controller can generate an operation control signal based on an instruction operation code and a timing signal, to control instruction fetching and instruction execution.
A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, and/or the like.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and does not limit the structure of the shooting device 100. In other embodiments of the present application, the shooting device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the photographing apparatus 100 may be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The photographing apparatus 100 implements a display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the capture device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The photographing apparatus 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process data fed back by the camera 193. For example, when a photo is taken, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP, which processes it and converts it into an image visible to the naked eye. The ISP can also algorithmically optimize the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, and the optical image is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In the embodiment of the present invention, the camera 193 is used to acquire the target image. The camera 193 may be located on the front side of the shooting device, for example, above the touch screen, or at another position, for example, on the back side of the shooting device. In addition, the camera 193 may further include a camera for capturing images required for face recognition, such as an infrared camera. A camera for capturing images required for face recognition is generally located on the front side of the shooting device, for example, above the touch screen, but may also be located at another position, for example, on the back side of the shooting device. In some embodiments, the photographing apparatus 100 may include other cameras. The photographing apparatus may further include a dot matrix emitter (not shown) for emitting light. The camera collects the light reflected by the human face to obtain a face image, and the processor processes and analyzes the face image and compares it with stored face image information for verification.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the photographing apparatus 100 selects a frequency bin, the digital signal processor is configured to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The photographing apparatus 100 may support one or more video codecs. In this way, the shooting device 100 can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between human-brain neurons, it processes input information quickly and can also continuously self-learn. Applications such as intelligent recognition of the photographing apparatus 100 can be implemented through the NPU, for example: image recognition, face recognition, speech recognition, and text understanding.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the photographing apparatus 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the photographing apparatus 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, applications required by at least one function (such as a face recognition function, a photographing function, an image processing function, and the like), and the like. The storage data area may store data created during use of the photographing apparatus 100 (such as face feature information data, image parameter set data, face color gamut set data, and the like), and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The photographing apparatus 100 may implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor, etc. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal.
The microphone 170C, also referred to as a "mic" or "mike", is used to convert a sound signal into an electrical signal.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be an Open Mobile Terminal Platform (OMTP) standard interface of 3.5mm, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like.
The gyro sensor 180B may be used to determine the motion attitude of the photographing apparatus 100. In some embodiments, the angular velocity of the photographing apparatus 100 about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 180B.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode.
The ambient light sensor 180L is used to sense the ambient light level. The photographing apparatus 100 may adaptively adjust the brightness of the display screen 194 according to the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture.
The fingerprint sensor 180H is used to collect a fingerprint. The photographing apparatus 100 can perform fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint call answering, and the like using the collected fingerprint characteristics. The fingerprint sensor 180H may be disposed below the touch screen, the shooting device 100 may receive a touch operation of a user on the touch screen in an area corresponding to the fingerprint sensor, and the shooting device 100 may collect fingerprint information of a finger of the user in response to the touch operation, so as to implement a related function.
The temperature sensor 180J is used to detect temperature. In some embodiments, the photographing apparatus 100 performs a temperature processing strategy using the temperature detected by the temperature sensor 180J.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the photographing apparatus 100, different from the position of the display screen 194.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The photographing apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the photographing apparatus 100.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the photographing apparatus 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. In some embodiments, the photographing apparatus 100 employs eSIM, that is: an embedded SIM card. The eSIM card may be embedded in the photographing apparatus 100 and may not be separated from the photographing apparatus 100.
The photographing apparatus 100 may be a camera, a smart phone, a smart wearable apparatus, a tablet computer, a laptop computer, and the like, which have the above functions of photographing, image processing, and the like, and the embodiment of the present invention is not particularly limited thereto.
Referring to fig. 5, fig. 5 is a flowchart illustrating an image color processing method according to an embodiment of the present invention, which can be applied to the application scenarios and system architectures described in fig. 1, fig. 2, or fig. 3, and can be specifically applied to the shooting device 100 of fig. 4. The following description will be given taking the processor 110 whose execution subject is inside the photographing apparatus 100 in fig. 4 as described above as an example with reference to fig. 5. The method may comprise the following steps S501-S504.
Step S501: acquiring a target image, and determining a target face area and a background area in the target image.
Specifically, the processor 110 inside the photographing apparatus 100 acquires raw image data collected by the camera 193 (or referred to as a photographing module, or the like) of the photographing apparatus 100, and acquires a target image therefrom. Optionally, the target image in the embodiment of the present invention may be an original image in RAW format in 64 × 64RGB color mode.
For example, as shown in fig. 6, fig. 6 is a schematic diagram of an image color processing flow in a specific application scenario according to an embodiment of the present invention. The target face region and the background region in the target image are shown in fig. 6; for example, the target face region may be the region within the white dashed frame in fig. 6. The target face region mainly includes the face skin-color region, and may also include a neck region where skin is exposed. Optionally, the embodiment of the present invention may determine the target face region through face key points (for example, eyes, nose, and lips), and the background region may be the region of the target image other than the face region.
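Purely as an illustrative sketch (the landmark format and margin factor are assumptions, not part of this embodiment), a rectangular target face region could be derived from the face key points and expanded so that exposed neck skin falls inside it:

```python
def face_region_from_keypoints(keypoints, margin=0.3):
    """Bounding box over face key points, expanded by `margin` (assumed value)
    on each side so that neck skin near the face is included.

    keypoints: list of (x, y) landmark coordinates (eyes, nose, lips, ...).
    Returns (left, top, right, bottom).
    """
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)
```

A real implementation would also clamp the box to the image bounds and could use a skin-color mask rather than a rectangle.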
Step S502: and determining a first image parameter matched with the face feature information in the face region from a preset image parameter set based on the face feature information in the target face region.
Specifically, the processor 110 performs face feature recognition on a face region in the target image to obtain face feature information in the target face region, where the face feature information may include gender information, age information, race information, and the like corresponding to a face. Next, the processor 110 determines, from a preset image parameter set, a first image parameter matching the face feature information in the face region based on the face feature information in the target face region. The image parameter set may include image parameters matched with various types of face feature information, and optionally, the image parameters may include hue parameters, saturation parameters, brightness parameters, and the like of an HSV color space. Alternatively, the preset image parameter set may be stored in the shooting device, and the processor 110 may directly obtain the preset image parameter set, or the preset image parameter set may be stored in a server, and the shooting device 100 may obtain the image parameter set from the server in advance before shooting starts through a network connection.
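As a hedged illustration of this lookup (the table keys, field names, and numeric values below are hypothetical assumptions, not values disclosed in this embodiment), the preset image parameter set could be modeled as a mapping from face feature information to HSV parameters, with an "average" fallback for when recognition fails:

```python
# Hypothetical preset image parameter set: keyed by
# (gender, race, age group) -> hue/saturation/brightness parameters (HSV).
IMAGE_PARAMETER_SET = {
    ("female", "caucasian", "21-30"): {"hue": 18.0, "saturation": 0.35, "brightness": 0.72},
    ("male", "caucasian", "21-30"): {"hue": 16.0, "saturation": 0.30, "brightness": 0.65},
    ("average", "average", "average"): {"hue": 17.0, "saturation": 0.32, "brightness": 0.68},
}

def first_image_parameter(gender, race, age_group):
    """Return the image parameter matching the face feature information,
    falling back to the 'average' entry when recognition fails."""
    key = (gender or "average", race or "average", age_group or "average")
    return IMAGE_PARAMETER_SET.get(
        key, IMAGE_PARAMETER_SET[("average", "average", "average")]
    )
```

A real parameter set would contain many more entries, and each entry could store value ranges or curves rather than single scalars.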
For example, as shown in fig. 6, the shooting object is a young white woman, and the face feature information in the target face region recognized by the processor 110 may include a woman, a caucasian person, and 21-30 years old, then as shown in fig. 6, the image parameter matching the face feature information of the woman, the caucasian person, and 21-30 years old in the image parameter set is an image parameter 1, and the image parameter 1 may be the first image parameter in the embodiment of the present invention.
Optionally, the embodiment of the present invention may further determine, based on the face feature information in the target face region, a first face color gamut matched with the face feature information in the target face region from a preset face color gamut set. The face color gamut set may include a face color gamut matched with various types of face feature information, and optionally, the face color gamut may include a red-green component and a yellow-blue component of an LAB color space, and the like. Alternatively, the face color gamut set may be stored inside the shooting device, the processor 110 may directly obtain the face color gamut set, or the face color gamut set may be stored in a server, and the shooting device 100 may obtain the face color gamut set from the server in advance before shooting starts through a network connection.
Optionally, in the embodiment of the present invention, how to generate the image parameter set and the face color gamut set may specifically include the following steps S11 to S14:
step S11: the method comprises the steps of obtaining an image set, wherein the image set comprises a plurality of face images, and the face images are obtained by shooting in a preset shooting environment.
Specifically, shooting objects of different genders, ages, and races are photographed with the photographing apparatus 100, or with other shooting tools such as cameras and smartphones, in a preset shooting environment, so as to obtain multiple face images. Optionally, the preset shooting environment may include ideal shooting conditions such as a solid-color background cloth (for example, the white, blue, or red background cloth commonly used for ID photos), an appropriately bright lighting source, and an appropriate shooting angle (for example, facing the subject directly). Optionally, all or some of the face images in the image set may also be downloaded, through a network connection or the like, from a cloud server or another device storing qualified face images. It should be noted that the shooting environments of the individual face images may differ somewhat, which is not specifically limited in the embodiment of the present invention.
Step S12: determining a plurality of kinds of face feature information corresponding to the plurality of face images in the image set;
Specifically, after the image set composed of multiple face images is obtained through shooting, the face feature information corresponding to each face image is determined, so that the multiple kinds of face feature information corresponding to the multiple face images are determined. It can be understood that when the number of images is huge, multiple face images may correspond to the same kind of face feature information. For example, face images of a 21-year-old European woman, a 23-year-old American woman, and a 27-year-old American woman may all correspond to the face feature information: female, Caucasian, aged 21-30. The granularity of each item of face feature information is not specifically limited in the embodiment of the present invention. For example, the gender information may include male and female, and may further include an average value, which may be used to determine the first image parameter when the processor 110 cannot identify the gender corresponding to the face. The age information may be simply divided into children, young people, middle-aged people, and old people, or more finely divided into 12 years old and below, 13-18 years old, 19-24 years old, 25-30 years old, 31-40 years old, and so on; details are not described here.
Step S13: determining image parameters respectively matched with the various human face characteristic information based on hue parameters, saturation parameters and brightness parameters of each human face image in the multiple human face images in an HSV color space; and determining the face color gamut respectively matched with the various face feature information based on the red-green component and the yellow-blue component of each face image in the plurality of face images in an LAB color space.
Specifically, a hue parameter, a saturation parameter and a brightness parameter of each face image in an HSV color space are calculated to obtain an image parameter corresponding to each image, then image parameters matched with face feature information respectively corresponding to each image are obtained, and image parameters respectively matched with various kinds of face feature information are determined. As described above, since there may be a case where a plurality of face images correspond to the same face feature information, one or more face feature information of a plurality of kinds of face feature information may be matched with a plurality of kinds of image parameters. Specifically, red and green components and yellow and blue components of each face image in an LAB color space are calculated to obtain a face color gamut corresponding to each face image, then face color gamuts matched with face feature information respectively corresponding to each image are obtained, and the face color gamuts respectively matched with various kinds of face feature information are determined. As described above, since there may be a case where a plurality of face images correspond to the same face feature information, one or more kinds of face feature information in the plurality of kinds of face feature information may be matched with a plurality of kinds of face color gamuts.
Optionally, the hue parameter, the saturation parameter, and the brightness parameter in the image parameters may be calculated from the statistics of the hue histogram, saturation histogram, and brightness histogram of the face image in the HSV color space. For example, as shown in fig. 7, fig. 7 is a schematic diagram of luminance parameters of an HSV color space according to an embodiment of the present invention. The data in fig. 7 includes an average brightness value calculated from the brightness histogram and a value range formed by upper and lower limits of the brightness value; optionally, the brightness parameter may be one or more such value ranges. Optionally, the upper and lower limits may also represent the standard deviation of the brightness values. In addition, when a color histogram is computed, the color space is often divided into several small color intervals, each of which becomes one bin of the histogram, as shown on the horizontal axis in fig. 7. This process is called color quantization. The color histogram is then obtained by counting the number of pixels whose color falls within each interval. There are many methods for color quantization, such as vector quantization, clustering, or neural-network methods. The most common is to divide each component of the color space uniformly (for example, the hue, saturation, and brightness components of the HSV color space). Alternatively, if the image is in RGB color mode and the histogram is in HSV color space, a look-up table from the quantized RGB space to the quantized HSV space can be established in advance to speed up the histogram calculation.
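As a minimal sketch of these statistics, assuming 8-bit brightness values uniformly quantized into 16 bins, and the mean plus/minus the standard deviation as the upper/lower limits (both the bin count and the limit formula are illustrative assumptions):

```python
import statistics

def brightness_histogram(values, bins=16, max_value=256):
    """Uniform color quantization of brightness values into equal-width bins."""
    width = max_value // bins
    hist = [0] * bins
    for v in values:
        hist[min(v // width, bins - 1)] += 1
    return hist

def brightness_parameter(values):
    """Mean brightness plus a value range derived from the standard deviation,
    mirroring the average value and upper/lower limits shown in fig. 7."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return {"mean": mean, "lower": mean - sd, "upper": mean + sd}
```

The hue and saturation parameters could be derived in the same way from their respective histograms.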
Alternatively, the color gamut of the face can be determined by calculation according to the distribution of red, green and yellow and blue components of the face image in the LAB color space. For example, as shown in fig. 8, fig. 8 is a schematic diagram of a color gamut distribution of an LAB color space provided by an embodiment of the present invention. The horizontal axis (a) in fig. 8 represents a red-green component, red to green from right to left, and the vertical axis (b) in fig. 8 represents a yellow-blue component, yellow to blue from top to bottom. As shown in fig. 8, the regions respectively encircled by the oval closed lines are different face color gamuts matched with different face feature information calculated and determined according to different face images, and are respectively a face color gamut 1, a face color gamut 2, a face color gamut 3 and a face color gamut 4.
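A face color gamut such as those in fig. 8 could, under simplifying assumptions (an axis-aligned region centered on the mean with the standard deviation as its radius; this shape is not specified by the embodiment), be estimated from sampled face pixels as follows:

```python
import statistics

def face_color_gamut(ab_pixels):
    """Approximate a face color gamut from (a, b) LAB components sampled
    from a face region: a center plus a radius per axis.

    a = red-green component, b = yellow-blue component.
    """
    a_vals = [a for a, _ in ab_pixels]
    b_vals = [b for _, b in ab_pixels]
    return {
        "a_center": statistics.fmean(a_vals),
        "b_center": statistics.fmean(b_vals),
        "a_radius": statistics.pstdev(a_vals),
        "b_radius": statistics.pstdev(b_vals),
    }
```

The elliptical regions of fig. 8 suggest a covariance-based fit would be closer to the embodiment; the axis-aligned version above is only the simplest sketch.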
Step S14: generating the preset image parameter set according to the image parameters respectively matched with the various human face feature information; and generating a preset human face color gamut set according to the human face color gamuts respectively matched with the various human face characteristic information.
Specifically, as described above, since multiple face images may correspond to the same face feature information, one or more kinds of face feature information may be matched with multiple image parameters and multiple face color gamuts. From the image parameters and face color gamuts matched with each kind of face feature information, the optimal image parameter and optimal face color gamut for each kind of face feature information can be determined through big-data statistical analysis or questionnaires. For example, women of the yellow race aged 21-30 generally prefer a naturally fair, slightly rosy complexion, so among the image parameters and face color gamuts matched with this face feature information (female, yellow race, aged 21-30), those that fit this complexion preference can be retained, and the remaining parameters and color gamuts of low reference value can be removed. The preset image parameter set and the preset face color gamut set are then generated from the image parameter and face color gamut matched with each kind of face feature information.
Step S503: and determining a second image parameter according to the image characteristic information of the target image and based on the first image parameter.
Specifically, the processor 110 adjusts parameters in the first image parameters based on the first image parameters according to the image feature information of the target image, so as to obtain second image parameters. Optionally, the image feature information of the target image may include a luminance contrast of the target face region, a lighting ratio of the target image, a partition luminance statistic of the background region, a lighting intensity corresponding to the target image, a light source color temperature corresponding to the target image, and the like. The first image parameter may include a first hue parameter, a first saturation parameter, and a first brightness parameter that are matched with the face feature information in the target face region. Optionally, the second image parameter may include a second hue parameter, a second saturation parameter, and a second brightness parameter obtained by adjusting one or more of the first hue parameter, the first saturation parameter, and the first brightness parameter in the first image parameter based on the image feature information of the target image. Optionally, the processor 110 may further adjust parameters (e.g., red, green, yellow, and blue components, etc.) in the first face color gamut based on the first face color gamut according to the image feature information of the target image, so as to obtain a second face color gamut.
Optionally, in this embodiment of the present invention, the brightness contrast may be obtained as follows: based on the brightness histogram of the target face region and a preset brightness threshold, determine a first brightness statistic that is higher than the brightness threshold in the histogram and a second brightness statistic that is lower than the brightness threshold in the histogram; then determine the brightness contrast of the target face region from the ratio of the first brightness statistic to the second brightness statistic. Optionally, the brightness threshold may be a single threshold, or two different thresholds, for example brightness threshold 1 and brightness threshold 2, where brightness threshold 1 may be greater than or equal to brightness threshold 2. In that case, the first brightness statistic above brightness threshold 1 and the second brightness statistic below brightness threshold 2 are determined based on the brightness histogram of the target face region and the two thresholds. This excludes statistics of low reference value (for example, brightness values distributed in the middle of the histogram) from the calculation. Finally, the ratio of the first brightness statistic to the second brightness statistic is calculated to obtain the brightness contrast of the target face region.
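The two-threshold variant described above can be sketched as follows, assuming the brightness histogram is represented as an array of pixel counts indexed by brightness level (an assumed representation):

```python
def brightness_contrast(histogram, threshold_high, threshold_low):
    """Ratio of histogram mass above threshold_high (first statistic) to
    histogram mass below threshold_low (second statistic).

    With threshold_high >= threshold_low, mid-range brightness values of
    low reference value are excluded from both statistics.
    """
    first_stat = sum(c for level, c in enumerate(histogram) if level > threshold_high)
    second_stat = sum(c for level, c in enumerate(histogram) if level < threshold_low)
    if second_stat == 0:
        return float("inf")
    return first_stat / second_stat
```

Using a single threshold is the special case `threshold_high == threshold_low`.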
Optionally, the first brightness parameter in the first image parameter may be adjusted according to the brightness contrast of the target face region, for example, the contrast of the brightness curve of the HSV color space corresponding to the first image parameter is adjusted by using the standard deviation of each data point of the brightness value of the HSV color space corresponding to the first image parameter as a reference standard.
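A minimal sketch of such a contrast adjustment, assuming a simple linear gain about the mean with the standard deviation as the reference scale (the gain formula is an illustrative assumption, not the embodiment's actual curve adjustment):

```python
def adjust_contrast(values, mean, gain):
    """Scale the deviation of each brightness value from the mean by `gain`:
    gain > 1 increases contrast, gain < 1 decreases it."""
    return [mean + gain * (v - mean) for v in values]
```

In practice the result would also be clipped to the valid brightness range of the color space.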
Optionally, in this embodiment of the present invention, the method for obtaining the illumination ratio of the target image (also referred to as the illumination ratio between the face region and the background region) may include: determining, based on the brightness histogram of the target image and a preset brightness threshold, a third brightness statistic value higher than the brightness threshold in the brightness histogram of the target image; and determining the illumination ratio of the target image according to the ratio of the third brightness statistic value to the first brightness statistic value. Optionally, the preset brightness threshold may be one brightness threshold or two different brightness thresholds, and it may be equal or unequal to the brightness threshold used above for obtaining the brightness contrast; this is not specifically limited in the embodiment of the present invention. Optionally, a larger ratio of the third brightness statistic value to the first brightness statistic value, that is, a larger illumination ratio, may indicate stronger backlighting of the human face, and the first image parameter may be adjusted according to the illumination ratio. For example, the brightness and contrast of the brightness value curve corresponding to the first image parameter may be adjusted, and the saturation curve corresponding to the first image parameter may also be adjusted.
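Under the same assumptions as before (pixel counts as the statistic values, an illustrative threshold), the illumination ratio might be sketched as:

```python
import numpy as np

def illumination_ratio(image_luma, face_luma, thresh=170):
    """Ratio of the third brightness statistic (bright pixels in the
    whole target image) to the first (bright pixels in the face region).
    A large ratio suggests a bright background behind a comparatively
    dark face, i.e. stronger backlighting."""
    img_hist, _ = np.histogram(image_luma, bins=256, range=(0, 256))
    face_hist, _ = np.histogram(face_luma, bins=256, range=(0, 256))
    third_stat = img_hist[thresh + 1:].sum()
    first_stat = face_hist[thresh + 1:].sum()
    if first_stat == 0:
        return float("inf")
    return third_stat / first_stat

# 500 bright pixels in the whole image, only 50 of them in the face:
image = np.concatenate([np.full(500, 220), np.full(500, 60)])
face = np.concatenate([np.full(50, 220), np.full(150, 80)])
print(illumination_ratio(image, face))  # → 10.0
```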
Optionally, in this embodiment of the present invention, the method for obtaining the partition luminance statistic of the background area may include: performing semantic segmentation on the background area to determine at least one semantic segmentation area; and determining a partition brightness statistic value of the background area according to the brightness histogram of each semantic segmentation area in the at least one semantic segmentation area. For example, as shown in fig. 6, the background area of the target image may be semantically segmented according to area information such as a building, a uniform background, and a road facility, so as to obtain three semantic segmented areas, namely a building area, a uniform background area, and a road facility area. Optionally, the semantic segmentation may further segment the background region into one or more semantic segmentation regions according to information such as outdoor sky, outdoor grassland greenery, and a relationship between background body color and skin color.
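A minimal sketch of the per-segment statistics, assuming the semantic segmentation is given as an integer label map and the partition statistic is the histogram-weighted mean brightness (both the statistic choice and the label meanings, e.g. 0 = building, 1 = uniform background, 2 = road facility, are assumptions for illustration):

```python
import numpy as np

def partition_luma_stats(luma, labels):
    """Per-segment brightness statistic for a semantically segmented
    background. `labels` is an integer map the same shape as `luma`.
    Returns {label: mean brightness} computed from each segment's
    brightness histogram."""
    stats = {}
    for label in np.unique(labels):
        region = luma[labels == label]
        hist, edges = np.histogram(region, bins=256, range=(0, 256))
        centers = (edges[:-1] + edges[1:]) / 2.0   # bin midpoints
        stats[int(label)] = float((hist * centers).sum() / hist.sum())
    return stats

luma = np.array([[10, 10], [200, 200]])
labels = np.array([[0, 0], [1, 1]])       # two semantic segments
stats = partition_luma_stats(luma, labels)
```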
Optionally, in the embodiment of the present invention, the illumination intensity corresponding to the target image may be estimated by an automatic exposure (AE) algorithm; the illumination intensity indicates how strong the light in the shooting environment is. According to the illumination intensity (for example, a dark shooting environment corresponds to low illumination intensity), the brightness value curve, the saturation curve, and the like corresponding to the first image parameter may be appropriately adjusted. In the embodiment of the invention, the light source color temperature corresponding to the target image may likewise be estimated by an automatic white balance (AWB) algorithm, and depending on the color temperature of the light source, for example a cold light source or a warm light source, the hue value curve corresponding to the first image parameter and the red-green and yellow-blue components corresponding to the first face color gamut may be adjusted accordingly.
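For illustration only, an AE-driven adjustment could map the estimated illumination intensity to a saturation gain; every break point and gain value below is an assumption, not taken from the patent:

```python
def saturation_gain_for_lux(lux, low=50.0, high=500.0,
                            dark_gain=0.85, bright_gain=1.0):
    """Reduce the saturation boost in dark scenes (where noise amplifies
    chroma artifacts) and keep it neutral in bright scenes, linearly
    interpolating in between."""
    if lux <= low:
        return dark_gain
    if lux >= high:
        return bright_gain
    t = (lux - low) / (high - low)
    return dark_gain + t * (bright_gain - dark_gain)
```

The same piecewise-linear shape could be reused for the brightness curve, with different end points per the tuning of the device.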
Therefore, by combining various information included in the image characteristic information of the target image, the first image parameter and the first face color gamut can be adjusted more comprehensively and reasonably, and the second image parameter and the second face color gamut which are more in line with the actual situation are obtained, so that the face color in the finally obtained image not only meets the aesthetic requirements of users to the maximum extent, but also is coordinated with the picture background.
Step S504: and adjusting the image parameter of the target image based on the second image parameter.
Specifically, the processor 110 adjusts the image parameters of the target image based on the second image parameter, that is, with the human face skin color taking priority. This may include, for example, adjusting the brightness parameter, saturation parameter, hue parameter, and the like of the target image in the HSV color space.
Optionally, in the embodiment of the present invention, before the image parameters of the target image are adjusted, the target image may be subjected to overall color balancing based on the second image parameter, or based on the second image parameter combined with the second face color gamut. Color balancing can correct color cast, over-saturation, or insufficient saturation in the image, and can modulate the required color according to the user's preference and production requirements, so as to better finish the picture effect. As shown in fig. 9, fig. 9 is a schematic view of a color balancing process according to an embodiment of the present invention. As shown in fig. 9, the input target image may be an image in 64 × 64 RGB format. After the RGB values of the target image are counted and processed by the modules shown in fig. 9, the face skin color after color balancing is matched with the second image parameter and the second face color gamut, and a G/R (green/red) gain and a B/R (blue/red) gain are obtained. For example, the face skin color may fall completely within the second image parameter or the second face color gamut, or may be brought as close as possible to them. As shown in fig. 9, the color correction matrix (CCM) module passes the 64 × 64 RGB target image, after the AWB gain, to the Gamma gain for subsequent processing. The Gamma gain in this color balancing process mainly redistributes brightness values according to a power function, where Gamma is the exponent. Generally, when Gamma is greater than 1, the signal is darkened and some weak signals may be erased; when Gamma is less than 1, weak signals are instead lifted so that they can be displayed.
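The Gamma-gain behaviour described above is simply the power function; a short sketch shows why an exponent above 1 crushes weak signals while one below 1 preserves them:

```python
def gamma_gain(luma_norm, gamma):
    """Apply a gamma power curve to a brightness value normalised to
    [0, 1]. gamma > 1 darkens the signal (near-black detail may be
    crushed to zero); gamma < 1 lifts weak signals so they stay
    visible."""
    return luma_norm ** gamma

weak = 0.1                               # a weak (dark) signal
assert gamma_gain(weak, 2.2) < weak      # darkened: detail is crushed
assert gamma_gain(weak, 0.45) > weak     # brightened: detail preserved
```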
The Hue-Saturation-Intensity (HSI) color space is similar to the HSV color space; compared with the HSV color space, many image processing and computer vision algorithms are more convenient to apply in the HSI color space, which can greatly simplify the workload of image analysis and processing. The HSV color space, the HSI color space, and the RGB color model are merely different representations of the same physical quantity and can be converted into one another. In some embodiments, the color balancing process may include more, fewer, or different steps and modules than those shown in fig. 9, which is not specifically limited in the embodiments of the present invention. The processing modules shown in fig. 9 can balance the overall color of scenes in which automatic white balance is difficult, for example, scenes with a pure-color background such as red, pure-color clothes such as yellow, or no obvious white reference point.
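The interconvertibility of these representations can be demonstrated with Python's standard `colorsys` module, which round-trips a color between RGB and HSV without loss (the sample RGB value is arbitrary):

```python
import colorsys

# colorsys works on floats in [0, 1]; pick a typical skin-tone RGB value.
r, g, b = 200 / 255, 160 / 255, 120 / 255
h, s, v = colorsys.rgb_to_hsv(r, g, b)
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)

# The round trip recovers the original color to floating-point precision.
assert max(abs(r - r2), abs(g - g2), abs(b - b2)) < 1e-7
```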
Alternatively, the processor 110 may adjust the brightness parameter of the target image in the HSV color space, for example, adjust the brightness contrast curve of the target image based on the second brightness parameter of the second image parameters. As shown in fig. 10, fig. 10 is a schematic diagram of adjusting brightness and contrast according to an embodiment of the present invention. The horizontal axis and the vertical axis in fig. 10 each represent a brightness value from 0 to 255, and the straight line in fig. 10 may represent the unprocessed linear brightness contrast of the target image, that is, Gamma equal to 1. The 9 circles on polyline 1 in fig. 10 may represent 9 brightness sampling points in the second brightness parameter calculated by the algorithm evaluation, which are connected by polyline 1. The number, position, and the like of the brightness sampling points are not specifically limited in the embodiments of the present invention. It can be understood that, because the second brightness parameter in the embodiment of the present invention is a brightness parameter of a face region, and the brightness values of a face region are generally concentrated (neither too bright nor too dark), the brightness range covered by polyline 1 is likewise limited to about 100-200 as shown in fig. 10. Curve 1 (dashed line) in fig. 10 may represent a brightness contrast curve obtained by data fitting based on polyline 1 and the 9 brightness sampling points on it; clearly, curve 1 covers the full brightness range from 0 to 255.
Therefore, if curve 1 is taken as the ideal brightness contrast curve of the target image, that is, if the brightness contrast curve of the current target image is adjusted to curve 1, the brightness requirement of the face region in the target image can be largely met, and the overall brightness of the target image can to some extent be coordinated with the brightness of the face region. However, because high-brightness regions are easily over-exposed, and human vision is more sensitive to dark light than to bright light, curve 1 should be pressed down appropriately, and the brightness contrast curve of the target image should be adjusted to the pressed-down curve 1 so as to adjust the brightness contrast of the target image more reasonably. Alternatively, one or more pressing points may be selected, and curve 1 adjusted by means of these pressing points. For example, as shown in fig. 10, two pressing points, namely pressing point 1 and pressing point 2, may be selected near the brightness value 50, and pressing point 3 may be selected near the brightness value 220; adjusting curve 1 through pressing points 1, 2, and 3 yields curve 3 (solid line). The number and position of the selected pressing points are not specifically limited in the embodiment of the invention. As shown in fig. 10, curve 3 may represent the adjusted brightness contrast curve of the target image, which not only meets the brightness requirement of the user for the face region and coordinates the brightness of the whole picture, but also better matches the different sensitivities of human vision to different brightness levels and prevents overexposure of highlight regions.
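A rough sketch of this curve construction: fit a smooth curve through brightness sampling points concentrated in the face range, with added pressing points that hold the highlights down. The sample positions, the pressing point, and the cubic least-squares fit are all illustrative assumptions, not the patent's exact method:

```python
import numpy as np

def fit_brightness_curve(samples, press_points, degree=3):
    """samples / press_points: (input, output) brightness pairs in
    0..255. Returns a 256-entry lookup curve obtained by least-squares
    polynomial fitting through all of them, clipped to the valid
    range."""
    pts = np.array(list(samples) + list(press_points), dtype=float)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)
    curve = np.polyval(coeffs, np.arange(256, dtype=float))
    return np.clip(curve, 0, 255)

# Sampling points lift the mid-tones where face brightness concentrates;
# a pressing point near 220 keeps the highlights close to identity.
samples = [(0, 0), (100, 120), (150, 180), (200, 225), (255, 255)]
press = [(220, 228)]
curve = fit_brightness_curve(samples, press)
```

In practice a monotone interpolant would be preferable to a raw polynomial fit, since a non-monotone brightness curve inverts tones.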
Alternatively, after the brightness contrast curve of the target image is adjusted through the brightness sampling points, the pressing points, and the like, the hue (also called color tone) and the saturation of the target image may change to some extent, and the degree of change is related to the adjustment amplitude of the brightness contrast curve. At this point, the hue and saturation of the target image can be corrected to a certain extent according to the second image parameter, the actual requirements, and the like, so that the whole picture of the target image is more harmonious and of higher quality.
Optionally, in the embodiment of the present invention, after the image parameters of the target image are adjusted based on the second image parameter, the color gamut of the current face region in the LAB color space (that is, the third face color gamut in this application) may also be obtained through calculation, and the current third face color gamut may be adjusted according to the second face color gamut so that the third face color gamut approaches and matches the second face color gamut as closely as possible. In this way, the human face skin color in the final image can meet aesthetic requirements to the maximum extent while matching the background picture. For example, as shown in fig. 11, fig. 11 is a schematic diagram of color gamut adjustment provided in an embodiment of the present invention. The area enclosed by the boxes and line segments in fig. 11 may represent the second face color gamut, and the area covered by the circles in fig. 11 may represent the third face color gamut. The processor 110 may perform an overall translation of the third face color gamut with its center point as the reference point. It can be understood that, because actual requirements differ and the background environment in the target image differs, it is not always necessary to translate the third face color gamut fully into coincidence with the second face color gamut. Instead, the actual requirements and the background environment in the target image are considered, the second face color gamut is taken as a reference, an overall translation amount abShift of the third face color gamut is determined, and the third face color gamut is translated appropriately, so that the face region and the overall picture of the target image are better coordinated and of higher quality.
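The overall translation of the third face color gamut toward the second could be sketched in the LAB a/b plane as follows; the centroid as reference point and the partial-translation factor 0.6 are assumptions reflecting the idea that full coincidence is not always required:

```python
import numpy as np

def shift_face_gamut(face_ab, target_center, strength=0.6):
    """Rigidly translate the current face color gamut (points in the
    LAB a/b plane) part of the way toward the centre of the target
    (second) face gamut. abShift is taken as a fraction of the offset
    between the two centres."""
    face_ab = np.asarray(face_ab, dtype=float)
    center = face_ab.mean(axis=0)                         # reference point
    ab_shift = strength * (np.asarray(target_center, float) - center)
    return face_ab + ab_shift                             # whole-gamut move

# Current gamut centred at (12, 12); target centre at (22, 2).
shifted = shift_face_gamut([[10, 10], [14, 14]], [22, 2])
```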
Alternatively, as shown in fig. 6, the processor 110 may transmit the processed target image to the display screen 194 in the shooting device 100 and control the display screen 194 to display it, so that the processed target image is the image directly output by the shooting device 100 in the current shooting. Clearly, the image color processing method of the embodiment of the invention can improve the quality of the real-time, directly output image during shooting: it not only beautifies the face region but also coordinates the face region with the whole picture.
As shown in fig. 12, fig. 12 is a flowchart illustrating the overall steps of image color processing according to an embodiment of the present invention. First, through step 1, according to the face feature information in the collected original image data, that is, the face feature information of the face region in the target image, a first image parameter and a first face color gamut matching the face feature information are determined from a preset image parameter set; that is, the standard face skin color that commonly exists in, or is preferred by, the crowd corresponding to the face feature information is determined. Second, through step 2, the determined first image parameter and first face color gamut are adjusted according to the image feature information of the target image (for example, the light source color temperature, the illumination intensity, the brightness contrast of the face region, and the partition brightness statistic values of the semantically segmented image) to obtain a second image parameter and a second face color gamut; that is, according to the actual shooting situation, the original first image parameter and first face color gamut are dynamically adjusted to match the specific face color under that situation. Then, through steps 3-5, the target image is adjusted based on the second image parameter and the second face color gamut (for example, by adjusting color balance, contrast, saturation, and the like). Finally, after the target image is adjusted, the current face color gamut of the face region, that is, the third face color gamut, is adjusted through step 6 so that it reasonably approaches the second face color gamut within a certain range.
Therefore, for the original image data collected in real time during shooting by the camera, the present application beautifies the human face skin color and coordinates the whole picture, thereby obtaining a high-quality directly output image that accords with the user's aesthetics. It should be noted that the beautifying of the human face skin color in the embodiment of the present invention is not limited to superficial improvements to the brightness, color, and the like of the face; it can also improve situations where the face appears flat, lacking dimension, or dull because of the shooting environment.
Referring to fig. 13, fig. 13 is a schematic structural diagram of an image color processing apparatus according to an embodiment of the present invention, the image color processing apparatus 20 may include a first obtaining unit 2001, a first image parameter determining unit 2002, a second image parameter determining unit 2016 and a first adjusting unit 2019, wherein,
a first acquiring unit 2001, configured to acquire a target image, and determine a target face region and a background region in the target image;
a first image parameter determining unit 2002, configured to determine, from a preset image parameter set, a first image parameter matching face feature information in the target face region based on the face feature information in the target face region, where the face feature information includes one or more of gender information, age information, and genre information corresponding to a face;
a second image parameter determining unit 2016, configured to determine a second image parameter according to the image feature information of the target image and based on the first image parameter;
a first adjusting unit 2019, configured to adjust an image parameter of the target image based on the second image parameter.
In one possible implementation, the apparatus 20 further includes:
a second obtaining unit 2004, configured to obtain an image set, where the image set includes a plurality of face images, and the face images are obtained by shooting in a preset shooting environment;
a first determining unit 2005, configured to determine a plurality of kinds of facial feature information corresponding to the plurality of facial images in the image set;
a second determining unit 2006, configured to determine, based on a hue parameter, a saturation parameter, and a brightness parameter of each of the plurality of face images in an HSV color space, image parameters to which the plurality of kinds of face feature information are respectively matched;
a first generating unit 2007, configured to generate the preset image parameter set according to the image parameters respectively matched with the multiple kinds of face feature information.
In one possible implementation, the apparatus 20 further includes:
the third determining unit 2008 is configured to determine, based on a red-green component and a yellow-blue component of each of the plurality of face images in an LAB color space, a face color gamut respectively matched with the plurality of kinds of face feature information;
the second generating unit 2009 is configured to generate a preset human face color gamut set according to the human face color gamuts respectively matched with the multiple kinds of human face feature information.
In one possible implementation, the apparatus 20 further includes:
a first face gamut determining unit 2003, configured to determine, based on the face feature information in the target face region, a first face gamut matching the face feature information in the target face region from the preset face gamut set;
a second face color gamut determining unit 2017, configured to determine a second face color gamut based on the first face color gamut according to the image feature information of the target image.
In one possible implementation, the apparatus 20 further includes:
a third face color gamut determining unit 2020, configured to determine a third face color gamut of the target face region based on a red-green component and a yellow-blue component of the target face region in the current LAB color space;
a second adjusting unit 2021, configured to adjust the third face color gamut based on the second face color gamut.
In one possible implementation, the apparatus 20 further includes:
a fourth determining unit 2010, configured to determine, based on the luminance histogram of the target face region and a preset luminance threshold, a first luminance statistic value in the luminance histogram of the target face region that is higher than the luminance threshold, and a second luminance statistic value in the luminance histogram of the target face region that is lower than the luminance threshold;
a fifth determining unit 2011, configured to determine the luminance contrast of the target face area according to a ratio of the first luminance statistic to the second luminance statistic.
In one possible implementation, the apparatus 20 further includes:
a sixth determining unit 2012, configured to determine, based on the luminance histogram of the target image and the preset luminance threshold, a third luminance statistic value higher than the luminance threshold in the luminance histogram of the target image;
a seventh determining unit 2013, configured to determine the illumination ratio of the target image according to a ratio of the third luminance statistic to the first luminance statistic.
In one possible implementation, the apparatus 20 further includes:
a semantic segmentation unit 2014, configured to perform semantic segmentation on the background region to determine at least one semantic segmentation region;
an eighth determining unit 2015, configured to determine a partition luminance statistic of the background region according to the luminance histogram of each of the at least one semantic segmentation region.
In one possible implementation, the image feature information of the target image includes: one or more of the brightness contrast of the target face area, the illumination ratio of the target image, the partition brightness statistic value of the background area, the illumination intensity corresponding to the target image and the light source color temperature corresponding to the target image; the first image parameters comprise a first hue parameter, a first saturation parameter and a first brightness parameter which are matched with the face feature information in the target face region; the second image parameter includes a second hue parameter, a second saturation parameter, and a second brightness parameter obtained by adjusting one or more of the first hue parameter, the first saturation parameter, and the first brightness parameter based on image feature information of the target image.
In a possible implementation manner, the first adjusting unit 2019 is specifically configured to:
and adjusting one or more of a hue parameter, a saturation parameter and a brightness parameter of the target image in the HSV color space based on the second image parameter.
In one possible implementation, the apparatus 20 further includes:
and the third adjusting unit is used for adjusting the color balance of the target image based on the second image parameter.
It should be noted that, for the functions of the relevant units in the image color processing apparatus 20 described in the embodiment of the present invention, reference may be made to the relevant descriptions of the relevant method embodiments described in fig. 1 to fig. 12, and no further description is given here.
Each of the units in fig. 13 may be implemented in software, hardware, or a combination thereof. A unit implemented in hardware may include circuits such as a processor circuit, an arithmetic circuit, or an analog circuit. A unit implemented in software may comprise program instructions, regarded as a software product, stored in a memory and executable by a processor to perform the relevant functions; see in particular the previous description.
Based on the description of the method embodiment and the device embodiment, the embodiment of the invention also provides shooting equipment. Referring to fig. 14, fig. 14 is a schematic structural diagram of a shooting device according to an embodiment of the present invention, where the shooting device at least includes a processor 301, a shooting module 302, a display 303, and a computer-readable storage medium 304. The shooting module 302 may be configured to capture a target image, and the display 303 may be configured to display an image processed by the image color processing method according to the embodiment of the present invention. The processor 301, the photographing module 302, the display 303, and the computer-readable storage medium 304 in the photographing apparatus may be connected by a bus or other means.
A computer-readable storage medium 304 may be stored in the memory of the photographing apparatus, the computer-readable storage medium 304 being configured to store a computer program comprising program instructions, the processor 301 being configured to execute the program instructions stored by the computer-readable storage medium 304. The processor 301 (or CPU) is a computing core and a control core of the shooting device, and is adapted to implement one or more instructions, and specifically, adapted to load and execute one or more instructions so as to implement a corresponding method flow or a corresponding function; in one embodiment, the processor 301 according to an embodiment of the present invention may be configured to perform a series of processes for image color processing, including: acquiring a target image, and determining a target face area and a background area in the target image; determining a first image parameter matched with the face feature information in the target face region from a preset image parameter set based on the face feature information in the target face region, wherein the face feature information comprises one or more of gender information, age information and race information corresponding to a face; determining a second image parameter according to the image characteristic information of the target image and based on the first image parameter; based on the second image parameter, adjusting an image parameter of the target image, and so on.
An embodiment of the present invention further provides a computer-readable storage medium (Memory), which is a Memory device in a photographing device and is used to store programs and data. It is understood that the computer readable storage medium herein may include a built-in storage medium in the photographing apparatus, and may also include an extended storage medium supported by the photographing apparatus. The computer-readable storage medium provides a storage space storing an operating system of the terminal. Also, one or more instructions, which may be one or more computer programs (including program code), are stored in the memory space and are adapted to be loaded and executed by the processor 301. It should be noted that the computer-readable storage medium may be a high-speed RAM memory, or may be a non-volatile memory (non-volatile memory), such as at least one disk memory; and optionally at least one computer readable storage medium remotely located from the aforementioned processor.
According to the embodiment of the invention, through the face characteristic information (for example, including gender information, age information, race information and the like corresponding to a face) of a target face region in a target image, a first image parameter (for example, an image parameter which can be a face skin color generally existing or liked in a crowd of the gender, the age and the race) matched with the target face region is determined from a preset image parameter set, a second image parameter of the face region in the target image is determined by combining the first image parameter and the image characteristic information (for example, information which can include a light source color temperature, an illumination intensity, a background region, an illumination ratio of the target face region and the like corresponding to the target image) of the target image, and the image parameter of the target image is adjusted based on the second image parameter. Therefore, the image parameters of the face area are adjusted according to the differences of the sex, the age and the race of the shooting object and the differences of the actual shooting environment, and the image parameters of the whole target image are adjusted by taking the image parameters of the face area as priority. The human face is beautified, the whole picture is coordinated with the human face, and the quality of a target image is improved. When the embodiment of the invention is applied to a specific application scene, the embodiment of the invention can be used for processing the original image data acquired by a camera or a mobile phone in real time in various daily shooting, beautifying the face, coordinating the whole picture, improving the quality of the real-time image directly output by shooting equipment such as the camera or the mobile phone and the like, and meeting the aesthetic requirement of a user. 
Optionally, the target object targeted in the present application may not only be the target face area, that is, the photographic object may not only be a human, but also be other photographic objects such as an animal, a plant, a building, or a food.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may store a program, and when the program is executed, the program includes some or all of the steps described in any of the embodiments of the image color processing method.
Embodiments of the present invention also provide a computer program, which includes instructions that, when executed by a computer, enable the computer to perform some or all of the steps of any one of the image color processing methods.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred and that the acts and modules referred to are not necessarily required in this application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the essence of the technical solution of the present application, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and may specifically be a processor in the computer device) to execute all or part of the steps of the methods of the embodiments of the present application. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (25)

1. An image color processing method, comprising:
acquiring a target image, and determining a target face area and a background area in the target image;
determining a first image parameter matched with the face feature information in the target face region from a preset image parameter set based on the face feature information in the target face region, wherein the face feature information comprises one or more of gender information, age information and race information corresponding to a face;
determining a second image parameter according to the image characteristic information of the target image and based on the first image parameter;
and adjusting the image parameter of the target image based on the second image parameter.
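Read as an algorithm, the lookup step of claim 1 amounts to matching detected face feature information against a preset table of image parameters. The sketch below is a minimal illustration of that step; the dictionary keys, the (hue, saturation, brightness) tuple layout, and all numeric values are assumptions for illustration, not data from this application.

```python
def match_first_parameter(face_features, preset_set, default=None):
    """Return the preset image parameter matching the detected face
    feature information (gender/age/race), or a default if no entry fits."""
    key = tuple(sorted(face_features.items()))
    return preset_set.get(key, default)

# Hypothetical preset image parameter set: (hue, saturation, brightness)
# triples normalised to [0, 1].
preset = {
    (("age", "adult"), ("gender", "female")): (0.07, 0.35, 0.80),
    (("age", "adult"), ("gender", "male")): (0.06, 0.40, 0.70),
}

first_param = match_first_parameter({"gender": "female", "age": "adult"}, preset)
```

The second image parameter of claim 1 would then be derived from `first_param` using the image characteristic information (contrast, illumination ratio, and so on) described in the later claims.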
2. The method of claim 1, further comprising:
acquiring an image set, wherein the image set comprises a plurality of face images, and the face images are obtained by shooting in a preset shooting environment;
determining a plurality of kinds of face feature information corresponding to the plurality of face images in the image set;
determining image parameters respectively matched with the various human face characteristic information based on hue parameters, saturation parameters and brightness parameters of each human face image in the multiple human face images in an HSV color space;
and generating the preset image parameter set according to the image parameters respectively matched with the various human face feature information.
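One way to realise the calibration flow of claim 2 is to group reference face images by their feature information and average the HSV components within each group. The sample data, the grouping key, and the use of a plain mean are illustrative assumptions; the claim only requires that the preset set be derived from hue, saturation, and brightness in HSV space.

```python
import colorsys

# Hypothetical calibration data: face feature information paired with an
# average skin-tone RGB value measured under the preset shooting environment.
reference_faces = [
    ({"gender": "female"}, (230, 185, 160)),
    ({"gender": "female"}, (224, 179, 154)),
    ({"gender": "male"}, (200, 150, 125)),
]

def build_preset_parameter_set(faces):
    """Group reference faces by feature info and average their HSV values."""
    groups = {}
    for features, (r, g, b) in faces:
        key = tuple(sorted(features.items()))
        hsv = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        groups.setdefault(key, []).append(hsv)
    return {
        key: tuple(sum(s[i] for s in samples) / len(samples) for i in range(3))
        for key, samples in groups.items()
    }

preset_set = build_preset_parameter_set(reference_faces)
```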
3. The method of claim 2, further comprising:
determining face color gamuts respectively matched with the various kinds of face feature information based on red-green components and yellow-blue components of each face image in the plurality of face images in an LAB color space;
and generating a preset human face color gamut set according to the human face color gamuts respectively matched with the various human face characteristic information.
4. The method of claim 3, further comprising:
determining a first face color gamut matched with the face feature information in the target face region from the preset face color gamut set based on the face feature information in the target face region;
and determining a second face color gamut according to the image characteristic information of the target image and based on the first face color gamut.
5. The method of claim 4, wherein after the adjusting of the image parameter of the target image based on the second image parameter, the method further comprises:
determining a third face color gamut of the target face region based on a red-green component and a yellow-blue component of the target face region in a current LAB color space;
and adjusting the third face color gamut based on the second face color gamut.
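The correction of claim 5 can be approximated by pulling the face region's measured a/b components (the third face color gamut) into the target region (the second face color gamut). Representing a color gamut as an axis-aligned box in the LAB a/b plane, and the numeric bounds below, are simplifying assumptions for illustration.

```python
def clamp_ab_to_gamut(a, b, gamut):
    """Clamp LAB red-green (a) and yellow-blue (b) components into a
    box-shaped target gamut given as ((a_min, a_max), (b_min, b_max))."""
    (a_lo, a_hi), (b_lo, b_hi) = gamut
    return (min(max(a, a_lo), a_hi), min(max(b, b_lo), b_hi))

# Hypothetical second face color gamut for a warm skin tone.
second_gamut = ((8.0, 20.0), (12.0, 28.0))
```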
6. The method according to claim 1, wherein the image feature information of the target image comprises a luminance contrast of the target face region; the method further comprises the following steps:
determining a first brightness statistic value higher than the brightness threshold value in the brightness histogram of the target face area and a second brightness statistic value lower than the brightness threshold value in the brightness histogram of the target face area based on the brightness histogram of the target face area and a preset brightness threshold value;
and determining the brightness contrast of the target face area according to the ratio of the first brightness statistic value to the second brightness statistic value.
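The contrast statistic of claim 6 can be sketched directly from a luminance histogram: count pixels above and below the threshold and take the ratio. The threshold value of 128 and the toy histogram below are assumptions; the claim does not fix a particular threshold.

```python
def brightness_contrast(histogram, threshold):
    """Brightness contrast of a face region, per claim 6: the ratio of
    the count of pixels brighter than the threshold (first statistic) to
    the count of pixels darker than it (second statistic).
    `histogram[i]` counts pixels with luminance level i (0..255)."""
    above = sum(n for level, n in enumerate(histogram) if level > threshold)
    below = sum(n for level, n in enumerate(histogram) if level < threshold)
    return above / below if below else float("inf")

# Toy face histogram: 300 bright pixels at level 200, 100 dark pixels at 50.
face_hist = [0] * 256
face_hist[200] = 300
face_hist[50] = 100
contrast = brightness_contrast(face_hist, threshold=128)
```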
7. The method of claim 6, wherein the image characteristic information of the target image comprises an illumination ratio of the target image; the method further comprises:
determining, based on a luminance histogram of the target image and the preset luminance threshold, a third luminance statistic value higher than the luminance threshold in the luminance histogram of the target image;
and determining the illumination ratio of the target image according to the ratio of the third luminance statistic value to the first luminance statistic value.
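The illumination ratio of claim 7 relates the whole image's bright-pixel count (third statistic) to the face region's bright-pixel count (first statistic). The sketch below is a minimal illustration; the threshold and histogram values are assumptions.

```python
def statistic_above(histogram, threshold):
    """Sum of histogram counts for luminance levels above the threshold."""
    return sum(n for level, n in enumerate(histogram) if level > threshold)

def illumination_ratio(image_hist, face_hist, threshold):
    """Claim 7 sketch: third luminance statistic (whole image) divided by
    the first luminance statistic (face region)."""
    face_bright = statistic_above(face_hist, threshold)
    if not face_bright:
        return float("inf")
    return statistic_above(image_hist, threshold) / face_bright

image_hist = [0] * 256
image_hist[210] = 900      # bright pixels in the whole image
face_hist = [0] * 256
face_hist[210] = 300       # bright pixels inside the face region
ratio = illumination_ratio(image_hist, face_hist, threshold=128)
```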
8. The method according to claim 1, wherein the image characteristic information of the target image comprises a partition luminance statistic of the background area; the method further comprises the following steps:
performing semantic segmentation on the background area to determine at least one semantic segmentation area;
and determining a partition brightness statistic value of the background area according to the brightness histogram of each semantic segmentation area in the at least one semantic segmentation area.
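The partition statistic of claim 8 can be sketched as a per-label aggregate over the semantic segmentation of the background. Using the mean as the statistic, and the label names "sky" and "building", are illustrative assumptions; the claim only requires a statistic per segmented region's luminance histogram.

```python
def partition_luminance_stats(luma, labels):
    """Claim 8 sketch: mean luminance per semantic-segmentation label.
    `luma` and `labels` are same-shaped 2-D lists of luminance values
    and region labels."""
    sums, counts = {}, {}
    for luma_row, label_row in zip(luma, labels):
        for y, lab in zip(luma_row, label_row):
            sums[lab] = sums.get(lab, 0) + y
            counts[lab] = counts.get(lab, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}

# Toy 2x2 background: a bright "sky" row over a dark "building" row.
luma = [[200, 210],
        [40, 50]]
labels = [["sky", "sky"],
          ["building", "building"]]
stats = partition_luminance_stats(luma, labels)
```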
9. The method according to any one of claims 1 to 8, wherein the image characteristic information of the target image comprises: one or more of the brightness contrast of the target face area, the illumination ratio of the target image, the partition brightness statistic value of the background area, the illumination intensity corresponding to the target image and the light source color temperature corresponding to the target image;
the first image parameters comprise a first hue parameter, a first saturation parameter and a first brightness parameter which are matched with the face feature information in the target face region;
the second image parameter includes a second hue parameter, a second saturation parameter, and a second brightness parameter obtained by adjusting one or more of the first hue parameter, the first saturation parameter, and the first brightness parameter based on image feature information of the target image.
10. The method according to any one of claims 1-9, wherein the adjusting the image parameter of the target image based on the second image parameter comprises:
and adjusting one or more of a hue parameter, a saturation parameter and a brightness parameter of the target image in the HSV color space based on the second image parameter.
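The per-pixel HSV adjustment of claim 10 can be sketched with the standard library's `colorsys` conversions. The hue-shift/gain parametrisation below is an assumption; the claim only states that one or more of the hue, saturation, and brightness parameters are adjusted in HSV space.

```python
import colorsys

def adjust_pixel_hsv(rgb, hue_shift=0.0, sat_gain=1.0, val_gain=1.0):
    """Claim 10 sketch: adjust one or more of hue, saturation, and value
    of a single 8-bit RGB pixel in HSV space, then convert back."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + hue_shift) % 1.0      # hue wraps around the color circle
    s = min(1.0, s * sat_gain)     # clamp saturation to the valid range
    v = min(1.0, v * val_gain)     # clamp value to the valid range
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

# Brightening a neutral gray pixel leaves its hue and saturation untouched.
brightened = adjust_pixel_hsv((100, 100, 100), val_gain=1.5)
```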
11. The method according to any of claims 1-10, wherein prior to said adjusting image parameters of the target image based on the second image parameters, the method further comprises:
adjusting a color balance of the target image based on the second image parameter.
12. An image color processing apparatus, comprising:
a first acquisition unit, configured to acquire a target image and determine a target face area and a background area in the target image;
a first image parameter determining unit, configured to determine, from a preset image parameter set and based on face feature information in the target face region, a first image parameter matching the face feature information in the target face region, wherein the face feature information comprises one or more of gender information, age information, and race information corresponding to a face;
a second image parameter determining unit, configured to determine a second image parameter according to image characteristic information of the target image and based on the first image parameter;
and a first adjusting unit, configured to adjust an image parameter of the target image based on the second image parameter.
13. The apparatus of claim 12, further comprising:
a second acquisition unit, configured to acquire an image set, wherein the image set comprises a plurality of face images obtained by shooting in a preset shooting environment;
a first determining unit, configured to determine a plurality of kinds of face feature information corresponding to the plurality of face images in the image set;
a second determining unit, configured to determine, based on a hue parameter, a saturation parameter, and a brightness parameter of each of the plurality of face images in an HSV color space, image parameters respectively matching the plurality of kinds of face feature information;
and a first generating unit, configured to generate the preset image parameter set according to the image parameters respectively matching the plurality of kinds of face feature information.
14. The apparatus of claim 13, further comprising:
a third determining unit, configured to determine, based on a red-green component and a yellow-blue component of each of the plurality of face images in an LAB color space, face color gamuts respectively matching the plurality of kinds of face feature information;
and a second generating unit, configured to generate a preset face color gamut set according to the face color gamuts respectively matching the plurality of kinds of face feature information.
15. The apparatus of claim 14, further comprising:
a first face color gamut determining unit, configured to determine, based on the face feature information in the target face region, a first face color gamut matching the face feature information in the target face region from the preset face color gamut set;
and the second face color gamut determining unit is used for determining a second face color gamut according to the image characteristic information of the target image and based on the first face color gamut.
16. The apparatus of claim 15, further comprising:
a third face color gamut determining unit, configured to determine a third face color gamut of the target face region based on a red-green component and a yellow-blue component of the target face region in a current LAB color space;
and a second adjusting unit, configured to adjust the third face color gamut based on the second face color gamut.
17. The apparatus according to claim 12, wherein the image characteristic information of the target image comprises a brightness contrast of the target face region; the apparatus further comprises:
a fourth determining unit, configured to determine, based on a brightness histogram of the target face region and a preset brightness threshold, a first brightness statistic value higher than the brightness threshold in the brightness histogram and a second brightness statistic value lower than the brightness threshold in the brightness histogram;
and a fifth determining unit, configured to determine the brightness contrast of the target face region according to the ratio of the first brightness statistic value to the second brightness statistic value.
18. The apparatus according to claim 17, wherein the image characteristic information of the target image comprises an illumination ratio of the target image; the apparatus further comprises:
a sixth determining unit, configured to determine, based on a luminance histogram of the target image and the preset luminance threshold, a third luminance statistic value higher than the luminance threshold in the luminance histogram of the target image;
and a seventh determining unit, configured to determine the illumination ratio of the target image according to the ratio of the third luminance statistic value to the first luminance statistic value.
19. The apparatus according to claim 12, wherein the image characteristic information of the target image comprises a partition luminance statistic of the background area; the apparatus further comprises:
a semantic segmentation unit, configured to perform semantic segmentation on the background area and determine at least one semantic segmentation area;
and an eighth determining unit, configured to determine the partition luminance statistic of the background area according to a luminance histogram of each of the at least one semantic segmentation area.
20. The apparatus according to any one of claims 12 to 19, wherein the image characteristic information of the target image comprises: one or more of the brightness contrast of the target face area, the illumination ratio of the target image, the partition luminance statistic of the background area, the illumination intensity corresponding to the target image, and the light source color temperature corresponding to the target image;
the first image parameters comprise a first hue parameter, a first saturation parameter and a first brightness parameter which are matched with the face feature information in the target face region;
the second image parameter includes a second hue parameter, a second saturation parameter, and a second brightness parameter obtained by adjusting one or more of the first hue parameter, the first saturation parameter, and the first brightness parameter based on image feature information of the target image.
21. The apparatus according to any one of claims 12 to 20, wherein the first adjusting unit is specifically configured to:
and adjusting one or more of a hue parameter, a saturation parameter and a brightness parameter of the target image in the HSV color space based on the second image parameter.
22. The apparatus of any one of claims 12-21, further comprising:
and a third adjusting unit, configured to adjust a color balance of the target image based on the second image parameter.
23. A photographing apparatus, characterized by comprising: a processor and a photographing module coupled to the processor;
the shooting module is used for collecting a target image;
the processor is configured to:
acquiring a target image, and determining a target face area and a background area in the target image;
determining a first image parameter matched with the face feature information in the target face region from a preset image parameter set based on the face feature information in the target face region, wherein the face feature information comprises one or more of gender information, age information and race information corresponding to a face;
determining a second image parameter according to the image characteristic information of the target image and based on the first image parameter;
and adjusting the image parameter of the target image based on the second image parameter.
24. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 11.
25. A computer program, characterized in that the computer program comprises instructions which, when executed by a computer, cause the computer to carry out the method according to any one of claims 1-11.
CN201911207388.1A 2019-11-29 2019-11-29 Image color processing method and device and related equipment Withdrawn CN112887582A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911207388.1A CN112887582A (en) 2019-11-29 2019-11-29 Image color processing method and device and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911207388.1A CN112887582A (en) 2019-11-29 2019-11-29 Image color processing method and device and related equipment

Publications (1)

Publication Number Publication Date
CN112887582A true CN112887582A (en) 2021-06-01

Family

ID=76039227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911207388.1A Withdrawn CN112887582A (en) 2019-11-29 2019-11-29 Image color processing method and device and related equipment

Country Status (1)

Country Link
CN (1) CN112887582A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591630A * 2021-07-16 2021-11-02 中国图片社有限责任公司 Certificate photo automatic processing method, system, terminal equipment and storage medium
CN113379650A * 2021-07-22 2021-09-10 浙江大华技术股份有限公司 Face image exposure method and device, electronic equipment and storage medium
WO2023010796A1 * 2021-08-03 2023-02-09 展讯通信(上海)有限公司 Image processing method and related apparatus
CN113676715A * 2021-08-23 2021-11-19 展讯半导体(南京)有限公司 Image processing method and device
CN114023103A * 2021-11-23 2022-02-08 北京筑梦园科技有限公司 Image processing method and device and parking management system
CN114630045A * 2022-02-11 2022-06-14 珠海格力电器股份有限公司 Photographing method and device, readable storage medium and electronic equipment
CN116668838A * 2022-11-22 2023-08-29 荣耀终端有限公司 Image processing method and electronic equipment
CN116668838B * 2022-11-22 2023-12-05 荣耀终端有限公司 Image processing method and electronic equipment

Similar Documents

Publication Publication Date Title
CN112887582A (en) Image color processing method and device and related equipment
CN110609722B (en) Dark mode display interface processing method, electronic equipment and storage medium
CN109639982B (en) Image noise reduction method and device, storage medium and terminal
CN109688351B (en) Image signal processing method, device and equipment
CN113129312B (en) Image processing method, device and equipment
WO2020125410A1 (en) Image processing method and electronic device
CN111416950A (en) Video processing method and device, storage medium and electronic equipment
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
US11759143B2 (en) Skin detection method and electronic device
US20220180485A1 (en) Image Processing Method and Electronic Device
WO2021057277A1 (en) Photographing method in dark light and electronic device
US20220319077A1 (en) Image-text fusion method and apparatus, and electronic device
CN110069974B (en) Highlight image processing method and device and electronic equipment
CN111770282B (en) Image processing method and device, computer readable medium and terminal equipment
CN113810603B (en) Point light source image detection method and electronic equipment
CN113810764B (en) Video editing method and video editing device
CN114422682A (en) Photographing method, electronic device, and readable storage medium
CN111552451A (en) Display control method and device, computer readable medium and terminal equipment
CN113727085B (en) White balance processing method, electronic equipment, chip system and storage medium
CN112188094B (en) Image processing method and device, computer readable medium and terminal equipment
CN114463191B (en) Image processing method and electronic equipment
CN111885768B (en) Method, electronic device and system for adjusting light source
CN115633260A (en) Method for determining chrominance information and related electronic equipment
CN114332331A (en) Image processing method and device
RU2794062C2 (en) Image processing device and method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (Application publication date: 20210601)