CN113487700A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN113487700A
Authority
CN
China
Prior art keywords
image
target
area
brightness
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110692892.6A
Other languages
Chinese (zh)
Inventor
程林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110692892.6A
Publication of CN113487700A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/90 - Dynamic range modification of images or parts thereof
    • G06T 5/94 - Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and device and electronic equipment, and belongs to the field of image processing. The method comprises the following steps: acquiring a first image, wherein the first image comprises a background area and a target area; in a case where the first image does not satisfy the first condition, performing a target operation, the target operation including any one of: reducing the brightness value of the background area; increasing the brightness value of the target area and decreasing the brightness value of the background area; and fusing the background area with the reduced brightness value and a first area to obtain a second image, wherein the first area is the target area or the target area with the increased brightness value.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the field of image processing, and particularly relates to an image processing method and device and electronic equipment.
Background
With the development of electronic devices, a user can process an image through the image editing function of an electronic device to obtain images with different display effects.
At present, after a user captures an image with an electronic device, the user may perform subject-highlighting processing on the image in an image editing application. The electronic device generally highlights the subject by erasing the background, for example by covering the background with a mosaic or the like.
However, processing the image in the above manner leaves obvious erasure traces in the image, resulting in a poor display effect of the image.
Disclosure of Invention
An object of the embodiments of the application is to provide an image processing method, an image processing apparatus, and an electronic device, which can solve the problem of a poor image display effect when an electronic device highlights a subject.
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring a first image, wherein the first image comprises a background area and a target area; in a case where the first image does not satisfy the first condition, performing a target operation, the target operation including any one of: reducing the brightness value of the background area; increasing the brightness value of the target area and decreasing the brightness value of the background area; and fusing the background area with the reduced brightness value and a first area to obtain a second image, wherein the first area is the target area or the target area with the increased brightness value.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: the system comprises an acquisition module, an execution module and a fusion module; the device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a first image, and the first image comprises a background area and a target area; an execution module, configured to, if the first image acquired by the acquisition module does not satisfy the first condition, execute a target operation, where the target operation includes any one of: reducing the brightness value of the background area; increasing the brightness value of the target area and decreasing the brightness value of the background area; and the fusion module is used for fusing the background area with the reduced brightness value obtained by the execution module and the first area to obtain a second image, wherein the first area is the target area or the target area with the increased brightness value.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In this embodiment of the application, the image processing apparatus may first acquire a first image including a background region and a target region; then, in a case where the first image does not satisfy a first condition, the image processing apparatus may perform a target operation, the target operation including any one of: reducing the brightness value of the background region; or increasing the brightness value of the target region and reducing the brightness value of the background region; finally, the image processing apparatus may fuse the background region with the reduced brightness value and a first region to obtain a second image, where the first region is the target region or the target region with the increased brightness value. With the above arrangement, after acquiring the first image, the image processing apparatus may first determine whether the first image satisfies the first condition. Then, in the case where it is determined that the first image does not satisfy the first condition, the image processing apparatus may perform subject-highlighting processing on the first image, for example, reducing the brightness value of the background region of the first image, which is equivalent to relatively increasing the brightness of the target region, or increasing the brightness value of the target region while reducing the brightness value of the background region. Finally, the image processing apparatus may fuse the background region with the reduced brightness value and the target region, or fuse the background region with the reduced brightness value and the target region with the increased brightness value, to obtain the second image. In summary, because the brightness value of the target region is increased or the brightness value of the background region is reduced, the brightness of the target region is higher than that of the background region, and the user can visually notice the target region more easily; thus, the second image obtained after the brightness adjustment highlights the subject better.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a second schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. The objects distinguished by "first", "second", and the like are usually a class, and the number of the objects is not limited, and for example, the first object may be one or a plurality of objects. In addition, "and/or" in the specification and claims means at least one of connected objects, a character "/" generally means that a preceding and succeeding related objects are in an "or" relationship.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application, including steps 201 to 203:
step 201: the image processing apparatus acquires a first image.
The first image includes a background area and a target area.
In this embodiment of the application, the first image may be an image acquired by a camera of the image processing apparatus, may also be an image in a local cache area of the image processing apparatus, may also be an image downloaded by the image processing apparatus, and may also be any possible image, which is not limited in this embodiment of the application. For example, in the case that the first image is an image captured by a camera of the image processing apparatus, the first image may be a preview image or a shot image, which is not limited in the embodiment of the present application.
For example, the target area may include a target object, and the target object may be a person, a plant, an electrical appliance, or the like, which is not limited in this embodiment of the application.
Alternatively, the image processing apparatus may segment the first image to obtain the background region and the target region. For example, the image processing apparatus may perform saliency detection on the image to distinguish the region where the main object is located (i.e., the target region) from the background region in the image.
For example, the image processing apparatus may perform saliency detection on the image by using the open-source pooling-based salient object detection network (PoolNet), obtain a mask according to the PoolNet result, and distinguish the main object (which may be simply referred to as the subject, i.e., the target region) and the background (i.e., the background region) in the image according to the mask. Then, the image processing apparatus may blur the output mask with a large-radius filter to soften its edge, resulting in a soft edge mask.
It should be noted that, for the PoolNet detection process and the process of softening the mask edge by blurring with a large-radius filter, reference may be made to the related art; details are not described here.
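As an illustration of the edge-softening step only, the following Python sketch (not part of the original disclosure) blurs an already-computed binary saliency mask with a large-radius Gaussian filter to obtain a soft edge mask; the PoolNet inference itself is assumed to be available elsewhere, and the kernel size and function name are illustrative assumptions.

import cv2
import numpy as np

def soften_mask(binary_mask: np.ndarray, ksize: int = 51) -> np.ndarray:
    # binary_mask: saliency mask with subject pixels 1 and background pixels 0,
    # e.g. a thresholded PoolNet output. ksize is an illustrative large, odd blur size.
    mask = binary_mask.astype(np.float32)
    soft = cv2.GaussianBlur(mask, (ksize, ksize), 0)  # large-radius blur softens the subject edge
    return np.clip(soft, 0.0, 1.0)                    # soft edge mask with values in [0, 1]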
Step 202: the image processing apparatus performs a target operation in a case where the first image does not satisfy the first condition.
Wherein the target operation includes any one of: reducing the brightness value of the background area; increasing the brightness value of the target area and decreasing the brightness value of the background area.
Note that the luminance value in the embodiment of the present application may also be referred to as a pixel value.
Optionally, in an embodiment of the present application, the first condition includes any one of: the difference value between the brightness mean value of the target area and the brightness mean value of the background area is larger than or equal to a first threshold value, and the brightness mean value of the background area is smaller than or equal to a second threshold value.
In the embodiment of the present application, the image processing apparatus may determine whether the first image satisfies the first condition, thereby determining whether the highlight subject operation is required for the first image.
In a case where the first image satisfies the first condition, the image processing apparatus may determine that the first image does not need the subject-highlighting operation, and may not perform any processing on the first image.
For example, the image processing apparatus may calculate the luminance mean values of the subject and the background according to the above-mentioned soft edge mask. If the background luminance mean is much smaller than the subject luminance mean, for example smaller by 60 or more, the subject is already sufficiently prominent; or if the background is extremely dark, for example with a background luminance mean smaller than 30, the image processing apparatus may likewise not perform any processing on the first image. Otherwise, the image processing apparatus may perform the target operation, that is, perform the subject-highlighting operation on the first image.
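The check of the first condition can be sketched as follows. This is a hedged example rather than the patent's own code; the thresholds 60 and 30 are the example values mentioned above, and the mask-weighted means are one reasonable way to compute the region luminance means.

import numpy as np

def first_condition_satisfied(gray: np.ndarray, soft_mask: np.ndarray,
                              first_threshold: float = 60.0,
                              second_threshold: float = 30.0) -> bool:
    # gray: luminance matrix of the first image, values 0-255.
    # soft_mask: soft edge mask in [0, 1], close to 1 in the target (subject) region.
    subject_mean = (gray * soft_mask).sum() / max(soft_mask.sum(), 1e-6)
    background_mean = (gray * (1.0 - soft_mask)).sum() / max((1.0 - soft_mask).sum(), 1e-6)
    prominent = (subject_mean - background_mean) >= first_threshold  # subject already stands out
    very_dark_background = background_mean <= second_threshold       # background already very dark
    return prominent or very_dark_background  # True means no subject highlighting is needed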
Optionally, in this embodiment of the present application, the increasing the brightness value of the target area specifically may include the following steps: based on the original brightness value of each first pixel point in the target area, the brightness value of each first pixel point is increased in a nonlinear mode.
In one example, the above-mentioned increase in the luminance value of the target region may be achieved by increasing the luminance value of the first image.
Illustratively, the specific process of increasing the brightness value of each first pixel point in the target region may be as follows:
For example, the image processing apparatus may first normalize the original image by dividing each pixel value by 255. Then, the image processing apparatus may increase the luminance value of each first pixel point in the target region according to the following formulas (1) and (2):
x=1-img_gray(Iinput_normal) (1)
Ip=Iinput_normal/(4*(x-0.5)^2) (2)
wherein Ip represents the luminance value matrix of the image with the increased luminance values; input represents the luminance value matrix of the input original image (including the target region and the background region), with values between 0 and 255; Iinput_normal represents the luminance value matrix of the normalized original image, with values between 0 and 1; img_gray represents the image graying operation; and x represents the intermediate variable matrix.
It should be noted that the closer the value of a pixel in x is to 1, the darker that pixel is, and the closer the value is to 0, the brighter that pixel is. When a pixel in x is equal to 0 or 1, the output Ip is equal to the normalized luminance value matrix of the original image, i.e., Iinput_normal.
It should also be noted that, when x is in the interval 0 to 1, 4*(x-0.5)^2 also takes values between 0 and 1, so that the luminance values of all pixels other than the extremely dark points (i.e., pixels with a luminance value of 0) and the extremely bright points (i.e., pixels with a luminance value of 1) increase nonlinearly after being divided by 4*(x-0.5)^2. Thus, in the output Ip, the luminance values corresponding to the extremely dark points and the extremely bright points are unchanged while the other values are increased nonlinearly, which is equivalent to performing a nonlinear brightness increase on the image, i.e., the image becomes brighter.
In this way, the image processing apparatus can increase the brightness of the entire image and maintain the contrast of the entire image while keeping the brightness values corresponding to the extremely dark point and the extremely bright point of the image (including the target region) unchanged.
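As a hedged sketch of the nonlinear brightening described above: the exact formulas appear only as images in the source, so the expression below is a reconstruction from the textual description. It operates on a single-channel luminance matrix, so the graying step reduces to the normalization itself; the function name and the small epsilon are illustrative assumptions.

import numpy as np

def brighten_nonlinear(luma: np.ndarray) -> np.ndarray:
    # luma: luminance matrix of the first image, values 0-255.
    norm = luma.astype(np.float32) / 255.0   # normalized input, values in [0, 1]
    x = 1.0 - norm                           # intermediate matrix: closer to 1 means darker
    factor = 4.0 * (x - 0.5) ** 2            # in [0, 1]; equals 1 for extremely dark/bright pixels
    i_p = norm / np.maximum(factor, 1e-6)    # nonlinear increase; the extremes stay unchanged
    return i_p                               # large ratios are bounded later when H_p is clipped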
Optionally, in this embodiment of the application, the reducing the brightness value of the background area specifically may include the following steps: and based on the original brightness value of each second pixel point in the background region, the brightness value of each second pixel point is reduced in a nonlinear manner.
In one example, the above-described reduction of the luminance value of the background region may be achieved by reducing the luminance value of the first image.
For example, the specific process of reducing the brightness value of each second pixel point in the background region may be as follows:
for example, the image processing apparatus may normalize the luminance value matrix of the background region in a manner of dividing a pixel-by-pixel value by 255. Then, the image processing apparatus may calculate a luminance value of each second pixel point in the background region according to the following formulas (3) and (4):
y=img_gray(Iinput_normal) (3)
Ib=Iinput_normal*4*(y-0.5)^2 (4)
wherein Ib represents the luminance value matrix of the image after the luminance value reduction, and y represents the intermediate gray matrix.
It should be noted that the closer the value of a pixel in y is to 1, the brighter that pixel is, and the closer the value is to 0, the darker that pixel is. When a pixel in y is equal to 0 or 1, the output Ib is equal to the normalized luminance value matrix of the original image, i.e., Iinput_normal.
When y is in the interval 0 to 1, 4*(y-0.5)^2 also takes values between 0 and 1, so that, in the normalized original image, the luminance values of all pixels other than the extremely dark points (i.e., pixels with a luminance value of 0) and the extremely bright points (i.e., pixels with a luminance value of 1) decrease nonlinearly after being multiplied by 4*(y-0.5)^2. Thus, in the output Ib, the luminance values corresponding to the extremely dark points and the extremely bright points are unchanged while the other values are decreased nonlinearly, which is equivalent to performing a nonlinear brightness reduction on the image, i.e., the image becomes darker.
In this way, the image processing apparatus can reduce the brightness of the entire image and maintain the contrast of the entire image while keeping the brightness values corresponding to the extremely dark points and the extremely bright points of the image (including the background area) unchanged.
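A matching sketch for the nonlinear darkening of the background, again a reconstruction from the description under the same single-channel assumption:

import numpy as np

def darken_nonlinear(luma: np.ndarray) -> np.ndarray:
    # luma: luminance matrix of the first image, values 0-255.
    norm = luma.astype(np.float32) / 255.0   # normalized input, values in [0, 1]
    y = norm                                 # intermediate gray matrix: closer to 1 means brighter
    factor = 4.0 * (y - 0.5) ** 2            # in [0, 1]; equals 1 for extremely dark/bright pixels
    return norm * factor                     # nonlinear decrease; the extremes stay unchanged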
Optionally, the image processing apparatus may actively execute step 202, or may passively execute step 202, which is not limited in this embodiment of the application.
In one example, in a case where the image processing apparatus actively performs step 202, the image processing apparatus may directly perform step 202 after acquiring the first image.
In a second example, in a case where the image processing apparatus passively performs step 202, the image processing apparatus may, after acquiring the first image, determine whether a first input from the user is received, and perform step 202 after receiving the first input from the user.
For example, the first input may be: the click input of the user to the screen, or the voice instruction input by the user, or the specific gesture input by the user may be specifically determined according to the actual use requirement, which is not limited in the embodiment of the present application.
The specific gesture in the embodiment of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
It should be noted that, in the case where the target operation is to increase the luminance value of the target region and decrease the luminance value of the background region, there is no fixed order between increasing the luminance value of the target region and decreasing the luminance value of the background region. For example, the image processing apparatus may decrease the luminance value of the background region after increasing the luminance value of the target region, may decrease the luminance value of the background region before increasing the luminance value of the target region, or may decrease the luminance value of the background region while increasing the luminance value of the target region, which is not limited in the embodiment of the present application.
Step 203: the image processing device fuses the background area and the first area with the reduced brightness value to obtain a second image.
The first region is the target region or the target region with increased brightness value.
For example, the fusing of the background region with the reduced brightness value and the first region in step 203 may be understood as: fusing the background region with the reduced brightness value and the target region with the increased brightness value, or fusing the target region and the background region with the reduced brightness value.
In the embodiment of the present application, the background region and the target region that the image processing apparatus fuses are related to the above-described target operation. For example, in the case where the target operation is to reduce the luminance value of the background region, the image processing apparatus fuses the target region and the background region with reduced luminance; in a case where the above-described target operation is to increase the luminance value of the above-described target region and decrease the luminance value of the above-described background region, the image processing apparatus fuses the background region whose luminance is decreased and the target region whose luminance is increased.
In an example, the above-mentioned fusing the background region with reduced brightness and the target region with increased brightness may be implemented by fusing the first image with reduced brightness and the first image with increased brightness.
Optionally, in this embodiment of the application, when the first region is a target region with an increased brightness value, the method may further include the following step 204:
step 204: the image processing device acquires mask information of the first image according to the first image.
The mask information is used to indicate the target area and the background area.
For example, the image processing apparatus may use the above-mentioned open-source PoolNet to detect the saliency of the image to obtain a mask (i.e. the above-mentioned mask information), and the image processing apparatus may distinguish a main object (which may be simply referred to as a subject) and a background in the image according to the mask. Then, the image processing apparatus may perform image blurring processing on the output mask with a large radius filter to soften the edge, resulting in a soft edge mask.
Based on the step 204, the step 203 may specifically include the following steps a1 to A3:
step A1: the image processing device obtains a background ratio image according to the background area with the reduced brightness value and the first image, and obtains a target ratio image according to the target area with the increased brightness value and the first image.
The image processing apparatus may obtain the luminance value matrix of the background ratio image by dividing the luminance value matrix of the first image with the reduced luminance value by the luminance value matrix of the first image. The image processing apparatus may obtain the luminance value matrix of the target ratio image by dividing the luminance value matrix of the first image with the increased luminance value by the luminance value matrix of the first image.
Step A2: and the image processing device fuses the background ratio image and the target ratio image according to the mask information to obtain a target brightness image.
Step A3: the image processing device obtains a second image according to the target brightness image and the first image.
In one example, the image processing apparatus may multiply the luminance value matrix of the target luminance image by the normalized luminance value matrix of the first image, and multiply the result by 255 to obtain the luminance value matrix of the second image.
Illustratively, the specific fusion process may be as follows:
first, the image processing apparatus may normalize the original image, and a specific formula (5) may be expressed as:
Iinput_normal=input/255 (5)
wherein input represents the luminance value matrix of the input original image, and Iinput_normal represents the normalized original image.
Secondly, the image processing apparatus may divide the target region with the increased luminance value by the original image, and clip the result to a first preset interval. It should be noted that clipping can prevent brightness overflow. For example, the brightness ratio may be clipped to a value between 1 and 4; in practice, a 4-times difference in brightness is sufficiently large. The specific formula (6) can be expressed as:
Hp=clip(Ip/Iinput_normal, 1, 4) (6)
wherein clip represents the clipping operation, and Hp represents the luminance value matrix of the image with the increased luminance value divided by the original image (i.e., the target ratio image).
For example, the image processing apparatus may divide the background region with the reduced brightness value by the original image, and perform the clipping according to a second preset interval. For example, it may be truncated to between 0 and 1. The specific equation (7) can be expressed as:
Hb=clip(Ib/Iinput_normal, 0, 1) (7)
wherein clip represents the clipping operation, and Hb represents the luminance value matrix of the image with the reduced luminance value divided by the original image (i.e., the background ratio image).
Then, the image processing apparatus may obtain a luminance map in which the target region and the background region are fused (i.e., the above-mentioned target luminance image) by alpha blending according to the above-mentioned soft edge mask, and the specific formula (8) can be expressed as:
Hfusion=Hp*mask+Hb*(1-mask) (8)
wherein Hfusion represents the luminance value matrix of the luminance map in which the target region and the background region are fused, and mask represents the soft edge mask.
Then, the image processing apparatus may obtain the final fused image (i.e., the second image) according to formula (9), and the specific formula (9) can be expressed as:
Ioutput=clip(Iinput_normal*Hfusion, 0, 1)*255 (9)
wherein clip represents the clipping operation, and Ioutput represents the luminance value matrix of the output fused image (i.e., the second image).
Therefore, the image processing apparatus can preserve the luminance variation trend of the original image while achieving a smooth luminance transition in the edge regions between the target region and the background region; moreover, compared with conventional fusion algorithms such as the Poisson algorithm, the fusion can be performed directly on the original image, which is more natural and efficient.
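The ratio-image construction and alpha fusion of formulas (5) to (9) can be sketched in Python as follows. This is a hedged example assuming the brightened and darkened luminance matrices from the sketches above; the clip intervals [1, 4] and [0, 1] are the example preset intervals mentioned in the text, and the epsilon guard is an added assumption to avoid division by zero.

import numpy as np

def fuse_subject_highlight(luma: np.ndarray, soft_mask: np.ndarray,
                           brightened: np.ndarray, darkened: np.ndarray) -> np.ndarray:
    # luma: original luminance matrix, values 0-255; soft_mask: soft edge mask in [0, 1].
    norm = luma.astype(np.float32) / 255.0                        # formula (5)
    h_p = np.clip(brightened / np.maximum(norm, 1e-6), 1.0, 4.0)  # target ratio image, formula (6)
    h_b = np.clip(darkened / np.maximum(norm, 1e-6), 0.0, 1.0)    # background ratio image, formula (7)
    h_fusion = h_p * soft_mask + h_b * (1.0 - soft_mask)          # target luminance image, formula (8)
    output = np.clip(norm * h_fusion, 0.0, 1.0) * 255.0           # second image, formula (9)
    return output.astype(np.uint8)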
In the case where the image processing apparatus is a network device, the image processing apparatus may return the fused image Ioutput to the mobile terminal device, so that the user can conveniently perform the next operation.
Optionally, in this embodiment of the present application, before the target operation is executed in step 202, the method may further include the following step 205:
step 205: the image processing apparatus determines whether the target area satisfies a second condition in a case where the first image does not satisfy the first condition.
Wherein the second condition is that the average value of the brightness of the target area is less than or equal to a third threshold (e.g., 128).
The image processing apparatus may increase the luminance value of the target region when the target region satisfies the second condition, and may not process the target region when the target region does not satisfy the second condition.
Alternatively, in the embodiment of the present application, the image processing apparatus may determine the specific target operation according to whether the target region satisfies the second condition, which may include at least the following two cases.
In the first case:
illustratively, based on step 205, the executing target operation in step 202 may specifically include the following step 202 a:
step 202 a: the image processing apparatus increases the luminance value of the target region and decreases the luminance value of the background region in a case where the target region satisfies the second condition.
Based on step 202a, step 203 may specifically include the following step 203 a:
step 203 a: and the image processing device fuses the target area with the increased brightness value and the background area with the decreased brightness value to obtain a second image.
In one example, the step 203a may be specifically implemented by fusing the first image with the increased brightness value and the first image with the decreased brightness value. It should be noted that, for the specific fusion process, reference may be made to the above-described process in which the image processing apparatus fuses the first image with the increased brightness value and the first image with the decreased brightness value, and details are not described here again.
In the second case:
illustratively, based on step 205, the executing target operation in step 202 may specifically include the following step 202 b:
step 202 b: the image processing apparatus reduces the luminance value of the background area in a case where the target area does not satisfy the second condition.
Based on step 202b, step 203 may specifically include the following step 203 b:
step 203 b: and the image processing device fuses the target area and the background area with the reduced brightness value to obtain a second image.
Illustratively, the step 203b may be specifically implemented by fusing the first image and the first image with the reduced brightness value. It should be noted that, for the specific fusion process, reference may be made to the above-described process in which the image processing apparatus fuses the first image with the increased brightness value and the first image with the decreased brightness value, and details are not described here again.
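Putting the two cases together, a minimal end-to-end sketch, reusing the hypothetical helpers from the earlier sketches, might look like this; the threshold 128 is the example third threshold mentioned above, and the function name is an assumption.

import numpy as np

def highlight_subject(luma: np.ndarray, soft_mask: np.ndarray,
                      third_threshold: float = 128.0) -> np.ndarray:
    subject_mean = (luma * soft_mask).sum() / max(soft_mask.sum(), 1e-6)
    if subject_mean <= third_threshold:
        # Case one: the subject is dark enough, so brighten it and darken the background.
        brightened = brighten_nonlinear(luma)
    else:
        # Case two: leave the subject unchanged and only darken the background
        # (the ratio image H_p then stays at 1 for the subject).
        brightened = luma.astype(np.float32) / 255.0
    darkened = darken_nonlinear(luma)
    return fuse_subject_highlight(luma, soft_mask, brightened, darkened)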
In the image processing method provided by the embodiment of the application, the image processing apparatus may first acquire a first image including a background region and a target region; then, in a case where the first image does not satisfy a first condition, the image processing apparatus may perform a target operation, the target operation including any one of: reducing the brightness value of the background region; or increasing the brightness value of the target region and reducing the brightness value of the background region; finally, the image processing apparatus may fuse the background region with the reduced brightness value and a first region to obtain a second image, where the first region is the target region or the target region with the increased brightness value. With the above arrangement, after acquiring the first image, the image processing apparatus may first determine whether the first image satisfies the first condition. Then, in the case where it is determined that the first image does not satisfy the first condition, the image processing apparatus may perform subject-highlighting processing on the first image, for example, reducing the brightness value of the background region of the first image, which is equivalent to relatively increasing the brightness of the target region, or increasing the brightness value of the target region while reducing the brightness value of the background region. Finally, the image processing apparatus may fuse the background region with the reduced brightness value and the target region, or fuse the background region with the reduced brightness value and the target region with the increased brightness value, to obtain the second image. In summary, because the brightness value of the target region is increased or the brightness value of the background region is reduced, the brightness of the target region is higher than that of the background region, and the user can visually notice the target region more easily; thus, the second image obtained after the brightness adjustment highlights the subject better.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
Fig. 2 is a schematic diagram of a possible structure of an image processing apparatus for implementing the embodiment of the present application, and as shown in fig. 2, the image processing apparatus 300 includes: an obtaining module 301, an executing module 302, and a fusing module 303, wherein: an obtaining module 301, configured to obtain a first image, where the first image includes a background area and a target area; an executing module 302, configured to, in a case where the first image acquired by the acquiring module 301 does not satisfy the first condition, execute a target operation, where the target operation includes any one of: reducing the brightness value of the background area; increasing the brightness value of the target area and decreasing the brightness value of the background area; a fusion module 303, configured to fuse the background region with the decreased brightness value obtained by the execution module 302 and a first region to obtain a second image, where the first region is the target region or the target region with the increased brightness value.
Optionally, the first region is the target region with the increased brightness value; the obtaining module 301 is further configured to obtain mask information of the first image according to the first image, where the mask information is used to indicate the target region and the background region; the execution module 302 is further configured to obtain a background ratio image according to the background region with the decreased brightness value and the first image, and obtain a target ratio image according to the target region with the increased brightness value and the first image; the fusion module 303 is specifically configured to fuse, according to the mask information obtained by the obtaining module 301, the background ratio image and the target ratio image obtained by the execution module 302 to obtain a target luminance image; the execution module 302 is further configured to obtain a second image according to the target luminance image obtained by the fusion module 303 and the first image.
Optionally, the first condition includes any one of: the difference value between the brightness mean value of the target area and the brightness mean value of the background area is larger than or equal to a first threshold value, and the brightness mean value of the background area is smaller than or equal to a second threshold value.
Alternatively, as shown in fig. 2, the image processing apparatus 300 further includes: a determination module 304; a determining module 304, configured to determine whether the target area satisfies a second condition that a mean luminance value of the target area is less than or equal to a third threshold, if the first image acquired by the acquiring module 301 does not satisfy the first condition; an executing module 302, configured to, in a case that the determining module 304 determines that the target region meets the second condition, increase a brightness value of the target region and decrease a brightness value of the background region; and in the event that the determination module 304 determines that the target region does not satisfy the second condition, reducing the luminance value of the background region; the fusion module 303 is specifically configured to fuse the target region and the background region with the reduced brightness value.
It should be noted that, as shown in fig. 2, modules that are necessarily included in the image processing apparatus 300 are indicated by solid line boxes, such as the acquisition module 301; modules that may or may not be included in the image processing apparatus 300 are illustrated with dashed boxes, such as the determination module 304.
According to the image processing apparatus provided by the embodiment of the application, the image processing apparatus may first acquire a first image including a background region and a target region; then, in a case where the first image does not satisfy a first condition, the image processing apparatus may perform a target operation, the target operation including any one of: reducing the brightness value of the background region; or increasing the brightness value of the target region and reducing the brightness value of the background region; finally, the image processing apparatus may fuse the background region with the reduced brightness value and a first region to obtain a second image, where the first region is the target region or the target region with the increased brightness value. With the above arrangement, after acquiring the first image, the image processing apparatus may first determine whether the first image satisfies the first condition. Then, in the case where it is determined that the first image does not satisfy the first condition, the image processing apparatus may perform subject-highlighting processing on the first image, for example, reducing the brightness value of the background region of the first image, which is equivalent to relatively increasing the brightness of the target region, or increasing the brightness value of the target region while reducing the brightness value of the background region. Finally, the image processing apparatus may fuse the background region with the reduced brightness value and the target region, or fuse the background region with the reduced brightness value and the target region with the increased brightness value, to obtain the second image. In summary, because the brightness value of the target region is increased or the brightness value of the background region is reduced, the brightness of the target region is higher than that of the background region, and the user can visually notice the target region more easily; thus, the second image obtained after the brightness adjustment highlights the subject better.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 3, an electronic device 400 is further provided in this embodiment of the present application, and includes a processor 401, a memory 402, and a program or an instruction stored in the memory 402 and executable on the processor 401, where the program or the instruction is executed by the processor 401 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The input unit 104 is configured to acquire a first image, where the first image includes a background area and a target area; a processor 110, configured to, in a case where the first image acquired by the input unit 104 does not satisfy the first condition, perform a target operation, where the target operation includes any one of: reducing the brightness value of the background area; increasing the brightness value of the target area and decreasing the brightness value of the background area; and fusing the background area with the reduced brightness value and a first area to obtain a second image, wherein the first area is the target area or the target area with the increased brightness value.
Optionally, the first region is a target region with an increased brightness value; the processor 110 is further configured to obtain mask information of the first image according to the first image, where the mask information is used to indicate the target region and the background region; the background ratio image is obtained according to the background area with the reduced brightness value and the first image, and the target ratio image is obtained according to the target area with the increased brightness value and the first image; fusing the background ratio image and the target ratio image according to the mask information to obtain a target brightness image; and obtaining a second image according to the target brightness image and the first image.
Optionally, the first condition includes any one of: the difference value between the brightness mean value of the target area and the brightness mean value of the background area is larger than or equal to a first threshold value, and the brightness mean value of the background area is smaller than or equal to a second threshold value.
Optionally, the processor 110 is further configured to determine whether the target area satisfies a second condition if the first image acquired by the input unit 104 does not satisfy the first condition, where the second condition is that the average value of the brightness of the target area is less than or equal to a third threshold; and increasing the brightness value of the target area and decreasing the brightness value of the background area when the target area satisfies the second condition; in the case where the target area does not satisfy the second condition, the luminance value of the background area is reduced.
According to the electronic device provided by the embodiment of the application, the electronic device may first acquire a first image including a background region and a target region; then, in a case where the first image does not satisfy a first condition, the electronic device may perform a target operation, the target operation including any one of: reducing the brightness value of the background region; or increasing the brightness value of the target region and reducing the brightness value of the background region; finally, the electronic device may fuse the background region with the reduced brightness value and a first region to obtain a second image, where the first region is the target region or the target region with the increased brightness value. With this scheme, after acquiring the first image, the electronic device may first determine whether the first image satisfies the first condition. Then, in the case where it is determined that the first image does not satisfy the first condition, the electronic device may perform subject-highlighting processing on the first image, for example, reducing the brightness value of the background region of the first image, which is equivalent to relatively increasing the brightness of the target region, or increasing the brightness value of the target region while reducing the brightness value of the background region. Finally, the electronic device may fuse the background region with the reduced brightness value and the target region, or fuse the background region with the reduced brightness value and the target region with the increased brightness value, to obtain the second image. In summary, because the brightness value of the target region is increased or the brightness value of the background region is reduced, the brightness of the target region is higher than that of the background region, and the user can visually notice the target region more easily; thus, the second image obtained after the brightness adjustment highlights the subject better.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a first image, wherein the first image comprises a background area and a target area;
in a case where the first image does not satisfy a first condition, performing a target operation, the target operation including any one of: reducing the brightness value of the background area; increasing the brightness value of the target area and decreasing the brightness value of the background area;
and fusing the background area with the reduced brightness value and a first area to obtain a second image, wherein the first area is the target area or the target area with the increased brightness value.
2. The method according to claim 1, wherein the first region is the target region with an increased brightness value; the method further comprises the following steps:
according to the first image, obtaining mask information of the first image, wherein the mask information is used for indicating the target area and the background area;
wherein the fusing the background area with the reduced brightness value and the first area to obtain a second image comprises the following steps:
obtaining a background ratio image according to the background area with the reduced brightness value and the first image, and obtaining a target ratio image according to the target area with the increased brightness value and the first image;
fusing the background ratio image and the target ratio image according to the mask information to obtain a target brightness image;
and obtaining a second image according to the target brightness image and the first image.
3. The method of claim 1, wherein the first condition comprises any one of: the difference value between the brightness mean value of the target area and the brightness mean value of the background area is larger than or equal to a first threshold value, and the brightness mean value of the background area is smaller than or equal to a second threshold value.
4. The method of any of claims 1 to 3, wherein prior to performing the target operation, the method further comprises:
in a case where the first image does not satisfy the first condition, determining whether the target area satisfies a second condition, the second condition being that the brightness mean value of the target area is less than or equal to a third threshold value;
the executing the target operation comprises:
increasing the brightness value of the target area and decreasing the brightness value of the background area if the target area satisfies the second condition;
reducing the brightness value of the background area in a case where the target area does not satisfy the second condition.
5. An image processing apparatus characterized by comprising: the system comprises an acquisition module, an execution module and a fusion module;
the acquisition module is used for acquiring a first image, and the first image comprises a background area and a target area;
the execution module is configured to execute a target operation when the first image acquired by the acquisition module does not satisfy a first condition, where the target operation includes any one of: reducing the brightness value of the background area; increasing the brightness value of the target area and decreasing the brightness value of the background area;
the fusion module is configured to fuse the background area with the decreased brightness value obtained by the execution module and a first area to obtain a second image, where the first area is the target area or the target area with the increased brightness value.
6. The apparatus according to claim 5, wherein the first area is the target area with the increased brightness value;
the acquisition module is further configured to obtain mask information of the first image according to the first image, where the mask information is used to indicate the target area and the background area;
the execution module is further configured to obtain a background ratio image according to the background area with the decreased brightness value and the first image, and obtain a target ratio image according to the target area with the increased brightness value and the first image;
the fusion module is specifically configured to fuse the background ratio image and the target ratio image obtained by the execution module according to the mask information obtained by the acquisition module to obtain a target brightness image;
the execution module is further configured to obtain a second image according to the target brightness image obtained by the fusion module and the first image.
7. The image processing apparatus according to claim 5, wherein the first condition includes any one of: the difference value between the brightness mean value of the target area and the brightness mean value of the background area is larger than or equal to a first threshold value, and the brightness mean value of the background area is smaller than or equal to a second threshold value.
8. The image processing apparatus according to any one of claims 5 to 7, characterized by further comprising: a determination module;
the determination module is configured to determine whether the target area satisfies a second condition in a case where the first image acquired by the acquisition module does not satisfy the first condition, where the second condition is that the brightness mean value of the target area is less than or equal to a third threshold value;
the execution module is specifically configured to increase the brightness value of the target area and decrease the brightness value of the background area in a case where the determination module determines that the target area satisfies the second condition; and decrease the brightness value of the background area in a case where the determination module determines that the target area does not satisfy the second condition.
9. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 4.
10. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 4.
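For readers who want to experiment with the claimed pipeline, the following Python sketch illustrates one way the method of claims 1 to 4 could be realized. It is not the implementation disclosed in the embodiments: the function and parameter names (enhance_backlit_image, bg_gain, fg_gain), the numeric thresholds, the Rec. 601 luma approximation of brightness, and the assumption that the mask information is already available as a boolean array are illustrative choices made here, not taken from the application.

```python
import numpy as np


def luminance(image_rgb):
    """Approximate per-pixel brightness using Rec. 601 luma weights."""
    return (0.299 * image_rgb[..., 0]
            + 0.587 * image_rgb[..., 1]
            + 0.114 * image_rgb[..., 2])


def enhance_backlit_image(first_image, mask,
                          first_threshold=60.0,   # claim 3: contrast threshold (assumed value)
                          second_threshold=50.0,  # claim 3: dark-background threshold (assumed value)
                          third_threshold=80.0,   # claim 4: dark-target threshold (assumed value)
                          bg_gain=0.7,            # assumed factor for "reducing" background brightness
                          fg_gain=1.3):           # assumed factor for "increasing" target brightness
    """Sketch of the processing described in claims 1-4.

    first_image: RGB array with values in [0, 255]
    mask:        boolean array (claim 2's mask information), True on the target area
    """
    first_image = first_image.astype(np.float64)
    luma = luminance(first_image)
    target_mean = luma[mask].mean()
    background_mean = luma[~mask].mean()

    # First condition (claim 3): strong target/background contrast,
    # or a background that is already dark enough.
    first_condition = (target_mean - background_mean >= first_threshold
                       or background_mean <= second_threshold)
    if first_condition:
        return first_image  # first condition satisfied: no target operation (claim 1)

    # Second condition (claim 4): the target area itself is too dark.
    second_condition = target_mean <= third_threshold

    # Target operation (claims 1 and 4): always reduce background brightness;
    # additionally increase target brightness when the second condition holds.
    reduced_background = np.clip(luma * bg_gain, 0, 255)
    increased_target = np.clip(luma * fg_gain, 0, 255) if second_condition else luma

    # Ratio images (claim 2): adjusted brightness relative to the first image.
    eps = 1e-6
    background_ratio = reduced_background / (luma + eps)
    target_ratio = increased_target / (luma + eps)

    # Fuse the two ratio images with the mask to obtain the target brightness image.
    fused_ratio = np.where(mask, target_ratio, background_ratio)
    target_brightness = luma * fused_ratio

    # Second image (claim 2): rescale the first image so its brightness
    # follows the target brightness image.
    gain = target_brightness / (luma + eps)
    second_image = np.clip(first_image * gain[..., None], 0, 255)
    return second_image
```

If the mask is available as a soft (fractional) map rather than a boolean one, the np.where fusion can be replaced by a weighted blend of the two ratio images, which avoids a visible seam at the boundary between the target area and the background area.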
CN202110692892.6A 2021-06-22 2021-06-22 Image processing method and device and electronic equipment Pending CN113487700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110692892.6A CN113487700A (en) 2021-06-22 2021-06-22 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110692892.6A CN113487700A (en) 2021-06-22 2021-06-22 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113487700A true CN113487700A (en) 2021-10-08

Family

ID=77935614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110692892.6A Pending CN113487700A (en) 2021-06-22 2021-06-22 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113487700A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130016902A1 (en) * 2011-07-13 2013-01-17 Ricoh Company, Ltd. Image data processing device, image forming apparatus, and recording medium
CN105227855A (en) * 2015-09-28 2016-01-06 广东欧珀移动通信有限公司 A kind of image processing method and terminal
CN108446705A (en) * 2017-02-16 2018-08-24 华为技术有限公司 The method and apparatus of image procossing
CN108734676A (en) * 2018-05-21 2018-11-02 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN110766621A (en) * 2019-10-09 2020-02-07 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN113126862B (en) Screen capture method and device, electronic equipment and readable storage medium
CN112947824A (en) Display parameter adjusting method and device, electronic equipment and medium
WO2023056950A1 (en) Image processing method and electronic device
CN112911147A (en) Display control method, display control device and electronic equipment
CN112367559A (en) Video display method and device, electronic equipment, server and storage medium
CN111866378A (en) Image processing method, apparatus, device and medium
CN111638839A (en) Screen capturing method and device and electronic equipment
CN110618852A (en) View processing method, view processing device and terminal equipment
CN112367486B (en) Video processing method and device
CN112399010B (en) Page display method and device and electronic equipment
CN112511890A (en) Video image processing method and device and electronic equipment
CN111835937A (en) Image processing method and device and electronic equipment
CN111968605A (en) Exposure adjusting method and device
CN111724455A (en) Image processing method and electronic device
CN113393391B (en) Image enhancement method, image enhancement device, electronic apparatus, and storage medium
CN115562539A (en) Control display method and device, electronic equipment and readable storage medium
CN112383708B (en) Shooting method and device, electronic equipment and readable storage medium
CN113342222B (en) Application classification method and device and electronic equipment
CN111861965A (en) Image backlight detection method, image backlight detection device and terminal equipment
CN112529766B (en) Image processing method and device and electronic equipment
CN114518859A (en) Display control method, display control device, electronic equipment and storage medium
CN113487700A (en) Image processing method and device and electronic equipment
CN113873168A (en) Shooting method, shooting device, electronic equipment and medium
CN113473012A (en) Virtualization processing method and device and electronic equipment
CN113962840A (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination