CN110473156B - Image information processing method and device, storage medium and electronic equipment - Google Patents

Image information processing method and device, storage medium and electronic equipment

Info

Publication number
CN110473156B
CN110473156B
Authority
CN
China
Prior art keywords
object region
tone
contrast
difference
tone mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910741627.5A
Other languages
Chinese (zh)
Other versions
CN110473156A (en)
Inventor
贾玉虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910741627.5A
Publication of CN110473156A
Application granted
Publication of CN110473156B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20208 High dynamic range [HDR] image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses a method for processing image information, which includes the following steps: acquiring an image to be processed and performing object recognition to obtain a first object region and a second object region that are associated with each other; determining corresponding first tone adjustment information based on the first object region and the second object region; performing tone mapping processing on the first object region to generate a tone-mapped first target object region; and calculating second tone adjustment information between the first target object region and the first object region, and determining a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information. A tone mapping processing strategy suited to the second object region is thus determined from the first tone adjustment information between the associated first and second object regions before tone mapping and from the second tone adjustment information of the first object region before and after tone mapping, so that the processing effect of the associated regions remains realistic and the efficiency of image information processing is greatly improved.

Description

Image information processing method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for processing image information, a storage medium, and an electronic device.
Background
With the continuing development of electronic technology, the cameras and image processing functions of electronic devices such as mobile phones have become increasingly powerful, and users' expectations for image processing quality have risen accordingly, particularly for high dynamic range (HDR) imaging.
At present, a tone mapping method is generally used to process a high dynamic range image so that the overall visual effect of the image is improved. However, applying a single uniform mapping to the whole high dynamic range image can leave different regions of the processed image with inconsistent processing effects, which affects processing efficiency.
Disclosure of Invention
Embodiments of the application provide a method and an apparatus for processing image information, a storage medium, and an electronic device, which can improve the efficiency of image information processing.
In a first aspect, an embodiment of the present application provides a method for processing image information, including:
acquiring an image to be processed, and performing object recognition on the image to be processed to obtain a first object region and a second object region that are associated with each other;
determining corresponding first tone adjustment information based on the first object region and the second object region;
performing tone mapping processing on the first object region to generate a tone-mapped first target object region;
and calculating second tone adjustment information between the first target object region and the first object region, and determining a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information.
In a second aspect, an embodiment of the present application provides an apparatus for processing image information, including:
an identification unit, configured to acquire an image to be processed and perform object recognition on the image to be processed to obtain a first object region and a second object region that are associated with each other;
a determining unit, configured to determine corresponding first tone adjustment information based on the first object region and the second object region;
a processing unit, configured to perform tone mapping processing on the first object region and generate a tone-mapped first target object region;
and a calculating unit, configured to calculate second tone adjustment information between the first target object region and the first object region, and to determine a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information.
In a third aspect, an embodiment of the present application provides a storage medium on which a computer program is stored; when the computer program is run on a computer, it causes the computer to execute the image information processing method provided in any embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device including a processor and a memory, where the memory stores a computer program and the processor is configured to execute the image information processing method provided in any embodiment of the present application by calling the computer program.
The method obtains an image to be processed and performs object recognition on it to obtain a first object region and a second object region that are associated with each other; determines corresponding first tone adjustment information based on the first object region and the second object region; performs tone mapping processing on the first object region to generate a tone-mapped first target object region; and calculates second tone adjustment information between the first target object region and the first object region, determining a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information. A tone mapping processing strategy suited to the second object region is thus determined from the first tone adjustment information between the associated first and second object regions before tone mapping and from the second tone adjustment information of the first object region before and after tone mapping, so that the processing effect of the associated regions remains realistic and the efficiency of image information processing is greatly improved.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a method for processing image information according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of a method for processing image information according to an embodiment of the present application.
Fig. 3 is a scene schematic diagram of a method for processing image information according to an embodiment of the present application.
Fig. 4 is a schematic block diagram of an apparatus for processing image information according to an embodiment of the present disclosure.
Fig. 5 is a schematic block diagram of another apparatus for processing image information according to an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 7 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
The term "module" as used herein may be considered a software object executing on the computing system. The various components, modules, engines, and services described herein may be viewed as objects implemented on the computing system. The apparatus and method described herein are preferably implemented in software, but can be implemented in hardware without departing from the scope of the present application.
The embodiment of the present application provides a method for processing image information, and an execution subject of the method for processing image information may be a processing apparatus for image information provided in the embodiment of the present application, or an electronic device integrated with the processing apparatus for image information, where the processing apparatus for image information may be implemented in a hardware or software manner. The electronic device may be a smart phone, a tablet computer, a Personal Digital Assistant (PDA), or the like.
Detailed descriptions are given below.
An embodiment of the present application provides a method for processing image information, as shown in fig. 1, fig. 1 is a schematic flowchart of the method for processing image information provided in the embodiment of the present application, and the method for processing image information may include the following steps:
In step S101, an image to be processed is acquired, and object recognition is performed on the image to be processed to obtain a first object region and a second object region that are associated with each other.
The format of the image to be processed may be bitmap (BMP), Joint Photographic Experts Group (JPEG), or the like. The image to be processed may be a high dynamic range image, which provides a wider dynamic range and more image detail than an ordinary image. However, because the gray-level distribution of a high dynamic range image is very uneven, with some pixels too bright and others too dark, the image colors need to be mapped by tone mapping, which maps the color values of the image from the high dynamic range to a low dynamic range so that the color distribution becomes uniform, the image looks more comfortable and the overall presentation is better; dimming overly bright pixels is the most common operation in this process.
However, existing tone mapping approaches protect only a preset object region, such as a face object region, to prevent the face from being darkened too much during tone mapping and looking unnatural. Protecting only the face object region means that objects around it, such as hair, are still processed in the original tone mapping manner. When the hair is black, darkening it does not clash noticeably with the face object region; but hair is increasingly dyed in bright, non-black colors, and many people, for example in Europe and America, naturally have non-black hair. In such cases, if the face object region is protected while the associated hair object region is still processed in the original tone mapping manner, the difference between the protected face object region and the normally tone-mapped hair object region becomes too large, and the whole portrait after tone processing looks unnatural.
Optionally, image recognition may be performed on the image to be processed by an image recognition algorithm, for example a convolutional neural network (CNN) algorithm, to identify the different objects in the image and to determine from them a first object region and a second object region that are associated with each other. For example, a face object region and a hair object region are strongly correlated, as are a tree-trunk object region and a leaf object region. Associated first and second object regions need to be handled jointly and reasonably during tone mapping to avoid an unrealistic processing result.
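The embodiments do not fix a particular recognition model. For illustration only, the following minimal Python sketch locates a face with OpenCV's Haar cascade detector and assumes that a band directly above the detected face box approximates the associated hair region; the detect_regions helper and the hair heuristic are assumptions, not the CNN-based recognition described above.

```python
# Illustrative sketch (not the claimed CNN recognizer): detect a face box and
# assume the band above it approximates the associated hair region.
import cv2

def detect_regions(image_bgr):
    """Return (face_box, hair_box) as (x, y, w, h) tuples, or None if no face is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                        # first detected face
    hair_h = h // 2                              # crude assumption: hair sits above the face
    hair_box = (x, max(0, y - hair_h), w, hair_h)
    return (x, y, w, h), hair_box
```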
In step S102, corresponding first tone adjustment information is determined based on the first object region and the second object region.
Corresponding first tone adjustment information may be calculated based on the first object region and the second object region. The first tone adjustment information is the difference range between the presentation of the first object region and that of the second object region before any tone mapping is performed, for example the differences in brightness and contrast between the two regions; a corresponding difference range is determined from these differences, and that range constitutes the first tone adjustment information. It represents the difference in appearance between the first object region and the second object region in the original image, and this difference needs to be preserved even after tone mapping so that the image retains its realism.
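The embodiments do not prescribe how brightness and contrast are measured. As one possible reading, the sketch below takes the mean luma of a region as its brightness and the luma standard deviation as its contrast, and returns their absolute differences as the first tone adjustment information; these metric definitions are assumptions.

```python
# Sketch of the first tone adjustment information: per-region brightness and
# contrast and their absolute differences before any tone mapping is applied.
import numpy as np

def region_metrics(image_rgb, box):
    """Mean luma (brightness) and luma standard deviation (contrast) of a region."""
    x, y, w, h = box
    patch = image_rgb[y:y + h, x:x + w].astype(np.float32)
    luma = 0.299 * patch[..., 0] + 0.587 * patch[..., 1] + 0.114 * patch[..., 2]
    return float(luma.mean()), float(luma.std())

def first_adjustment_info(image_rgb, first_box, second_box):
    b1, c1 = region_metrics(image_rgb, first_box)    # first object region (e.g. face)
    b2, c2 = region_metrics(image_rgb, second_box)   # second object region (e.g. hair)
    return abs(b1 - b2), abs(c1 - c2)                # first brightness / contrast difference
```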
In step S103, tone mapping processing is performed on the first object region, and a first target object region after the tone mapping processing is generated.
A conventional tone mapping method adjusts the image according to per-pixel brightness and contrast, which easily prevents the brightness and contrast of objects in different regions of the image from being satisfactory at the same time. For example, in an image containing a portrait against a bright sky, mapping the image to a low dynamic range requires compressing pixels with high values; this reduces the dynamic range of the sky, but pixels with high values inside the portrait are compressed at the same time, so the portrait looks unreal and the local contrast between its parts becomes poor. For this reason the face object region is usually protected, which avoids the abnormal appearance caused by darkening the face too much during tone mapping.
Optionally, a region that needs a tone mapping protection mechanism, such as a face object region, is determined as the first object region, and tone mapping is performed on it under the protection mechanism to generate a tone-mapped first target object region.
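The embodiments leave the protection mechanism and the tone mapping operator unspecified. The sketch below therefore uses a simple global compression blended at reduced strength inside the protected region; the operator, the blending scheme and the 0.4 protection strength are all assumptions made for illustration.

```python
# Illustrative tone mapping with a protection mechanism: a simple global
# compression applied at full strength everywhere except the protected box,
# where a weaker blend is used so the face is not darkened as much.
import numpy as np

def tone_map_protected(image_rgb, protected_box, strength=1.0, protect_strength=0.4):
    img = image_rgb.astype(np.float32) / 255.0
    mapped = img / (1.0 + img)                       # compresses bright values
    weight = np.full(img.shape[:2], strength, dtype=np.float32)
    x, y, w, h = protected_box
    weight[y:y + h, x:x + w] = protect_strength      # weaker mapping inside the protected region
    out = img * (1.0 - weight[..., None]) + mapped * weight[..., None]
    return np.clip(out * 255.0, 0.0, 255.0).astype(np.uint8)
```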
In step S104, second tone adjustment information between the first target object region and the first object region is calculated, and a tone mapping processing strategy for the second object region is determined according to the first tone adjustment information and the second tone adjustment information.
The second tone adjustment information is based on the difference in presentation between the first target object region, obtained by tone mapping under the protection mechanism, and the original first object region that has not been tone mapped, for example the differences in their brightness and contrast. A corresponding difference range is determined from these differences, and that range constitutes the second tone adjustment information. It represents how much the appearance of the first object region changed when it was tone mapped under the protection mechanism, and the subsequent tone mapping of the second object region needs to follow a comparable degree of processing.
Further, a tone mapping processing strategy corresponding to the second object region is determined by combining the first tone adjustment information and the second tone adjustment information. After the second object region is tone mapped according to this strategy, its presentation both preserves the original difference in appearance between the first and second object regions in the image to be processed and matches the processing amplitude applied to the first object region, so that the tone-mapped first and second object regions meet the requirements of the real scene.
As can be seen from the above, the image information processing method provided by this embodiment acquires an image to be processed and performs object recognition to obtain a first object region and a second object region that are associated with each other; determines corresponding first tone adjustment information based on the first object region and the second object region; performs tone mapping processing on the first object region to generate a tone-mapped first target object region; and calculates second tone adjustment information between the first target object region and the first object region, determining a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information. A tone mapping processing strategy suited to the second object region is thus determined from the first tone adjustment information between the associated first and second object regions before tone mapping and from the second tone adjustment information of the first object region before and after tone mapping, so that the processing effect of the associated regions remains realistic and the efficiency of image information processing is greatly improved.
The method described in the above embodiments is further illustrated in detail by way of example.
Referring to fig. 2, fig. 2 is another schematic flow chart of a method for processing image information according to an embodiment of the present disclosure.
Specifically, the method comprises the following steps:
In step S201, an image to be processed is acquired, and key feature point information in the image to be processed is extracted.
It should be noted that, for better explaining the present application, the electronic device is exemplified by a mobile phone in the following.
The mobile phone obtains a high dynamic range image. As shown in fig. 3, the image to be processed 1 is a high dynamic range image that may contain both sky and a person; because of light refraction, the brightness and contrast of the sky in the captured image to be processed 1 are higher than the brightness and contrast of the person.
Optionally, the pixels of the image to be processed may be subjected to feature recognition one by one through a convolutional neural network according to a sequence from top to bottom, and key feature point information in the image to be processed is extracted.
In step S202, a face object region and a hair object region associated with each other are determined according to the key feature point information.
Because the features contained in a face object region are rich and distinctive, such as the key feature points of the eyes, mouth, eyebrows, nose and hair, the face object region and the associated hair object region can be determined from the key feature point information.
A conventional tone mapping method processes the brightness and contrast of all object regions of the image to be processed at the same time. As shown in fig. 3, the hair object region A, the face object region B and the sky object region C would be tone mapped together, with a bright sky as the background. Mapping the image to be processed 1 to a low dynamic range image then requires compressing pixels with high values; this reduces the dynamic range of the sky, but the pixel values inside the portrait are compressed to the same degree as the sky, so the portrait looks unreal and the local contrast is poor. The face is therefore usually protected and the face object region is tone mapped separately, while other regions such as the hair object region are still processed to the same degree as the sky. Because the face object region and the hair object region are strongly correlated, the resulting difference in brightness and contrast between them after tone mapping is large, and the portrait looks unrealistic.
In step S203, a first brightness value and a first contrast value of the face object region are obtained, and a second brightness value and a second contrast value of the hair object region are obtained.
A first brightness value and a first contrast value of the face object region are acquired before tone mapping, and a second brightness value and a second contrast value of the hair object region are acquired before tone mapping. Together these values characterize the difference in appearance between the face object region and the hair object region in the original image, which corresponds to a relatively realistic presentation of the portrait.
In step S204, a first luminance difference between the first luminance value and the second luminance value and a first contrast difference between the first contrast value and the second contrast value are calculated, and the first luminance difference and the first contrast difference are determined as the first tone adjustment information.
The first brightness difference between the first brightness value and the second brightness value, and the first contrast difference between the first contrast value and the second contrast value, are calculated. Both differences are taken as positive numbers; if a difference is negative, its absolute value is used.
Optionally, a first brightness difference range is determined from the first brightness difference and a first contrast difference range is determined from the first contrast difference. The size of each range can fluctuate according to the actual situation, for example by plus or minus 5. The corresponding first tone adjustment information is then determined from the first brightness difference range and the first contrast difference range; it represents the range of appearance differences between the face object region and the hair object region in the original image.
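A small sketch of turning a difference into the tolerance range described above; the plus or minus 5 tolerance is taken from the example in the preceding paragraph and is a configurable assumption, not a fixed value of the method.

```python
# Sketch of building a brightness or contrast difference range with a tolerance,
# using the +/-5 example tolerance mentioned above.
def difference_range(diff, tolerance=5.0):
    diff = abs(diff)                     # negative differences are taken as absolute values
    return max(0.0, diff - tolerance), diff + tolerance

# e.g. first_brightness_range = difference_range(first_brightness_diff)
#      first_contrast_range = difference_range(first_contrast_diff)
```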
In step S205, tone mapping processing is performed on the face object region, and a first target face object region after the tone mapping processing is generated.
As shown in fig. 3, a tone mapping process with a face protection mechanism is performed on the face object region B to generate a first target face object region after the tone mapping process.
In step S206, a third luminance value and a third contrast value of the first target face object region are acquired.
A third brightness value and a third contrast value of the first target face object region, obtained by tone mapping with the face protection mechanism, are acquired.
In step S207, a second luminance difference between the third luminance value and the first luminance value and a second contrast difference between the third contrast value and the first contrast value are calculated, and the second luminance difference and the second contrast difference are determined as the second tone adjustment information.
No matter how the high dynamic range image is tone mapped, the final result should still conform to the presentation rules of the original image. Therefore, a second brightness difference is calculated between the third brightness value of the target face object region after tone mapping with the face protection mechanism and the first brightness value of the face object region in the original image, and a second contrast difference is calculated between the third contrast value of the target face object region and the first contrast value of the face object region in the original image. Both differences are taken as positive numbers; if a difference is negative, its absolute value is used.
Optionally, a second brightness difference range may be determined from the second brightness difference and a second contrast difference range from the second contrast difference, each range fluctuating according to the actual situation, for example by plus or minus 5. The corresponding second tone adjustment information is determined from the second brightness difference range and the second contrast difference range; it represents the difference in appearance between the face object region in the original image and the target face object region after tone mapping under the face protection mechanism.
In step S208, a region tone contrast range is determined according to the first brightness difference and the first contrast difference, a face tone adjustment range is determined according to the second brightness difference and the second contrast difference, and a tone mapping processing strategy corresponding to the hair object region is determined by combining the region tone contrast range and the face tone adjustment range.
The first brightness difference range corresponding to the first brightness difference and the first contrast difference range corresponding to the first contrast difference are obtained, and from them a region tone contrast range between the face object region and the hair object region in the original image to be processed is determined. The region tone contrast range is the contrast range between the face object region and the hair object region of the original image, and the target face object region and the target hair object region obtained by subsequent tone mapping need to satisfy this range, so that the face and the hair keep their original relationship and remain realistic.
Further, a face tone adjustment range between the target face object region after tone mapping with the face protection mechanism and the face object region in the original image to be processed is determined from the second brightness difference range corresponding to the second brightness difference and the second contrast difference range corresponding to the second contrast difference. The face tone adjustment range is the adjustment range between the tone-mapped target face object region and the face object region of the original image; the hair object region in the original image and the target hair object region obtained by its subsequent tone mapping need to satisfy this range, so that the tone mapping effect on the hair object region stays close to the tone mapping effect obtained under the face protection mechanism.
Therefore, the tone mapping processing strategy for the hair object region is determined by combining the region tone contrast range and the face tone adjustment range, as sketched below. After the hair object region is tone mapped according to this strategy, the resulting target hair object region both preserves the original degree of difference between the face object region and the hair object region in the image to be processed and matches the adjustment produced by the face protection mechanism. The tone-mapped target face object region and target hair object region therefore look realistic, and the situation where the portrait looks unreal because the hair object region associated with a protected face object region differs too much from it is avoided.
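The embodiments describe this combination qualitatively and do not give a formula. The sketch below is one illustrative reconstruction: it shifts the hair's brightness and contrast by the same amount as the face and then clamps the result so that the face/hair gap stays within the region tone contrast range and the hair's own shift stays within the face tone adjustment range; the helper names and the clamping order are assumptions, not the claimed policy.

```python
# Illustrative reconstruction (not the claimed policy) of choosing target
# brightness and contrast for the hair region from the two ranges above.
import numpy as np

def clamp(value, lo, hi):
    return float(np.clip(value, lo, hi))

def hair_targets(face_before, hair_before, face_after, region_range, face_adjust_range):
    """face_before / hair_before / face_after are (brightness, contrast) pairs;
    region_range and face_adjust_range are ((lo, hi), (lo, hi)) per metric."""
    targets = []
    for i in (0, 1):                                  # 0 = brightness, 1 = contrast
        shift = face_after[i] - face_before[i]        # how much the face was adjusted
        candidate = hair_before[i] + shift            # start by shifting the hair the same amount
        gap_hi = region_range[i][1]                   # allowed face/hair gap (upper bound)
        candidate = clamp(candidate, face_after[i] - gap_hi, face_after[i] + gap_hi)
        adj_hi = face_adjust_range[i][1]              # allowed magnitude of the hair's own shift
        candidate = clamp(candidate, hair_before[i] - adj_hi, hair_before[i] + adj_hi)
        targets.append(candidate)
    return tuple(targets)                             # (target brightness, target contrast)
```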
As can be seen from the above, the image information processing method provided by this embodiment acquires an image to be processed and performs object recognition to obtain a first object region and a second object region that are associated with each other; determines corresponding first tone adjustment information based on the first object region and the second object region; performs tone mapping processing on the first object region to generate a tone-mapped first target object region; and calculates second tone adjustment information between the first target object region and the first object region, determining a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information. A tone mapping processing strategy suited to the second object region is thus determined from the first tone adjustment information between the associated first and second object regions before tone mapping and from the second tone adjustment information of the first object region before and after tone mapping, so that the processing effect of the associated regions remains realistic and the efficiency of image information processing is greatly improved.
In order to better implement the image information processing method provided by the embodiment of the present application, the embodiment of the present application further provides an apparatus based on the image information processing method. The terms are the same as those in the above-described image information processing method, and details of implementation may refer to the description in the method embodiment.
Referring to fig. 4, fig. 4 is a block diagram of an image information processing apparatus according to an embodiment of the present disclosure. Specifically, the image information processing apparatus 300 includes: an identification unit 31, a determining unit 32, a processing unit 33 and a calculating unit 34.
The identification unit 31 is configured to acquire an image to be processed, and perform object identification on the image to be processed to obtain a first object region and a second object region which are associated with each other.
The identification unit 31 may perform image recognition on the image to be processed by an image recognition algorithm, such as a convolutional neural network algorithm, to identify the different objects in the image and to determine from them a first object region and a second object region that are associated with each other; for example, a face object region and a hair object region are strongly correlated, as are a tree-trunk object region and a leaf object region. Associated first and second object regions need to be handled jointly and reasonably during tone mapping to avoid an unrealistic processing result.
In some embodiments, as shown in fig. 5, the identification unit 31 includes:
the extraction subunit 311 is configured to acquire an image to be processed, and extract key feature point information in the image to be processed;
a determining subunit 312, configured to determine, according to the key feature point information, a related face object region and a hair object region.
A determining unit 32, configured to determine corresponding first tone adjustment information based on the first object region and the second object region.
The determining unit 32 may calculate the corresponding first tone adjustment information based on the first object region and the second object region. The first tone adjustment information is the difference range between the presentation of the first object region and that of the second object region before any tone mapping is performed, for example the differences in brightness and contrast between the two regions; a corresponding difference range is determined from these differences, and that range constitutes the first tone adjustment information. It represents the difference in appearance between the first object region and the second object region in the original image, and this difference needs to be preserved even after tone mapping so that the image retains its realism.
In some embodiments, the determining unit 32 is specifically configured to obtain a first luminance value and a first contrast value of the face object region; obtain a second luminance value and a second contrast value of the hair object region; calculate a first luminance difference between the first luminance value and the second luminance value and a first contrast difference between the first contrast value and the second contrast value; and determine the first luminance difference and the first contrast difference as the first tone adjustment information.
A processing unit 33, configured to perform tone mapping processing on the first object region, and generate a first target object region after the tone mapping processing.
The processing unit 33 determines a region that needs a tone mapping protection mechanism, such as a face object region, as the first object region, and performs tone mapping on it under the protection mechanism to generate a tone-mapped first target object region.
In some embodiments, the processing unit 33 is specifically configured to perform tone mapping processing on the face object region to generate a first target face object region after the tone mapping processing.
The calculating unit 34 is configured to calculate second tone adjustment information between the first target object region and the first object region, and to determine a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information.
The calculating unit 34 bases the second tone adjustment information on the difference in presentation between the first target object region, obtained by tone mapping under the protection mechanism, and the original first object region that has not been tone mapped, for example the differences in their brightness and contrast. A corresponding difference range is determined from these differences, and that range constitutes the second tone adjustment information. It represents how much the appearance of the first object region changed when it was tone mapped under the protection mechanism, and the subsequent tone mapping of the second object region needs to follow a comparable degree of processing.
Further, the calculating unit 34 determines a tone mapping processing strategy corresponding to the second object region by combining the first tone adjustment information and the second tone adjustment information. After the second object region is tone mapped according to this strategy, its presentation both preserves the original difference in appearance between the first and second object regions in the image to be processed and matches the processing amplitude applied to the first object region, so that the tone-mapped first and second object regions meet the requirements of the real scene.
In some embodiments, the calculating unit 34 is specifically configured to: acquire a third brightness value and a third contrast value of the first target face object region; calculate a second luminance difference between the third luminance value and the first luminance value and a second contrast difference between the third contrast value and the first contrast value; determine the second luminance difference and the second contrast difference as the second tone adjustment information; determine a region tone contrast range according to the first brightness difference and the first contrast difference; determine a face tone adjustment range according to the second brightness difference and the second contrast difference; and determine a tone mapping processing strategy corresponding to the hair object region by combining the region tone contrast range and the face tone adjustment range.
As can be seen from the above, in the image information processing apparatus provided by this embodiment, the identification unit 31 acquires an image to be processed and performs object recognition to obtain a first object region and a second object region that are associated with each other; the determining unit 32 determines corresponding first tone adjustment information based on the first object region and the second object region; the processing unit 33 performs tone mapping processing on the first object region to generate a tone-mapped first target object region; and the calculating unit 34 calculates second tone adjustment information between the first target object region and the first object region and determines a tone mapping processing strategy for the second object region based on the first tone adjustment information and the second tone adjustment information. A tone mapping processing strategy suited to the second object region is thus determined from the first tone adjustment information between the associated first and second object regions before tone mapping and from the second tone adjustment information of the first object region before and after tone mapping, so that the processing effect of the associated regions remains realistic and the efficiency of image information processing is greatly improved.
The embodiment of the application also provides the electronic equipment. Referring to fig. 6, an electronic device 500 includes a processor 501 and a memory 502. The processor 501 is electrically connected to the memory 502.
The processor 501 is the control center of the electronic device 500; it connects the various parts of the whole electronic device using various interfaces and lines, performs the various functions of the electronic device 500 by running or loading the computer program stored in the memory 502, and calls and processes the data stored in the memory 502, thereby monitoring the electronic device 500 as a whole.
The memory 502 may be used to store software programs and modules, and the processor 501 implements various functional applications and the processing of image information by running the computer programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, a computer program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 502 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
In this embodiment, the processor 501 in the electronic device 500 loads instructions corresponding to the processes of one or more computer programs into the memory 502, and runs the computer programs stored in the memory 502 to implement various functions, as follows:
acquiring an image to be processed, and performing object recognition on the image to be processed to obtain a first object region and a second object region that are associated with each other;
determining corresponding first tone adjustment information based on the first object region and the second object region;
performing tone mapping processing on the first object region to generate a tone-mapped first target object region;
and calculating second tone adjustment information between the first target object region and the first object region, and determining a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information.
In some embodiments, the first object region is a face object region and the second object region is a hair object region, and when the image to be processed is subjected to object recognition to obtain the associated first object region and second object region, the processor 501 may specifically perform the following steps:
extracting key feature point information from the image to be processed;
and determining a face object region and a hair object region which are associated according to the key feature point information.
In some embodiments, when determining the corresponding first tone adjustment information based on the first object region and the second object region, the processor 501 may specifically perform the following steps:
acquiring a first brightness value and a first contrast value of a face object region;
acquiring a second brightness value and a second contrast value of the hair object area;
calculating a first luminance difference between the first luminance value and the second luminance value and a first contrast difference between the first contrast value and the second contrast value;
determining the first luminance difference and the first contrast difference as the first tone adjustment information.
In some embodiments, when calculating the second tone adjustment information between the first target object region and the first object region, the processor 501 may specifically perform the following steps:
acquiring a third brightness value and a third contrast value of the first target face object region;
calculating a second luminance difference between the third luminance value and the first luminance value and a second contrast difference between the third contrast value and the first contrast value;
determining the second luminance difference and the second contrast difference as the second tone adjustment information.
In some embodiments, when determining the tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information, the processor 501 may specifically perform the following steps:
determining a regional tone contrast range according to the first brightness difference and the first contrast difference;
determining a human face tone adjusting range according to the second brightness difference and the second contrast difference;
and determining a tone mapping processing strategy corresponding to the hair object region by combining the region tone contrast range and the face tone adjusting range.
In some embodiments, when performing the tone mapping process on the first object region to generate the tone-mapped first target object region, the processor 501 may specifically perform the following steps:
performing tone mapping processing on the face object region to generate a first target face object region after the tone mapping processing.
As can be seen from the above, the electronic device of the embodiment of the application acquires an image to be processed and performs object recognition to obtain a first object region and a second object region that are associated with each other; determines corresponding first tone adjustment information based on the first object region and the second object region; performs tone mapping processing on the first object region to generate a tone-mapped first target object region; and calculates second tone adjustment information between the first target object region and the first object region, determining a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information. A tone mapping processing strategy suited to the second object region is thus determined from the first tone adjustment information between the associated first and second object regions before tone mapping and from the second tone adjustment information of the first object region before and after tone mapping, so that the processing effect of the associated regions remains realistic and the efficiency of image information processing is greatly improved.
Referring to fig. 7, in some embodiments, the electronic device 500 may further include: a display 503, radio frequency circuitry 504, audio circuitry 505, and a power supply 506. The display 503, the rf circuit 504, the audio circuit 505, and the power source 506 are electrically connected to the processor 501.
The display 503 may be used to display information input by or provided to the user as well as various graphical user interfaces, which may be made up of graphics, text, icons, video, and any combination thereof. The display 503 may include a display panel; in some embodiments, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The rf circuit 504 may be used for transceiving rf signals to establish wireless communication with a network device or other terminals through wireless communication, and for transceiving signals with the network device or other terminals.
The audio circuit 505 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone.
The power source 506 may be used to power various components of the electronic device 500. In some embodiments, power supply 506 may be logically coupled to processor 501 through a power management system, such that functions of managing charging, discharging, and power consumption are performed through the power management system.
An embodiment of the present application further provides a storage medium storing a computer program which, when run on a computer, causes the computer to execute the image information processing method in any one of the above embodiments, for example: acquiring an image to be processed, and performing object recognition on the image to be processed to obtain a first object region and a second object region that are associated with each other; determining corresponding first tone adjustment information based on the first object region and the second object region; performing tone mapping processing on the first object region to generate a tone-mapped first target object region; and calculating second tone adjustment information between the first target object region and the first object region, and determining a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information.
In the embodiment of the present application, the storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be understood by a person skilled in the art that all or part of the process of the image information processing method of the embodiments of the present application can be implemented by a computer program controlling the relevant hardware. The computer program can be stored in a computer readable storage medium, such as a memory of an electronic device, and executed by at least one processor in the electronic device, and its execution can include the processes of the embodiments of the image information processing method. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
In the image information processing apparatus according to the embodiment of the present application, each functional module may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented as a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium such as a read-only memory, a magnetic or optical disk, or the like.
The foregoing describes in detail the image information processing method, apparatus, storage medium, and electronic device provided in the embodiments of the present application. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. At the same time, a person skilled in the art may make changes to the specific implementations and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method for processing image information, comprising:
acquiring an image to be processed, and performing object identification on the image to be processed to obtain a first object region and a second object region which are associated;
determining corresponding first tone adjustment information based on the first object region and the second object region, including: acquiring a first luminance value and a first contrast value of the first object region; acquiring a second luminance value and a second contrast value of the second object region; calculating a first luminance difference between the first luminance value and the second luminance value, and a first contrast difference between the first contrast value and the second contrast value; and determining the first luminance difference and the first contrast difference as the first tone adjustment information; wherein the first tone adjustment information is a difference range of the expression degree between the first object region and the second object region before the tone mapping processing is carried out;
performing tone mapping processing on the first object region under a protection mechanism to generate a first target object region after the tone mapping processing;
calculating second tone adjustment information of the first target object region and the first object region, and determining a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information, including: acquiring a third luminance value and a third contrast value of the first target object region; calculating a second luminance difference between the third luminance value and the first luminance value, and a second contrast difference between the third contrast value and the first contrast value; determining the second luminance difference and the second contrast difference as the second tone adjustment information; determining a region tone contrast range according to the first luminance difference and the first contrast difference; determining a first object region tone adjustment range according to the second luminance difference and the second contrast difference; and determining the tone mapping processing strategy corresponding to the second object region by combining the region tone contrast range and the first object region tone adjustment range; wherein the second tone adjustment information is based on the difference range of the expression degree between the first target object region subjected to the tone mapping processing under the protection mechanism and the first object region not subjected to the tone mapping processing under the protection mechanism.
2. The method for processing image information according to claim 1, wherein the first object region is a face object region, the second object region is a hair object region, and the step of performing object identification on the image to be processed to obtain the associated first object region and second object region includes:
extracting key feature point information from the image to be processed;
and determining the associated face object region and hair object region according to the key feature point information.
3. The method for processing image information according to claim 2, wherein the steps of obtaining a first luminance value and a first contrast value of the first object region and obtaining a second luminance value and a second contrast value of the second object region include:
acquiring a first luminance value and a first contrast value of the face object region;
and acquiring a second luminance value and a second contrast value of the hair object region.
4. The method for processing image information according to claim 3, wherein the step of obtaining a third luminance value and a third contrast value of the first target object region includes:
acquiring a third luminance value and a third contrast value of the first target face object region.
5. The method for processing image information according to claim 4, wherein the steps of determining a first object region tone adjustment range according to the second luminance difference and the second contrast difference, and determining the tone mapping processing strategy corresponding to the second object region by combining the region tone contrast range and the first object region tone adjustment range, include:
determining a face tone adjustment range according to the second luminance difference and the second contrast difference;
and determining a tone mapping processing strategy corresponding to the hair object region by combining the region tone contrast range and the face tone adjustment range.
6. The method for processing image information according to any one of claims 2 to 5, wherein the step of performing tone mapping processing on the first object region to generate a tone-mapped first target object region includes:
performing tone mapping processing on the face object region to generate a first target face object region after the tone mapping processing.
7. An apparatus for processing image information, comprising:
an identification unit, configured to acquire an image to be processed and perform object identification on the image to be processed to obtain a first object region and a second object region which are associated;
a determining unit, configured to determine corresponding first tone adjustment information based on the first object region and the second object region, including: acquiring a first luminance value and a first contrast value of the first object region; acquiring a second luminance value and a second contrast value of the second object region; calculating a first luminance difference between the first luminance value and the second luminance value, and a first contrast difference between the first contrast value and the second contrast value; and determining the first luminance difference and the first contrast difference as the first tone adjustment information; wherein the first tone adjustment information is a difference range of the expression degree between the first object region and the second object region before the tone mapping processing is carried out;
a processing unit, configured to perform tone mapping processing on the first object region under a protection mechanism to generate a first target object region after the tone mapping processing;
a calculating unit, configured to calculate second tone adjustment information of the first target object region and the first object region, and determine a tone mapping processing strategy for the second object region according to the first tone adjustment information and the second tone adjustment information, including: acquiring a third luminance value and a third contrast value of the first target object region; calculating a second luminance difference between the third luminance value and the first luminance value, and a second contrast difference between the third contrast value and the first contrast value; determining the second luminance difference and the second contrast difference as the second tone adjustment information; determining a region tone contrast range according to the first luminance difference and the first contrast difference; determining a first object region tone adjustment range according to the second luminance difference and the second contrast difference; and determining the tone mapping processing strategy corresponding to the second object region by combining the region tone contrast range and the first object region tone adjustment range; wherein the second tone adjustment information is based on the difference range of the expression degree between the first target object region subjected to the tone mapping processing under the protection mechanism and the first object region not subjected to the tone mapping processing under the protection mechanism.
8. The apparatus for processing image information according to claim 7, wherein the first object region is a face object region, the second object region is a hair object region, and the identification unit includes:
an extraction subunit, configured to acquire the image to be processed and extract key feature point information from the image to be processed;
and a determining subunit, configured to determine the associated face object region and hair object region according to the key feature point information.
9. A storage medium on which a computer program is stored, wherein, when the computer program runs on a computer, the computer is caused to execute the method for processing image information according to any one of claims 1 to 6.
10. An electronic device, comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to execute the method for processing image information according to any one of claims 1 to 6 by calling the computer program.
CN201910741627.5A 2019-08-12 2019-08-12 Image information processing method and device, storage medium and electronic equipment Active CN110473156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910741627.5A CN110473156B (en) 2019-08-12 2019-08-12 Image information processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110473156A (en) 2019-11-19
CN110473156B (en) 2022-08-02

Family

ID=68510191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910741627.5A Active CN110473156B (en) 2019-08-12 2019-08-12 Image information processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110473156B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351195B (en) * 2020-09-22 2022-09-30 北京迈格威科技有限公司 Image processing method, device and electronic system
CN114463191B (en) * 2021-08-26 2023-01-31 荣耀终端有限公司 Image processing method and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101389039A (en) * 2007-09-11 2009-03-18 佳能株式会社 Image processing apparatus and image processing method and imaging apparatus
CN101621702A (en) * 2009-07-30 2010-01-06 北京海尔集成电路设计有限公司 Method and device for automatically adjusting chroma and saturation
CN103400342A (en) * 2013-07-04 2013-11-20 西安电子科技大学 Mixed color gradation mapping and compression coefficient-based high dynamic range image reconstruction method
CN105913373A (en) * 2016-04-05 2016-08-31 广东欧珀移动通信有限公司 Image processing method and device
WO2019028700A1 (en) * 2017-08-09 2019-02-14 深圳市大疆创新科技有限公司 Image processing method, device and computer readable storage medium
CN107862657A (en) * 2017-10-31 2018-03-30 广东欧珀移动通信有限公司 Image processing method, device, computer equipment and computer-readable recording medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Dynamic range enhancement for Medical Image Processing"; Gian DL et al.; 2017 7th IEEE International Workshop on Advances in Sensors and Interfaces; 2017-07-13; full text *
"Research on Restoration Methods for Incomplete and Blurred Face Images"; Li Meiyi; China Excellent Master's and Doctoral Dissertations Full-text Database (Master), Information Science and Technology; 2006-10-15 (No. 10, 2006); full text *
"Face Image Texture Synthesis and 3D Reconstruction Application Based on Standard Skin Color"; Yang Ce et al.; Computer Systems & Applications; 2019-06-03; full text *
"Fast Face Beautification Method Based on Edge-Preserving Smoothing Filtering and Edit Propagation and Its System Implementation"; Xu Shaojie; China Excellent Master's Theses Full-text Database, Information Science and Technology; 2016-03-15; full text *

Also Published As

Publication number Publication date
CN110473156A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN113129312B (en) Image processing method, device and equipment
CN109951627B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109741281A (en) Image processing method, device, storage medium and terminal
CN107690804B (en) Image processing method and user terminal
CN112950499B (en) Image processing method, device, electronic equipment and storage medium
CN112669197A (en) Image processing method, image processing device, mobile terminal and storage medium
CN110163816B (en) Image information processing method and device, storage medium and electronic equipment
CN110570370B (en) Image information processing method and device, storage medium and electronic equipment
CN110473156B (en) Image information processing method and device, storage medium and electronic equipment
CN111723803A (en) Image processing method, device, equipment and storage medium
CN112289278A (en) Screen brightness adjusting method, screen brightness adjusting device and electronic equipment
CN114463191B (en) Image processing method and electronic equipment
CN113452969B (en) Image processing method and device
CN112419218A (en) Image processing method and device and electronic equipment
CN111968605A (en) Exposure adjusting method and device
CN107563957A (en) Eyes image processing method and processing device
CN107105167B (en) Method and device for shooting picture during scanning question and terminal equipment
CN114299014A (en) Image processing architecture, method, electronic device and storage medium
CN112634155A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112511890A (en) Video image processing method and device and electronic equipment
CN117119316B (en) Image processing method, electronic device, and readable storage medium
CN115760652B (en) Method for expanding dynamic range of image and electronic equipment
CN116664630B (en) Image processing method and electronic equipment
CN117119316A (en) Image processing method, electronic device, and readable storage medium
CN114363507A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant