CN109697738A - Image processing method, device, terminal device and storage medium - Google Patents

Image processing method, device, terminal device and storage medium

Info

Publication number
CN109697738A
CN109697738A (application CN201811630233.4A)
Authority
CN
China
Prior art keywords
image
brightness
distribution
pixel
mapping relations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811630233.4A
Other languages
Chinese (zh)
Inventor
王会朝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811630233.4A priority Critical patent/CN109697738A/en
Publication of CN109697738A publication Critical patent/CN109697738A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G06T5/94
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Abstract

Embodiments of this application disclose an image processing method, an apparatus, a terminal device, and a storage medium. The method includes: obtaining an image in a color mode with separated luminance and chrominance; determining each scene contained in the image and the brightness distribution of the image; generating a brightness mapping relation according to the standard brightness distribution and weight value corresponding to each scene and the image brightness distribution; and adjusting the luminance component of each pixel in the image according to the brightness mapping relation. This scheme improves the image processing effect.

Description

Image processing method, device, terminal device and storage medium
Technical field
Embodiments of this application relate to image processing technology, and in particular to an image processing method, apparatus, terminal device, and storage medium.
Background art
With the rapid development of terminal devices, devices such as smart phones and tablet computers are now equipped with image acquisition functions, and users place increasingly high demands on the quality of the images these devices capture.
At present, after an image is captured, brightness enhancement is generally applied to it, so that darker regions of the resulting image become brighter, details that are hard to distinguish visually are revealed, and the clarity of the whole image is improved. However, in the above enhancement approach, the RGB value of each pixel in the image is usually boosted, which easily causes the following problems: after enhancement, colors close to grey are over-adjusted and become distorted, and colors in brighter regions appear oversaturated, so the image processing effect is poor.
Summary of the invention
This application provides an image processing method, apparatus, terminal device, and storage medium that improve the image processing effect.
In a first aspect, an embodiment of this application provides an image processing method, comprising:
obtaining an image in a color mode with separated luminance and chrominance;
determining each scene contained in the image and the image brightness distribution;
generating a brightness mapping relation according to the standard brightness distribution and weight value corresponding to each scene and the image brightness distribution;
adjusting the luminance component of each pixel in the image according to the brightness mapping relation to generate a first processed image.
In a second aspect, an embodiment of this application further provides an image processing apparatus, comprising:
an original image acquisition module, configured to obtain an image in a color mode with separated luminance and chrominance;
an image parameter determination module, configured to determine each scene contained in the image and the image brightness distribution;
a mapping relation determination module, configured to generate a brightness mapping relation according to the standard brightness distribution and weight value corresponding to each scene and the image brightness distribution;
an adjustment module, configured to adjust the luminance component of each pixel in the image according to the brightness mapping relation to generate a first processed image.
In a third aspect, an embodiment of this application further provides a terminal device, comprising: a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the image processing method described in the embodiments of this application.
In a fourth aspect, an embodiment of this application further provides a storage medium containing terminal-device-executable instructions which, when executed by a processor of a terminal device, are used to perform the image processing method described in the embodiments of this application.
In this scheme, an image in a color mode with separated luminance and chrominance is obtained; each scene contained in the image and the image brightness distribution are determined; a brightness mapping relation is generated according to the standard brightness distribution and weight value corresponding to each scene and the image brightness distribution; and the luminance component of each pixel in the image is adjusted according to the brightness mapping relation, thereby improving the image processing effect.
Brief description of the drawings
Other features, objects and advantages of this application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is a flowchart of an image processing method provided by an embodiment of this application;
Fig. 1a is a schematic diagram of a determined image brightness distribution provided by an embodiment of this application;
Fig. 1b is a schematic curve of a brightness mapping relation provided by an embodiment of this application;
Fig. 2 is a flowchart of another image processing method provided by an embodiment of this application;
Fig. 3 is a flowchart of another image processing method provided by an embodiment of this application;
Fig. 4 is a flowchart of another image processing method provided by an embodiment of this application;
Fig. 5 is a flowchart of another image processing method provided by an embodiment of this application;
Fig. 6 is a structural block diagram of an image processing apparatus provided by an embodiment of this application;
Fig. 7 is a structural schematic diagram of a terminal device provided by an embodiment of this application.
Detailed description of embodiments
This application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve to explain the application rather than to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the application rather than the entire structure.
Fig. 1 is a flowchart of an image processing method provided by an embodiment of this application. The method is applicable when a terminal device processes an image and can be executed by the terminal device provided by the embodiments of this application; the image processing apparatus of the terminal device may be implemented in software and/or hardware. As shown in Fig. 1, the specific scheme of this embodiment is as follows:
Step S101: obtain an image in a color mode with separated luminance and chrominance.
A color is usually described by three relatively independent attributes; the three independent variables together form a spatial coordinate, called a color mode. Color modes can be divided into primary-color modes and luminance-chrominance separated modes. For example, primary-color modes include but are not limited to the RGB color mode, while luminance-chrominance separated modes include but are not limited to the YUV color mode and the Lab color mode. In the YUV color mode, the Y component represents luminance, the U component represents chrominance, and the V component represents color concentration, where the U and V components jointly represent the color of the image. In the Lab color mode, the L component represents luminance, and a and b jointly represent color. In a luminance-chrominance separated image, the luminance component and the color components can be extracted separately, so either aspect of the image can be processed independently: adjusting the luminance component during brightness processing causes no effect on the color components of the image.
The RGB, YUV and Lab color modes can be converted into one another. Taking a mobile phone as an example, when an image is captured by the phone's image acquisition device, the YUV image is generated as follows: raw data acquired by an image sensor is converted into an image in the RGB color mode, and an image in the YUV color mode is then generated from the RGB image. The image acquisition device may be a camera, which may contain a charge-coupled device (CCD) image sensor or a complementary metal oxide semiconductor (CMOS) image sensor. The CCD or CMOS sensor converts the captured light signals into RAW raw data in digital form, which is converted into RGB image data and further converted into YUV image data. In the phone's image acquisition device, a JPG-format image can be formed from the YUV image.
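As a concrete illustration of the RGB-to-YUV step above, the per-pixel conversion can be sketched as follows. The BT.601 full-range coefficients used here are an assumption for illustration (they are the convention commonly used for JPEG-style YUV); the patent does not fix a particular conversion matrix.

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (components 0-255) to YUV, using the
    BT.601 full-range coefficients as an illustrative convention."""
    y = 0.299 * r + 0.587 * g + 0.114 * b            # luminance
    u = -0.168736 * r - 0.331264 * g + 0.5 * b + 128  # chrominance
    v = 0.5 * r - 0.418688 * g - 0.081312 * b + 128   # chrominance
    return round(y), round(u), round(v)
```

Note that for a grey pixel (r = g = b) the U and V components both come out as 128, the neutral chrominance value, which is why brightness processing on Y alone leaves the color untouched.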
The colors in the RGB image data converted from the RAW raw data are not yet the true colors of the image, and that RGB image data cannot be processed directly; the colors formed in the YUV image data are the real colors of the image, and the YUV image data can be processed. In common image processing, it is usually RGB data that is processed, so during processing the raw sensor data undergoes the following color-mode conversions: RAW raw data → image in the RGB color mode → image in the YUV color mode → image in the RGB color mode; the processing operation is applied to the RGB image to obtain a processed RGB image, the processed RGB image is converted into a YUV image, and an image in JPG format can be output. Correspondingly, when an image in another color mode is processed, it must likewise be obtained by conversion from the YUV image, and after processing, the processed image is converted back into a YUV image to obtain the JPG-format image.
In one embodiment, the obtained luminance-chrominance separated image may be a YUV image or a Lab image. When the image processing method of this application is applied to a mobile phone, the obtained image is optionally a YUV image, which can be processed directly after the image acquisition device produces it, without extra image conversions; this reduces the number of conversion steps and improves image processing efficiency.
In this embodiment, the luminance-chrominance separated image may be an image shot by the camera in response to a shooting instruction, or the preview image information that the camera acquires and presents on the device screen for the user before the shooting instruction is executed.
Step S102: determine each scene contained in the image and the image brightness distribution.
In one embodiment, the different scenes contained in the image are separated by image segmentation, which may be realized with threshold-based, region-based, edge-based, or theory-specific segmentation methods. Illustratively, suppose an image contains four scenes: a sunset, the sea, a beach, and a person. Image segmentation yields the image regions corresponding to the sunset, sea, beach, and person respectively; after segmentation, each resulting image region is identified in order to determine each scene the current image contains. Here, image recognition means processing, analyzing and understanding the image with a program so as to identify each image region; concretely, scene recognition may be carried out according to factors such as the objects contained in an image region and the light-dark ratio of the image. For example, image recognition may be based on a deep learning model, such as a convolutional neural network. The terminal device is preset with a deep learning model having a scene-recognition function, which can be trained by supervised learning: a large number of images are collected and labeled with their real scenes, the sample images are fed as training samples into the untrained deep learning model to obtain output scenes, and whenever the output scene of the model disagrees with the real scene, network parameters of the model such as weights and offsets are adjusted backward according to the difference between the output scene and the real scene. The above training process is repeated until the accuracy of the model's output scenes reaches a preset precision, completing the training of the deep learning model.
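Of the segmentation methods listed above, threshold-based segmentation is the simplest to sketch. The function below is a minimal illustrative example, not the patent's method: it splits an image into two regions by a luminance threshold, with the threshold value an assumption.

```python
def threshold_segment(y_plane, threshold=128):
    """Label each pixel 1 if its luminance component exceeds the
    threshold, else 0, splitting the image into two regions."""
    return [[1 if y > threshold else 0 for y in row] for row in y_plane]
```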
In one embodiment, determining the image brightness distribution involves traversing the luminance component of each pixel in the image, for example extracting the Y component of each pixel and counting, for each luminance component in the YUV image, the number of corresponding pixels. Optionally, the YUV image data is stored in planar format, with the Y, U and V components stored in separate matrices; when traversing the luminance components of the pixels, reading the matrix that stores the Y component yields the luminance component of every pixel in the image, from which the image brightness distribution is counted. Illustratively, as shown in Fig. 1a, a schematic diagram of a determined image brightness distribution provided by an embodiment of this application, the horizontal axis is the luminance component of the image, with range 0-255, and the vertical axis is the number of pixels corresponding to each luminance component in the image.
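The traversal and counting described above amount to building a 256-bin histogram over the planar Y component; a minimal sketch:

```python
def luminance_histogram(y_plane):
    """Count, for each possible luminance value 0-255, how many pixels
    in the planar Y component take that value; this is the image
    brightness distribution of step S102."""
    hist = [0] * 256
    for row in y_plane:
        for y in row:
            hist[y] += 1
    return hist
```

Plotting `hist` against the luminance values 0-255 gives exactly the kind of distribution diagram described for Fig. 1a.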
Step S103: generate a brightness mapping relation according to the standard brightness distribution and weight value corresponding to each scene and the image brightness distribution.
Here, each scene corresponds to a standard brightness distribution, which contains, for each of the luminance components 0-255, the standard proportion of the whole image's pixels that the component's pixels should account for. When the brightness distribution of the image region corresponding to a scene matches the corresponding standard brightness distribution, the presentation of the image can satisfy the user's needs. When the image brightness distribution differs from the corresponding standard brightness distribution, the luminance components of the pixels in the image are adjusted so that the adjusted brightness distribution of the image matches the standard brightness distribution, or lies within an allowable error range of it.
In one embodiment, the corresponding standard brightness distributions are computed in real time after the different scenes are identified, or the standard brightness distribution corresponding to each scene is stored in advance. Specifically, the standard brightness distributions may be determined from the effect of test sample pictures (the test samples being pictures corresponding to the different scenes, e.g. sunset, sea, beach, and person scenes), with a set of standard brightness distributions calibrated for each sample set. Each scene also has a corresponding weight value: a larger weight indicates that the scene has higher importance in the whole image, and a lower weight indicates lower importance. The weight can be set by the user or by system default; illustratively, for the four scenes sunset, sea, beach, and person, the weights may be 0.2, 0.3, 0.1 and 0.4 respectively.
In one embodiment, the brightness mapping relation may be generated from the per-scene standard brightness distributions, the weight values, and the image brightness distribution as follows: for the brightness value of each pixel in the image brightness distribution, look up the pixel brightness value of the standard brightness distribution under each scene to obtain a first contrast coefficient table, then weight the contrast coefficient values in the first table by the weight values to obtain a second contrast coefficient table. The contrast coefficient of any pixel equals the pixel's brightness value in the standard brightness distribution divided by the brightness value of the corresponding pixel in the image brightness distribution. Specifically, for the brightness value of each pixel in the image, the brightness values of the pixels of the standard brightness distributions under the different scenes are looked up respectively, and the first contrast coefficient table is obtained by division, where the number of scenes is a positive integer greater than 1. After the contrast coefficient table corresponding to each image pixel is obtained, each contrast coefficient value in the table is multiplied by the corresponding weight value to yield the weighted second contrast coefficient table, whose coefficient values are parameters that take the different weight values of the different scenes into account. The setting of the specific scenes and their corresponding weight values can follow the preceding embodiment and is not repeated here.
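The two-table computation above can be sketched per pixel as follows. The parallel-list interface is an illustrative assumption; the patent specifies only the division and the weighting.

```python
def contrast_coeff_tables(pixel_luma, scene_standard_luma, scene_weights):
    """For one pixel, build the first contrast coefficient table
    (each scene's standard brightness value divided by the pixel's
    actual brightness) and the weighted second table of step S103."""
    first = [std / pixel_luma for std in scene_standard_luma]    # first table
    second = [c * w for c, w in zip(first, scene_weights)]       # second table
    return first, second
```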
In one embodiment, generating the brightness mapping relation according to the per-scene standard brightness distributions, the weight values and the image brightness distribution includes: generating a combined brightness distribution according to each scene's standard brightness distribution and weight value; and generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution. The combined brightness distribution is the combination of the standard brightness distributions corresponding to the multiple different scenes, i.e. the final brightness distribution the image needs to realize. Specifically, the pixel proportion corresponding to each luminance component in each scene's standard brightness distribution may be multiplied by the corresponding weight value and the results summed, giving that luminance component's pixel proportion in the combined brightness distribution; continuing in this way yields the complete combined brightness distribution. Optionally, generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution includes: for the brightness value of each pixel in the image brightness distribution, look up the pixel brightness value of the corresponding combined brightness distribution to obtain a contrast coefficient matrix, in which the contrast coefficient value corresponding to each image pixel is stored; the contrast coefficient values are determined as in the previous example and are not repeated here.
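The weighted combination described above can be sketched directly: each scene contributes its standard brightness distribution, taken here as a list of 256 pixel proportions, scaled by its weight value.

```python
def combine_distributions(scene_dists, weights):
    """Weighted sum of per-scene standard brightness distributions
    (each a list of 256 pixel proportions) into the combined
    brightness distribution of this embodiment."""
    combined = [0.0] * 256
    for dist, w in zip(scene_dists, weights):
        for level, proportion in enumerate(dist):
            combined[level] += proportion * w
    return combined
```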
The brightness mapping relation contains the correspondence between the image's original luminance components and the mapped luminance components; it can be used to adjust the luminance component of each pixel in the image to the mapped luminance component, so that the brightness distribution of the adjusted image matches the combined brightness distribution. Illustratively, see Fig. 1b, a schematic curve of a brightness mapping relation provided by an embodiment of this application. The brightness mapping relation can be presented in curve form or as a look-up table (LUT); this embodiment does not limit the form, and Fig. 1b is only one curve example of a brightness mapping relation. In Fig. 1b, the horizontal axis of the curve is the original luminance component of the image, and the vertical axis is the adjusted luminance component.
Step S104: adjust the luminance component of each pixel in the image according to the brightness mapping relation to generate a first processed image.
In one embodiment, the image brightness distribution is adjusted according to the obtained combined brightness distribution, achieving a reasonable adaptive adjustment when the image contains multiple scenes at once. Illustratively, every pixel of the image is traversed to obtain its luminance component, the mapped luminance component corresponding to that luminance component is determined from the brightness mapping relation, and the luminance component of each pixel is adjusted to its mapped luminance component, thereby realizing the brightness adjustment of the image and obtaining the first processed image. In another embodiment, each pixel in the image brightness distribution can be compensated according to the obtained contrast coefficient matrix to obtain the first processed image; specifically, the brightness value of each pixel in the determined image brightness distribution may be multiplied by the corresponding contrast coefficient value stored in the contrast coefficient matrix to complete the compensation of the image and obtain the first processed image.
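Once the brightness mapping relation is materialised as a 256-entry look-up table, the per-pixel adjustment of step S104 is a single table lookup per pixel; a minimal sketch:

```python
def apply_luma_lut(y_plane, lut):
    """Replace every pixel's luminance component with its mapped value.
    `lut` is a 256-entry list giving the mapped luminance component for
    each original luminance component 0-255."""
    return [[lut[y] for y in row] for row in y_plane]
```

An identity table leaves the image unchanged; the curve of Fig. 1b, sampled at the 256 integer luminance values, would serve as `lut`.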
As can be seen from the above, for each scene contained in the image, a brightness mapping relation is generated according to the scene's corresponding standard brightness distribution, its weight value, and the image brightness distribution; after the mapping relation is established, the luminance components of the pixels in the image are adjusted. This significantly improves the image processing effect and satisfies users' demand for high-definition images.
Fig. 2 is a flowchart of another image processing method provided by an embodiment of this application. Optionally, generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution includes: determining the first pixel proportion corresponding to each luminance component in the combined brightness distribution; determining the second pixel proportion corresponding to each luminance component in the image brightness distribution; determining, according to the first pixel proportion and the second pixel proportion, the luminance components to be adjusted and the corresponding target luminance components; and establishing the mapping relation between the luminance components to be adjusted and the target luminance components. As shown in Fig. 2, the technical scheme is as follows:
Step S201: obtain an image in a color mode with separated luminance and chrominance.
Step S202: determine each scene contained in the image and the image brightness distribution.
Step S203: determine the first pixel proportion corresponding to each luminance component in the combined brightness distribution and the second pixel proportion corresponding to each luminance component in the image brightness distribution; determine, according to the first pixel proportion and the second pixel proportion, the luminance components to be adjusted and the corresponding target luminance components; and establish the mapping relation between the luminance components to be adjusted and the target luminance components.
In one embodiment, for any luminance component, when the first pixel proportion of the luminance component in the combined brightness distribution differs from the second pixel proportion of the corresponding luminance component in the image, that luminance component needs to be adjusted. Specifically, when the first pixel proportion of a first luminance component in the combined brightness distribution is greater than the second pixel proportion of the first luminance component in the image, other luminance components need to be mapped to the first luminance component, so as to raise the second pixel proportion of the first luminance component in the image; these other luminance components are the components that need adjusting, the first luminance component is the target luminance component, and the mapping relations between the other luminance components and the target luminance component are established. Illustratively, the other luminance components are the luminance components in the interval adjacent to the first luminance component, and the pixel proportion of the other luminance components to be adjusted may equal the difference between the first pixel proportion and the second pixel proportion, or lie within an allowable error range of that difference. Similarly, when the first pixel proportion of a second luminance component in the standard brightness distribution is smaller than the second pixel proportion of the second luminance component in the image to be processed, the second luminance component needs to be mapped to other luminance components, so as to reduce the second pixel proportion of the second luminance component in the image to be processed.
Optionally, the luminance components are analyzed and processed in order of their numerical value, e.g. in the increasing order of luminance components from 0 to 255, or the decreasing order from 255 to 0. Illustratively, take luminance component 0 to introduce how the brightness mapping relation is generated. When the first pixel proportion of luminance component 0 in the combined brightness distribution is greater than the second pixel proportion of luminance component 0 in the image to be processed, the proportion difference between the first and second pixel proportions can be determined; if the pixel proportion of the luminance components in the interval 1-5 equals or approximates this difference, the luminance components in the interval 1-5 are determined as the components that need adjusting, luminance component 0 serves as the target luminance component, and the mapping relation is established, i.e. in the brightness mapping relation the luminance components 1-5 are mapped to luminance component 0. The brightness mapping relation is established by continuing in this way.
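The proportion-matching construction above is close in spirit to classic histogram specification. The sketch below builds a full 256-entry mapping by aligning the cumulative pixel proportions of the image distribution with those of the combined (target) distribution; the cumulative formulation is an assumption that generalises the greedy interval merging described in the example.

```python
def build_luma_mapping(image_hist, target_hist):
    """Build a 256-entry table mapping image luminance to target
    luminance so that the mapped image's distribution approximates the
    combined (target) distribution: cumulative-proportion matching."""
    total_img = sum(image_hist) or 1
    total_tgt = sum(target_hist) or 1
    cdf_img, cdf_tgt = [], []
    acc = 0.0
    for h in image_hist:                 # cumulative proportion, image
        acc += h / total_img
        cdf_img.append(acc)
    acc = 0.0
    for h in target_hist:                # cumulative proportion, target
        acc += h / total_tgt
        cdf_tgt.append(acc)
    lut, j = [], 0
    for v in range(256):                 # smallest target level whose
        while j < 255 and cdf_tgt[j] < cdf_img[v]:  # cumulative mass
            j += 1                                  # covers the image's
        lut.append(j)
    return lut
```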
It should be noted that, in the image obtained by the terminal device, the range of luminance components may be 0-255 or any subrange of 0-255; for example, the range of luminance components may be 30-200, i.e. the number of pixels in the image with luminance components within 0-30 and within 200-255 is 0. By establishing a brightness mapping relation, the luminance component range 30-200 can be mapped to the range 0-255, stretching the luminance component range of the acquired image so that bright regions become brighter and dark regions darker; the colors are amplified and the clarity of the image is improved.
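The range stretch in this example is a linear remapping; a sketch, assuming simple linear scaling with clamping (the patent does not specify the exact stretch function):

```python
def stretch_luma(y, lo=30, hi=200):
    """Linearly stretch a luminance component from the observed range
    [lo, hi] to the full range [0, 255], as in the example where an
    image occupying 30-200 is expanded to 0-255."""
    y = min(max(y, lo), hi)                  # clamp stray values
    return round((y - lo) * 255 / (hi - lo))
```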
Step S204: adjust the luminance component of each pixel in the image according to the brightness mapping relation to generate a first processed image.
As can be seen from the above, the mapping of luminance components is determined by the pixel proportions of corresponding luminance components in the combined brightness distribution and the image brightness distribution of the image to be processed, thereby establishing the brightness mapping relation. After the luminance component of each pixel in the image is determined, the mapped target luminance component can be determined quickly by querying the brightness mapping relation; compared with applying a function mapping to every single pixel, this improves image processing efficiency, reduces the time taken by image processing, and gives a better user experience.
Based on the above technical scheme, optionally, generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution comprises: determining the third pixel proportion corresponding to each luminance component interval in the combined brightness distribution; determining the fourth pixel proportion corresponding to each luminance component interval in the image brightness distribution; determining, according to the third pixel proportion and the fourth pixel proportion, the luminance components to be adjusted and the corresponding target luminance components; and establishing the mapping relation between the luminance components to be adjusted and the target luminance components.
The luminance component range 0-255 is divided into multiple luminance component intervals, and the luminance components are analyzed and processed interval by interval to establish the brightness mapping relation. The principle of establishing the relation is the same as in the above embodiment and is not repeated here.
Illustratively, the generation of the brightness mapping relation is introduced taking the luminance component interval 0-10 as an example. When the third pixel ratio of the interval 0-10 in the standard brightness distribution is greater than the fourth pixel ratio of the interval 0-10 in the image to be processed, the difference between the third pixel ratio and the fourth pixel ratio may be determined. If the pixel ratio of the interval 10-15 is the same as or similar to this difference, the luminance components in the interval 0-15 are taken as the luminance components to be adjusted, with the interval 0-10 as the target, and the mapping relation is established. Illustratively, each luminance component in the interval 0-15 is multiplied by 2/3 to obtain the target luminance component: luminance component 15 is mapped to target luminance component 10, luminance component 12 to target luminance component 8, luminance component 9 to target luminance component 6, and so on. Correspondingly, mapping relations are determined for each luminance component interval in the range 0-255 in turn, so as to establish the complete brightness mapping relation.
The larger the luminance component interval, the faster the brightness mapping relation is established but the lower its precision; correspondingly, the smaller the interval, the slower the relation is established but the higher its precision. The division into luminance component intervals can therefore be determined by weighing the establishing speed of the brightness mapping relation against its precision.
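The interval-wise construction can be sketched as follows: per-interval pixel ratios are computed from a histogram, and an interval to be adjusted is scaled uniformly onto its target interval, reproducing the 0-15 to 0-10 example above. This is illustrative only; the ratio-matching logic that selects which intervals to pair is simplified away here.

```python
def interval_ratios(hist, width=10):
    """Third/fourth pixel ratios: the fraction of all pixels whose
    luminance component falls in each interval of the given width."""
    total = sum(hist)
    return [sum(hist[i:i + width]) / total for i in range(0, 256, width)]

def map_interval(src_hi, dst_hi):
    """Scale components in [0, src_hi] onto [0, dst_hi], e.g. each
    component in 0-15 multiplied by 10/15 (= 2/3) as in the text."""
    return {y: round(y * dst_hi / src_hi) for y in range(src_hi + 1)}

ratios = interval_ratios([1] * 256)   # uniform histogram, for illustration
mapping = map_interval(15, 10)        # 15 -> 10, 12 -> 8, 9 -> 6, ...
```

A coarser `width` gives fewer ratios to compare (faster, less precise); a finer one gives more (slower, more precise), matching the trade-off described above.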
Fig. 3 is a flowchart of another image processing method provided by an embodiment of the present application. Optionally, generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution comprises: traversing the luminance component of each pixel in the image to determine the luminance component range of the image; intercepting, from the combined brightness distribution, the target brightness distribution corresponding to the luminance component range; and generating the brightness mapping relation according to the target brightness distribution and the brightness distribution of the image. As shown in Fig. 3, the technical solution is as follows:
Step S301: obtain an image in a color mode in which luminance and color are separated.
Step S302: determine each scene contained in the image and the image brightness distribution of the image.
Step S303: traverse the luminance component of each pixel in the image, intercept from the combined brightness distribution the target brightness distribution corresponding to the luminance component range, and generate the brightness mapping relation according to the target brightness distribution and the brightness distribution of the image.
According to the result of traversing the luminance components of the pixels in the image, the maximum and minimum luminance components in the image are determined, and thus the luminance component range of the image, i.e. the range between the minimum and the maximum, is known. For example, if the maximum luminance component is 200 and the minimum is 50, the luminance component range of the image is 50-200. When the luminance component range of the image obtained by the electronic device is a subset of the full range 0-255, the combined brightness distribution is intercepted according to the maximum and minimum luminance components in the image, and the part of the combined brightness distribution between the minimum and the maximum is taken as the target brightness distribution. For example, when the luminance component range of the image is 50-200, the part of the combined brightness distribution with luminance components 50-200 is intercepted as the target brightness distribution.
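Under the assumption that the image's luminance components are available as a flat list, the traversal and interception step might look like this. The names and the placeholder uniform distribution are illustrative, not from the patent.

```python
def luminance_range(components):
    """Traverse the luminance components of all pixels and return the
    (minimum, maximum) pair bounding the image's luminance range."""
    return min(components), max(components)

def intercept_target(combined_hist, lo, hi):
    """Cut the part of the combined brightness distribution lying
    between lo and hi; this is the target brightness distribution."""
    return {y: combined_hist[y] for y in range(lo, hi + 1)}

components = [50, 80, 120, 60, 200, 90]      # toy image luminances
lo, hi = luminance_range(components)          # 50, 200
target = intercept_target([1] * 256, lo, hi)  # uniform placeholder
```

The mapping relation is then built between `target` and the image's own distribution exactly as in the full-range case, just restricted to [lo, hi].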
The principle of generating the brightness mapping relation based on the target brightness distribution and the brightness distribution of the image is the same as that of generating it from the combined brightness distribution and the image brightness distribution in the above embodiment, and is not repeated here. The analysis may be carried out per luminance component or per luminance component interval to establish the brightness mapping relation.
Step S304: adjust the luminance component of each pixel in the image according to the brightness mapping relation to generate the first processed image.
As can be seen from the above, the luminance component range of the image is determined from the result of traversing the luminance components of its pixels, the target brightness distribution corresponding to that range is intercepted from the combined brightness distribution, the brightness mapping relation is generated according to the target brightness distribution and the brightness distribution of the image, and the luminance components of the image are adjusted to generate the processed image. Within the image's luminance component range, the brightness is adjusted towards the standard state, so that the image brightness is adjusted reasonably and the image quality is improved.
Fig. 4 is a flowchart of another image processing method provided by an embodiment of the present application. Optionally, after adjusting the luminance component of each pixel in the image according to the brightness mapping relation to generate the first processed image, the method further comprises: performing edge recognition on the first processed image; determining, according to the edge recognition result, the filtering kernel used to filter the image; filtering the first processed image based on the filtering kernel to obtain a low-frequency image and a high-frequency image corresponding to the first processed image; determining a first gain coefficient for the high-frequency image and a second gain coefficient for the low-frequency image; performing gain processing on the high-frequency image with the first gain coefficient to obtain a first gain image, and performing gain processing on the low-frequency image with the second gain coefficient to obtain a second gain image; and fusing the first gain image and the second gain image to obtain a second processed image. As shown in Fig. 4, the technical solution is as follows:
Step S401: obtain an image in a color mode in which luminance and color are separated.
Step S402: determine each scene contained in the image and the image brightness distribution of the image.
Step S403: generate the brightness mapping relation according to the standard brightness distribution and weight corresponding to each scene and the image brightness distribution.
Step S404: adjust the luminance component of each pixel in the image according to the brightness mapping relation to generate the first processed image.
Step S405: perform edge recognition on the first processed image, determine according to the edge recognition result the filtering kernel used to filter the image, and filter the first processed image based on the filtering kernel to obtain the low-frequency image and the high-frequency image corresponding to the first processed image.
Edge recognition is performed on the image to extract the boundary lines between the objects and the background. It may first roughly detect contour points in the image, then connect the detected contour points according to linking rules, while also detecting and connecting omitted boundary points and removing false boundaries. The purpose of edge recognition is to discover information in the image about shape and about reflection or transmittance. Illustratively, the pixels of the image may be scanned row by row and column by column, comparing adjacent pixel values or luminance values to find the pixels where the value changes sharply; these are taken as edge pixels, and the edge pixels are connected to form edges. Illustratively, edge recognition may also be performed based on, but not limited to, the Roberts edge operator, the Sobel edge detection operator or the Laplacian edge operator.
The edge recognition result may be edge information output in the image, or a characteristic value characterizing the recognized edge information. The filtering kernel is the operator kernel of the filter applied to the image; different kernel sizes give different filtering effects. For example, filtering with a small kernel preserves the small details in the image, while filtering with a large kernel preserves the large contours. Illustratively, the filtering kernel may be, but is not limited to, 3 × 3, 5 × 5, 7 × 7 or 9 × 9.
When different objects are photographed, the acquired image content differs considerably. By performing edge recognition on the image, a filtering kernel adapted to that image is determined, so that the image content is preserved during filtering and the loss of detail or contour information is avoided. The edge coefficient of the image is the characteristic value characterizing its edge information: illustratively, the larger the edge coefficient, the more edge information the image contains, and the smaller the coefficient, the less edge information it contains. To preserve the information in the image, the size of the filtering kernel is positively correlated with the edge coefficient, i.e. the larger the edge coefficient of the image, the larger the filtering kernel applied to it. For example, when the image contains a black button and dotted spots on a white tabletop, the image is relatively flat and contains little edge information; the edge coefficient obtained by edge recognition is small, and correspondingly a small filtering kernel, such as 3 × 3, is suitable. When the image contains multiple desks, chairs, cabinets and the objects on the desks, the image is complex and contains much edge information; the edge coefficient is large, and correspondingly a large filtering kernel, such as 9 × 9, is suitable.
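The positive correlation between edge coefficient and kernel size can be sketched with a Sobel-based coefficient. The thresholds below are invented for illustration; the patent does not fix how the coefficient is computed or thresholded.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_coefficient(img):
    """Mean Sobel gradient magnitude over interior pixels -- one
    plausible 'edge coefficient' characterizing edge information."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(SOBEL_X[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(SOBEL_Y[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            total += (gx * gx + gy * gy) ** 0.5
            count += 1
    return total / count

def pick_kernel(coef, thresholds=(50, 150, 400)):
    """Kernel size grows with the edge coefficient (positive correlation)."""
    for size, t in zip((3, 5, 7), thresholds):
        if coef < t:
            return size
    return 9
```

A flat tabletop-like image yields a coefficient near zero and a 3 × 3 kernel; a busy scene with strong edges yields a large coefficient and a 9 × 9 kernel, mirroring the examples above.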
Optionally, according to the edge recognition result of the image, the positions of the edge information in the image are determined and the image is divided into regions: regions containing edge information are filtered with a larger kernel, while the background regions of the image are filtered with a smaller kernel. Filtering the image with such dynamically chosen kernels preserves both the contour information and the detail information of the image and avoids the loss of image information.
The filter applied to the image is a low-pass filter, so the image undergoes low-pass filtering. Specifically, low-pass filtering the image with the low-pass filter yields the low-frequency image corresponding to the original image, and subtracting the low-frequency image from the original image yields the high-frequency image corresponding to the original image. Specifically, the pixel difference between the original image and the low-frequency image is computed at corresponding pixels to obtain the high-frequency image.
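The decomposition just described (low-frequency image by filtering, high-frequency image by per-pixel subtraction) can be sketched with a simple box filter standing in for whatever low-pass kernel was selected. Illustrative only; border pixels are clamped here as one possible choice.

```python
def box_blur(img, k):
    """Low-pass filter with a k x k mean (box) kernel; out-of-range
    neighbours are clamped to the image border."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[min(max(i + a, 0), h - 1)][min(max(j + b, 0), w - 1)]
                    for a in range(-r, r + 1) for b in range(-r, r + 1)]
            out[i][j] = sum(vals) / len(vals)
    return out

def split_frequencies(img, k=3):
    """low = filtered image; high = original minus low, per pixel."""
    low = box_blur(img, k)
    high = [[img[i][j] - low[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]
    return low, high
```

By construction, adding the low- and high-frequency images back together reproduces the original exactly, which is what makes separate gain processing and later fusion possible.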
The low-pass filtering is applied to the luminance component of the first processed image. For example, in the YUV color mode, only the Y component is filtered to obtain the high-frequency and low-frequency images of the Y component, and the subsequent enhancement likewise adjusts and transforms only the Y component, leaving the ratio between U and V entirely unaffected. This guarantees that the color is not distorted during processing, so the contrast of the image is improved without damaging its color.
Step S406: determine the first gain coefficient of the high-frequency image and the second gain coefficient of the low-frequency image, perform gain processing on the high-frequency image with the first gain coefficient to obtain the first gain image, and perform gain processing on the low-frequency image with the second gain coefficient to obtain the second gain image.
The high-frequency image contains the content information of the original image, so enhancing it increases the contrast between the enhanced high-frequency image and the low-frequency image, adjusts the dynamic range of the image, makes the objects in the image stand out and improves the clarity of the image. Illustratively, enhancement may consist of setting an enhancement coefficient for the pixels of the high-frequency image, multiplying the pixel value or luminance value of each pixel by the coefficient, and fusing the enhanced high-frequency image with the low-frequency image to obtain the processed image. The enhancement coefficient may be a fixed value, i.e. the same for every pixel; alternatively, it may be calculated per pixel and differ from pixel to pixel, in which case the pixel value or luminance value of each pixel is multiplied by its own enhancement coefficient during enhancement, yielding a high-quality enhanced image.
In the high-frequency image, a window of preset size is centred on a reference pixel and the local variance of the window area is calculated; the gain value of the reference pixel is determined from the local standard deviation corresponding to the local variance of the window area; and the first gain coefficient of the high-frequency image is determined from the gain values of the reference pixels. The reference pixel is any pixel (i, j) in the image, with luminance component x(i, j), where i and j are the horizontal and vertical coordinates of the reference pixel in the image. The window size is (2n+1) × (2n+1), where n is an integer greater than or equal to 0. This window size is only an example; in other embodiments the window may be rectangular, i.e. of the form (2n+1) × (2m+1).
The local variance of the window area can be calculated by the following formula:

σx²(i, j) = (1 / ((2n+1)(2m+1))) · Σ (k = i−n to i+n) Σ (l = j−m to j+m) [x(k, l) − mx(i, j)]²

wherein the local mean is

mx(i, j) = (1 / ((2n+1)(2m+1))) · Σ (k = i−n to i+n) Σ (l = j−m to j+m) x(k, l)

In the above formulas, mx(i, j) is the local mean of the window area, x(k, l) is the luminance component of a pixel in the window, and k and l are integers greater than or equal to 0.
σx(i, j) is the local standard deviation of the window area centred on the reference pixel. Optionally, the gain value of the reference pixel is inversely proportional to the local standard deviation; for example, the gain value of the reference pixel may be D/σx(i, j), where D is a constant. Optionally, the gain value of the reference pixel is greater than 1, so as to enhance the luminance components of the pixels in the high-frequency image.
The second gain coefficient of the low-frequency image is determined in the same way as the first gain coefficient of the high-frequency image, and is not repeated here.
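The window statistics and the gain D/σ can be sketched as follows. The constant D and the clamp at 1 follow the text; treating a zero-variance window as gain 1 is an added assumption for the degenerate case.

```python
def local_stats(img, i, j, n=1):
    """Mean and standard deviation over the (2n+1) x (2n+1) window
    centred on pixel (i, j); the window is assumed fully inside the image."""
    vals = [img[k][l] for k in range(i - n, i + n + 1)
                      for l in range(j - n, j + n + 1)]
    m = sum(vals) / len(vals)
    var = sum((v - m) ** 2 for v in vals) / len(vals)
    return m, var ** 0.5

def gain(img, i, j, D=40.0, n=1):
    """Gain D / sigma, clamped below at 1 so that high-frequency detail
    is only ever amplified (D is a constant, per the text)."""
    _, sigma = local_stats(img, i, j, n)
    return max(1.0, D / sigma) if sigma > 0 else 1.0
```

Low local variation gives a large D/σ, which is exactly why applying this gain blindly to flat regions would amplify noise, motivating the flat/non-flat split described later.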
Step S407: fuse the first gain image and the second gain image to obtain the second processed image.
As can be seen from the above, the image acquired by the camera successively undergoes color enhancement and contrast improvement, and only the independent luminance component is processed, without involving the color components, i.e. the color dynamic range is adjusted without damaging the color, improving the image brightness and the clarity of image detail. At the same time, the luminance component of the first processed image in the luminance-separated color mode is filtered to obtain the high-frequency image and the low-frequency image; the first gain coefficient and the second gain coefficient are calculated separately; the high-frequency image is enhanced according to the first gain coefficient and the low-frequency image according to the second gain coefficient; and the enhanced low-frequency image is fused with the enhanced high-frequency image to obtain the processed image. Enhancing the contrast in the high-frequency and low-frequency images simultaneously avoids the loss of detail during processing and improves image clarity without distorting the image.
Fig. 5 is a flowchart of another image processing method provided by an embodiment of the present application. Optionally, performing gain processing on the low-frequency image with the second gain coefficient to obtain the second gain image comprises: identifying the flat regions and non-flat regions in the low-frequency image according to the luminance information of each pixel in the low-frequency image; splitting the low-frequency image according to the flat regions and the non-flat regions; and performing gain processing on the split non-flat regions with the second gain coefficient and fusing them with the split flat regions to obtain the second gain image. As shown in Fig. 5, the technical solution is as follows:
Step S501: obtain an image in a color mode in which luminance and color are separated.
Step S502: determine each scene contained in the image and the image brightness distribution of the image.
Step S503: generate the brightness mapping relation according to the standard brightness distribution and weight corresponding to each scene and the image brightness distribution.
Step S504: adjust the luminance component of each pixel in the image according to the brightness mapping relation to generate the first processed image.
Step S505: perform edge recognition on the first processed image, determine according to the edge recognition result the filtering kernel used to filter the image, and filter the first processed image based on the filtering kernel to obtain the low-frequency image and the high-frequency image corresponding to the first processed image.
Step S506: determine the first gain coefficient of the high-frequency image and the second gain coefficient of the low-frequency image, and perform gain processing on the high-frequency image with the first gain coefficient to obtain the first gain image.
Step S507: identify the flat regions and non-flat regions in the low-frequency image according to the luminance information of each pixel in the low-frequency image, split the low-frequency image according to the flat regions and the non-flat regions, perform gain processing on the split non-flat regions with the second gain coefficient, and fuse them with the split flat regions to obtain the second gain image.
Identifying the flat regions and non-flat regions in the low-frequency image according to the luminance information of each pixel comprises: dividing the low-frequency image into blocks to obtain multiple image regions and determining the pixel difference value of each image region; when the pixel difference value of an image region is less than or equal to a preset value, determining that the image region belongs to a flat region; and when the pixel difference value of an image region is greater than the preset value, determining that the image region belongs to a non-flat region.
For any image region, the pixel difference value can be calculated by the following formula:

A = (1/p) · Σ (b = 1 to p) |g_b − ḡ|

wherein A is the pixel difference value of the image region, p is the total number of pixels in the image region, g_b (b = 1, 2, …, p) is the luminance component of each pixel in the image region, ḡ is the local luminance mean of the image region, and p and b are positive integers greater than 0.
The pixel difference value indicates how much the luminance information of the pixels within an image region differs: the larger the pixel difference value, the greater the differences between the luminance of the pixels in the region; the smaller the value, the more similar their luminance information is. Image regions whose pixel difference value is less than or equal to the preset value are spliced together to form the flat regions, and image regions whose pixel difference value is greater than the preset value are spliced together to form the non-flat regions. Optionally, the preset value used to distinguish flat regions from non-flat regions is related to the average local difference value Ā of the low-frequency image. Specifically, from the number of image regions and the pixel difference value A of each region, the average local difference value Ā of the low-frequency image can be determined; introducing a coefficient λ, the preset value may be λ·Ā. That is, when the pixel difference value of an image region satisfies A ≤ λ·Ā, the region belongs to a flat region; when A > λ·Ā, the region belongs to a non-flat region.
Illustratively, for an image containing a black button and dotted spots on a white tabletop, the high-frequency image obtained by filtering may contain the black button, while the low-frequency image contains the white tabletop and the dotted spots on it; the parts of the low-frequency image with dotted spots are non-flat regions, and the background area of the white tabletop is a flat region. As known from the above embodiment, the gain value of a pixel is inversely proportional to the local standard deviation; in a flat region the local standard deviation is very small, so the gain value of the pixels becomes very large, which would amplify the noise. By identifying and splitting the flat and non-flat regions of the low-frequency image, only the non-flat regions are enhanced and the luminance components of the flat regions are left unadjusted, avoiding the amplification of flat-region noise when enhancing the image.
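The block-wise decision can be sketched as follows, using a mean-absolute-deviation form of the pixel difference value A (one plausible reading; the exact form of A is an assumption) and λ·Ā as the preset value.

```python
def block_difference(block):
    """Pixel difference value A of one image region: mean absolute
    deviation of the luminance components from the block mean."""
    p = len(block)
    mean = sum(block) / p
    return sum(abs(g - mean) for g in block) / p

def classify_blocks(blocks, lam=1.0):
    """Flat if A <= lam * (average A over all regions), else non-flat."""
    diffs = [block_difference(b) for b in blocks]
    avg = sum(diffs) / len(diffs)
    return ["flat" if d <= lam * avg else "non-flat" for d in diffs]

# A uniform tabletop-like block and a spotty block.
labels = classify_blocks([[10, 10, 10, 10], [0, 255, 0, 255]])
```

Only blocks labelled non-flat would then receive the second gain coefficient; flat blocks pass through unchanged and are fused back in, keeping their noise unamplified.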
Step S508: fuse the first gain image and the second gain image to obtain the second processed image.
As can be seen from the above, the color of the image is amplified to obtain the first processed image; further, the first processed image undergoes low-pass filtering, the resulting high-frequency image is enhanced, and the non-flat regions of the low-frequency image are enhanced. This realizes the enhancement of image detail while keeping the flat regions of the low-frequency image unchanged and controlling noise, so that the contrast of the image is increased while the amplification of noise is avoided.
Fig. 6 is a structural block diagram of an image processing apparatus provided by an embodiment of the present application. The apparatus is used to execute the image processing method provided by the above embodiments and has the corresponding functional modules and beneficial effects. As shown in Fig. 6, the apparatus specifically includes: an original image obtaining module 101, an image parameter determining module 102, a mapping relation determining module 103 and an adjusting module 104, wherein
the original image obtaining module 101 is configured to obtain an image in a color mode in which luminance and color are separated;
the image parameter determining module 102 is configured to determine each scene contained in the image and the image brightness distribution of the image;
the mapping relation determining module 103 is configured to generate the brightness mapping relation according to the standard brightness distribution and weight corresponding to each scene and the image brightness distribution;
the adjusting module 104 is configured to adjust the luminance component of each pixel in the image according to the brightness mapping relation to generate the first processed image.
As can be seen from the above, for each scene contained in the image, the brightness mapping relation is generated according to the standard brightness distribution and weight corresponding to each scene and the image brightness distribution, and after the mapping relation is established the luminance components of the pixels in the image are adjusted, significantly improving the image processing effect and meeting users' demand for high-definition images.
In a possible embodiment, the image parameter determining module 102 is further configured to:
before the brightness mapping relation is generated according to the standard brightness distribution and weight corresponding to each scene and the image brightness distribution, generate the corresponding standard brightness distributions according to the different image scenes and determine the corresponding weights.
In a possible embodiment, the mapping relation determining module 103 is specifically configured to:
generate the combined brightness distribution according to the standard brightness distribution and weight corresponding to each scene, and generate the brightness mapping relation according to the combined brightness distribution and the image brightness distribution.
In a possible embodiment, the mapping relation determining module 103 is specifically configured to:
determine the first pixel ratio corresponding to each luminance component in the combined brightness distribution, determine the second pixel ratio corresponding to each luminance component in the image brightness distribution, determine the luminance components to be adjusted and the corresponding target luminance components according to the first pixel ratio and the second pixel ratio, and establish the mapping relation between the luminance components to be adjusted and the target luminance components; alternatively,
determine the third pixel ratio corresponding to each luminance component interval in the combined brightness distribution, determine the fourth pixel ratio corresponding to each luminance component interval in the image brightness distribution, determine the luminance components to be adjusted and the corresponding target luminance components according to the third pixel ratio and the fourth pixel ratio, and establish the mapping relation between the luminance components to be adjusted and the target luminance components.
In a possible embodiment, the mapping relation determining module 103 is specifically configured to:
traverse the luminance component of each pixel in the image to determine the luminance component range of the image;
intercept, from the combined brightness distribution, the target brightness distribution corresponding to the luminance component range; and
generate the brightness mapping relation according to the target brightness distribution and the brightness distribution of the image.
In a possible embodiment, the adjusting module 104 is further configured to:
after adjusting the luminance component of each pixel in the image according to the brightness mapping relation to generate the first processed image, perform edge recognition on the first processed image;
determine, according to the edge recognition result, the filtering kernel used to filter the image;
filter the first processed image based on the filtering kernel to obtain the low-frequency image and the high-frequency image corresponding to the first processed image;
determine the first gain coefficient of the high-frequency image and the second gain coefficient of the low-frequency image;
perform gain processing on the high-frequency image with the first gain coefficient to obtain the first gain image, and perform gain processing on the low-frequency image with the second gain coefficient to obtain the second gain image; and
fuse the first gain image and the second gain image to obtain the second processed image.
In a possible embodiment, performing gain processing on the low-frequency image with the second gain coefficient to obtain the second gain image comprises:
identifying the flat regions and non-flat regions in the low-frequency image according to the luminance information of each pixel in the low-frequency image;
splitting the low-frequency image according to the flat regions and the non-flat regions; and
performing gain processing on the split non-flat regions with the second gain coefficient and fusing them with the split flat regions to obtain the second gain image.
In a possible embodiment, the original image obtaining module 101 is further configured to:
before the image in the color mode in which luminance and color are separated is obtained, convert the original signal acquired by the image sensor into an image in the RGB color mode, and convert the image in the RGB color mode into the image in the color mode in which luminance and color are separated, the color mode including at least one of the YUV color mode, the LAB color mode and the HSV color mode.
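The RGB-to-YUV conversion mentioned here can be sketched per pixel with the BT.601 full-range matrix, one common choice; the patent does not specify which conversion matrix is used.

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using BT.601 full-range
    coefficients (an assumed, commonly used matrix)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

y, u, v = rgb_to_yuv(255, 255, 255)  # white: full luminance, no chroma
```

Because Y carries all the luminance, the brightness mapping and the filtering/gain stages can operate on Y alone while U and V, and hence the color, stay untouched.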
On the basis of the above embodiments, this embodiment provides a terminal device. Fig. 7 is a structural schematic diagram of a terminal device provided by an embodiment of the present application. As shown in Fig. 7, the terminal device 200 includes: a memory 201, a processor (Central Processing Unit, CPU) 202, a peripheral interface 203, an RF (Radio Frequency) circuit 205, an audio circuit 206, a loudspeaker 211, a power management chip 208, an input/output (I/O) subsystem 209, a touch screen 212, a Wifi module 213, other input/control devices 210 and an external port 204; these components communicate through one or more communication buses or signal lines 207.
It should be understood that graphic terminal 200 is only an example of terminal device, and terminal device 200 It can have than shown in the drawings more or less component, can combine two or more components, or can be with It is configured with different components.Various parts shown in the drawings can include one or more signal processings and/or dedicated It is realized in the combination of hardware, software or hardware and software including integrated circuit.
The terminal device for image processing provided in this embodiment is described in detail below, taking a smartphone as an example.
Memory 201: the memory 201 can be accessed by the CPU 202, the peripheral interface 203, and so on. The memory 201 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
Peripheral interface 203: the peripheral interface 203 can connect the input and output peripherals of the device to the CPU 202 and the memory 201.
I/O subsystem 209: the I/O subsystem 209 can connect the input/output peripherals of the device, such as the touch screen 212 and the other input/control devices 210, to the peripheral interface 203. The I/O subsystem 209 may include a display controller 2091 and one or more input controllers 2092 for controlling the other input/control devices 210. The one or more input controllers 2092 receive electrical signals from, or send electrical signals to, the other input/control devices 210; the other input/control devices 210 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that an input controller 2092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
Touch screen 212: the touch screen 212 is the input interface and output interface between the user terminal and the user, displaying visual output to the user; the visual output may include graphics, text, icons, video, and so on.
The display controller 2091 in the I/O subsystem 209 receives electrical signals from, or sends electrical signals to, the touch screen 212. The touch screen 212 detects contact on the touch screen, and the display controller 2091 converts the detected contact into interaction with the user interface objects displayed on the touch screen 212, thereby realizing human-computer interaction. The user interface objects displayed on the touch screen 212 may be icons for running games, icons for connecting to corresponding networks, and so on. It is worth noting that the device may also include a light mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
RF circuit 205: mainly used to establish communication between the mobile phone and the wireless network (i.e., the network side) and to realize data reception and transmission between the mobile phone and the wireless network, for example sending and receiving short messages, e-mails, and so on. Specifically, the RF circuit 205 receives and sends RF signals, which are also referred to as electromagnetic signals; the RF circuit 205 converts electrical signals into electromagnetic signals or converts electromagnetic signals into electrical signals, and communicates with communication networks and other devices through the electromagnetic signals. The RF circuit 205 may include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC (coder-decoder) chipset, a Subscriber Identity Module (SIM), and so on.
Audio circuit 206: mainly used to receive audio data from the peripheral interface 203, convert the audio data into an electrical signal, and send the electrical signal to the speaker 211.
Speaker 211: used to restore the voice signal received by the mobile phone from the wireless network through the RF circuit 205 into sound, and to play the sound to the user.
Power management chip 208: used for powering and managing the power of the hardware connected to the CPU 202, the I/O subsystem, and the peripheral interface.
The image processing apparatus and the terminal device provided in the above embodiments can execute the image processing method for a terminal device provided by any embodiment of the present application, and have the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in the above embodiments, reference may be made to the image processing method for a terminal device provided by any embodiment of the present application.
An embodiment of the present application also provides a storage medium containing terminal-device-executable instructions, which, when executed by a terminal device processor, are used to execute an image processing method, the method comprising:
Obtaining an image in a luminance-separated color mode;
Determining each scene contained in the image and the image brightness distribution;
Generating a brightness mapping relation according to the standard brightness distribution corresponding to each scene, the weight values, and the image brightness distribution;
Adjusting the luminance component of each pixel in the image according to the brightness mapping relation to generate a first processed image.
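Once the brightness mapping relation is available, the per-pixel adjustment of the luminance component amounts to applying a lookup table. A minimal sketch, assuming 8-bit luminance and a 256-entry mapping (the names here are illustrative, not from the patent):

```python
import numpy as np

def apply_brightness_mapping(y_channel, mapping):
    """Adjust the luminance component of each pixel via a 256-entry lookup table."""
    lut = np.asarray(mapping, dtype=np.uint8)
    return lut[y_channel]  # vectorized per-pixel lookup
```

Because only the luminance plane is rewritten, the chrominance components of the image are left untouched by this step.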
In a possible embodiment, before generating the brightness mapping relation according to the standard brightness distribution corresponding to each scene, the weight values, and the image brightness distribution, the method further includes:
Generating the corresponding standard brightness distribution for each different image scene, and determining the corresponding weight value.
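The document does not spell out how the per-scene standard distributions and weight values are merged; one plausible reading, consistent with the "combined brightness distribution" used elsewhere in the description, is a weighted sum of per-scene histograms. A sketch under that assumption:

```python
import numpy as np

def combine_distributions(scene_hists, weights):
    """Weighted sum of per-scene standard brightness histograms, renormalized."""
    combined = np.zeros(256, dtype=np.float64)
    for hist, weight in zip(scene_hists, weights):
        combined += weight * np.asarray(hist, dtype=np.float64)
    return combined / combined.sum()  # normalize to pixel proportions
```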
In a possible embodiment, generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution includes:
Determining the first pixel proportion corresponding to each luminance component in the combined brightness distribution, determining the second pixel proportion corresponding to each luminance component in the image brightness distribution, determining the luminance components to be adjusted and the corresponding target luminance components according to the first pixel proportion and the second pixel proportion, and establishing the mapping relation between the luminance components to be adjusted and the target luminance components; or,
Determining the third pixel proportion corresponding to each luminance component interval in the combined brightness distribution, determining the fourth pixel proportion corresponding to each luminance component interval in the image brightness distribution, determining the luminance components to be adjusted and the corresponding target luminance components according to the third pixel proportion and the fourth pixel proportion, and establishing the mapping relation between the luminance components to be adjusted and the target luminance components.
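Matching pixel proportions between the combined distribution and the image's distribution, as described above, is essentially histogram specification: each source luminance level is mapped to the target level with the nearest cumulative pixel proportion. A minimal sketch under that interpretation, assuming 256 luminance levels:

```python
import numpy as np

def build_brightness_mapping(image_hist, combined_hist):
    """Map each source luminance level to the target level with the nearest
    cumulative pixel proportion (classic histogram specification)."""
    src_cdf = np.cumsum(image_hist) / np.sum(image_hist)        # image pixel proportions
    tgt_cdf = np.cumsum(combined_hist) / np.sum(combined_hist)  # target pixel proportions
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255)
    return mapping.astype(np.uint8)
```

The resulting table is monotonic, so bright pixels stay at least as bright as darker ones after the adjustment.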
In a possible embodiment, generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution includes:
Traversing the luminance component of each pixel in the image to determine the luminance component range of the image;
Intercepting, from the combined brightness distribution, a target brightness distribution corresponding to the luminance component range;
Generating the brightness mapping relation according to the target brightness distribution and the image brightness distribution.
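Intercepting a target distribution limited to the image's actual luminance range can be sketched as follows (the histogram representation and the renormalization step are assumptions, not details given in the patent):

```python
import numpy as np

def truncate_target_distribution(combined_hist, y_min, y_max):
    """Keep only the part of the combined distribution inside the image's
    luminance component range [y_min, y_max], renormalized."""
    target = np.zeros_like(combined_hist, dtype=np.float64)
    target[y_min:y_max + 1] = combined_hist[y_min:y_max + 1]
    total = target.sum()
    return target / total if total > 0 else target
```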
In a possible embodiment, after adjusting the luminance component of each pixel in the image according to the brightness mapping relation to generate the first processed image, the method further includes:
Performing edge recognition on the first processed image;
Determining, according to the edge recognition result, a filtering kernel for filtering the image;
Filtering the first processed image based on the filtering kernel to obtain a low-frequency image and a high-frequency image corresponding to the first processed image;
Determining a first gain coefficient of the high-frequency image and a second gain coefficient of the low-frequency image;
Performing gain processing on the high-frequency image using the first gain coefficient to obtain a first gain image, and performing gain processing on the low-frequency image using the second gain coefficient to obtain a second gain image;
Fusing the first gain image and the second gain image to obtain a second processed image.
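The low/high-frequency split with separate gains can be illustrated with a simple box-filter low-pass standing in for the edge-adaptive filtering kernel (the kernel choice and the gain values here are placeholders, not the patent's):

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with edge padding; a stand-in for the adaptive filtering kernel."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def gain_fuse(image, high_gain=1.5, low_gain=1.0, k=5):
    """Split into low/high frequency, apply the two gains, then fuse."""
    img = image.astype(np.float64)
    low = box_blur(img, k)   # low-frequency image
    high = img - low         # high-frequency (detail) image
    fused = low_gain * low + high_gain * high
    return np.clip(fused, 0, 255).astype(np.uint8)
```

With `high_gain > low_gain` this boosts detail relative to the smooth base layer, which matches the stated purpose of gaining the two bands separately before fusion.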
In a possible embodiment, performing gain processing on the low-frequency image using the second gain coefficient to obtain the second gain image includes:
Identifying flat regions and non-flat regions in the low-frequency image according to the luminance information of each pixel in the low-frequency image;
Splitting the low-frequency image according to the flat regions and the non-flat regions;
Performing gain processing on the split non-flat regions using the second gain coefficient, and fusing the result with the split flat regions to obtain the second gain image.
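The patent does not say how flatness is judged from the luminance information; a common criterion is a local-variance threshold. A minimal sketch under that assumption (`window` and `var_threshold` are illustrative parameters):

```python
import numpy as np

def split_flat_regions(low_freq, window=3, var_threshold=25.0):
    """Mark a pixel as flat when the local luminance variance in a
    window x window neighbourhood falls below var_threshold."""
    img = low_freq.astype(np.float64)
    pad = window // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    acc = np.zeros_like(img)
    acc_sq = np.zeros_like(img)
    for dy in range(window):
        for dx in range(window):
            patch = p[dy:dy + h, dx:dx + w]
            acc += patch
            acc_sq += patch * patch
    n = window * window
    variance = acc_sq / n - (acc / n) ** 2
    return variance < var_threshold  # True where the region is flat
```

The boolean mask can then drive the split: gain is applied only where the mask is False (non-flat), and the flat pixels are carried over unchanged before fusion.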
In a possible embodiment, before acquiring the image in the luminance-separated color mode, the method further includes:
Converting the original signal captured by the image sensor into an image in the RGB color mode, and converting the image in the RGB color mode into an image in a luminance-separated color mode, where the luminance-separated color mode includes at least one of the YUV color mode, the LAB color mode, and the HSV color mode.
Storage medium — any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media, such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (e.g., hard disks or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet); the second computer system may provide the program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network). The storage medium may store program instructions (for example, embodied as a computer program) executable by one or more processors.
Of course, for a storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the image processing method operations described above, and may also be used to perform related operations in the image processing method provided by any embodiment of the present application.
Note that the above are only preferred embodiments and the applied technical principles of the present application. Those skilled in the art will understand that the present application is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the scope of protection of the present application. Therefore, although the present application has been described in further detail through the above embodiments, the present application is not limited to the above embodiments and may also include other equivalent embodiments without departing from the concept of the present application; the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. An image processing method, characterized by comprising:
obtaining an image in a luminance-separated color mode;
determining each scene contained in the image and the image brightness distribution;
generating a brightness mapping relation according to the standard brightness distribution corresponding to each scene, the weight values, and the image brightness distribution;
adjusting the luminance component of each pixel in the image according to the brightness mapping relation to generate a first processed image.
2. The method according to claim 1, characterized in that, before generating the brightness mapping relation according to the standard brightness distribution corresponding to each scene, the weight values, and the image brightness distribution, the method further comprises:
generating the corresponding standard brightness distribution for each different image scene, and determining the corresponding weight value.
3. The method according to claim 1, characterized in that generating the brightness mapping relation according to the standard brightness distribution corresponding to each scene, the weight values, and the image brightness distribution comprises:
looking up, for the brightness value of each pixel in the image brightness distribution, the corresponding pixel brightness value in the standard brightness distribution under each scene to obtain a first contrast coefficient table, and weighting the contrast coefficient values in the first contrast coefficient table according to the weight values to obtain a second contrast coefficient table.
4. The method according to claim 1, characterized in that generating the brightness mapping relation according to the standard brightness distribution corresponding to each scene, the weight values, and the image brightness distribution comprises:
generating a combined brightness distribution according to the standard brightness distribution and the weight value corresponding to each scene;
generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution.
5. The method according to claim 4, characterized in that generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution comprises:
looking up, for the brightness value of each pixel in the image brightness distribution, the corresponding pixel brightness value in the combined brightness distribution to obtain a contrast coefficient matrix;
correspondingly, adjusting the luminance component of each pixel in the image according to the brightness mapping relation to generate the first processed image comprises:
compensating the color of each pixel in the image brightness distribution according to the contrast coefficient matrix to obtain the first processed image.
6. The method according to claim 4, characterized in that generating the brightness mapping relation according to the combined brightness distribution and the image brightness distribution comprises:
traversing the luminance component of each pixel in the image to determine the luminance component range of the image;
intercepting, from the combined brightness distribution, a target brightness distribution corresponding to the luminance component range;
generating the brightness mapping relation according to the target brightness distribution and the image brightness distribution.
7. The method according to any one of claims 1-6, characterized in that, after adjusting the luminance component of each pixel in the image according to the brightness mapping relation to generate the first processed image, the method further comprises:
performing edge recognition on the first processed image;
determining, according to the edge recognition result, a filtering kernel for filtering the image;
filtering the first processed image based on the filtering kernel to obtain a low-frequency image and a high-frequency image corresponding to the first processed image;
determining a first gain coefficient of the high-frequency image and a second gain coefficient of the low-frequency image;
performing gain processing on the high-frequency image using the first gain coefficient to obtain a first gain image, and performing gain processing on the low-frequency image using the second gain coefficient to obtain a second gain image;
fusing the first gain image and the second gain image to obtain a second processed image.
8. An image processing apparatus, characterized by comprising:
an original image acquisition module, configured to obtain an image in a luminance-separated color mode;
an image parameter determination module, configured to determine each scene contained in the image and the image brightness distribution;
a mapping relation determination module, configured to generate a brightness mapping relation according to the standard brightness distribution corresponding to each scene, the weight values, and the image brightness distribution;
an adjustment module, configured to adjust the luminance component of each pixel in the image according to the brightness mapping relation to generate a first processed image.
9. A terminal device, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the image processing method according to any one of claims 1-7.
10. A storage medium containing terminal-device-executable instructions, characterized in that the terminal-device-executable instructions, when executed by a terminal device processor, are used to execute the image processing method according to any one of claims 1-7.
CN201811630233.4A 2018-12-28 2018-12-28 Image processing method, device, terminal device and storage medium Pending CN109697738A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811630233.4A CN109697738A (en) 2018-12-28 2018-12-28 Image processing method, device, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811630233.4A CN109697738A (en) 2018-12-28 2018-12-28 Image processing method, device, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN109697738A true CN109697738A (en) 2019-04-30

Family

ID=66232349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811630233.4A Pending CN109697738A (en) 2018-12-28 2018-12-28 Image processing method, device, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN109697738A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102214357A (en) * 2011-06-22 2011-10-12 王洪剑 Image enhancement method and system
CN108090879A (en) * 2017-12-12 2018-05-29 上海顺久电子科技有限公司 The method and display device that a kind of high dynamic range images to input are handled

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120021A (en) * 2019-05-05 2019-08-13 腾讯科技(深圳)有限公司 Method of adjustment, device, storage medium and the electronic device of brightness of image
US11587530B2 (en) 2019-05-05 2023-02-21 Tencent Technology (Shenzhen) Company Limited Method and apparatus for adjusting image luminance, storage medium, and electronic device
CN110349271A (en) * 2019-07-11 2019-10-18 Oppo广东移动通信有限公司 Lens color adaptation method, apparatus, storage medium and augmented reality equipment
CN110349271B (en) * 2019-07-11 2023-10-20 Oppo广东移动通信有限公司 Lens color adjustment method, device, storage medium and augmented reality equipment
CN112543278A (en) * 2019-09-20 2021-03-23 青岛海信移动通信技术股份有限公司 Method and terminal for adjusting contrast
CN112543278B (en) * 2019-09-20 2022-05-27 青岛海信移动通信技术股份有限公司 Method and terminal for adjusting contrast
CN111402165A (en) * 2020-03-18 2020-07-10 展讯通信(上海)有限公司 Image processing method, device, equipment and storage medium
CN111681189A (en) * 2020-06-17 2020-09-18 深圳开立生物医疗科技股份有限公司 Method, device and equipment for improving image brightness uniformity and storage medium
CN111681189B (en) * 2020-06-17 2023-11-17 深圳开立生物医疗科技股份有限公司 Method, device, equipment and storage medium for improving image brightness uniformity
CN112381836A (en) * 2020-11-12 2021-02-19 贝壳技术有限公司 Image processing method and device, computer readable storage medium, and electronic device
CN112508820A (en) * 2020-12-18 2021-03-16 维沃移动通信有限公司 Image processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN109272459B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109697738A (en) Image processing method, device, terminal device and storage medium
CN109146814B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109727215A (en) Image processing method, device, terminal device and storage medium
CN109639982B (en) Image noise reduction method and device, storage medium and terminal
CN108900819A (en) Image processing method, device, storage medium and electronic equipment
CN109727216A (en) Image processing method, device, terminal device and storage medium
CN109685746B (en) Image brightness adjusting method and device, storage medium and terminal
CN109741280B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110766621B (en) Image processing method, image processing device, storage medium and electronic equipment
CN107633252B (en) Skin color detection method, device and storage medium
CN109714582B (en) White balance adjusting method, device, storage medium and terminal
CN109741281A (en) Image processing method, device, storage medium and terminal
CN109741288A (en) Image processing method, device, storage medium and electronic equipment
CN109618098A (en) A kind of portrait face method of adjustment, device, storage medium and terminal
CN111739041B (en) Image frame clipping method, device and equipment
CN110519485A (en) Image processing method, device, storage medium and electronic equipment
CN113034509A (en) Image processing method and device
CN109583330B (en) Pore detection method for face photo
CN112446830A (en) Image color edge processing method and device, storage medium and electronic equipment
CN109003272A (en) Image processing method, apparatus and system
CN109672829B (en) Image brightness adjusting method and device, storage medium and terminal
CN115660997B (en) Image data processing method and device and electronic equipment
CN111970501A (en) Pure color scene AE color processing method and device, electronic equipment and storage medium
TW202034686A (en) Image processing method and electronic device thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination