CN109741281A - Image processing method, device, storage medium and terminal - Google Patents

Image processing method, device, storage medium and terminal

Info

Publication number
CN109741281A
Authority
CN
China
Prior art keywords
image
sample
processed
mapping matrix
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910008661.1A
Other languages
Chinese (zh)
Other versions
CN109741281B (en)
Inventor
王会朝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910008661.1A priority Critical patent/CN109741281B/en
Publication of CN109741281A publication Critical patent/CN109741281A/en
Application granted granted Critical
Publication of CN109741281B publication Critical patent/CN109741281B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application disclose an image processing method, device, storage medium and terminal. The method includes: obtaining an image to be processed in a color mode in which luminance and chrominance are separated; inputting the image to be processed into a pre-trained brightness mapping matrix determination model; determining, according to the output of the model, a brightness mapping matrix corresponding to the image to be processed; and adjusting the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, to generate a brightness-adjusted target image. By adopting the above technical solution, the embodiments of the present application can accurately and rapidly determine the brightness mapping matrix of an image in a luminance-chrominance separated color mode, and process only the independent luminance component of the image to be processed on the basis of that matrix, without adjusting any color component. This ensures that the image color remains intact throughout the processing, that the processing neither affects nor alters the color, and that the resulting image color is clear and undistorted.

Description

Image processing method, device, storage medium and terminal
Technical field
Embodiments of the present application relate to the technical field of image processing, and in particular to an image processing method, device, storage medium and terminal.
Background technique
With the rapid development of terminal technology, electronic devices such as mobile phones and tablet computers are now equipped with image acquisition functions, and users place ever higher quality requirements on the images a terminal captures.
At present, after an image is acquired, brightening is generally applied to it so that the darker regions of the image become clear, details that are visually hard to distinguish are revealed, and the clarity of the whole image improves. In the above brightening approach, however, the RGB value of each pixel in the image is usually enhanced, which easily leads to the following problem: colors in the image that are close to grey become distorted after over-enhancement and, compared with the colors of bright areas, appear blurred. Image distortion causes loss of detail in the distorted region, and the distorted region is often exactly the region the user cares about, such as a face region in the image.
Summary of the invention
Embodiments of the present application provide an image processing method, device, storage medium and terminal, which can optimize the brightening schemes in the related art.
In a first aspect, an embodiment of the present application provides an image processing method, comprising:
obtaining an image to be processed in a luminance-chrominance separated color mode;
inputting the image to be processed into a pre-trained brightness mapping matrix determination model;
determining, according to the output of the brightness mapping matrix determination model, a brightness mapping matrix corresponding to the image to be processed; and
adjusting the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, to generate a brightness-adjusted target image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, comprising:
an image obtaining module, configured to obtain an image to be processed in a luminance-chrominance separated color mode;
an image input module, configured to input the image to be processed into a pre-trained brightness mapping matrix determination model;
a brightness mapping matrix determining module, configured to determine, according to the output of the brightness mapping matrix determination model, a brightness mapping matrix corresponding to the image to be processed; and
a target image generation module, configured to adjust the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, to generate a brightness-adjusted target image.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the image processing method described in the embodiments of the present application is implemented.
In a fourth aspect, an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the image processing method described in the embodiments of the present application.
The image processing scheme provided in the embodiments of the present application obtains an image to be processed in a luminance-chrominance separated color mode, inputs the image to be processed into a pre-trained brightness mapping matrix determination model, determines a brightness mapping matrix corresponding to the image to be processed according to the output of the model, and then adjusts the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, to generate a brightness-adjusted target image. By adopting the above technical solution, the brightness mapping matrix determination model can accurately and rapidly determine the brightness mapping matrix of an image in a luminance-chrominance separated color mode, and only the independent luminance component of the image to be processed is handled on the basis of that matrix, without adjusting any color component. This ensures that the image color remains intact throughout the above image processing, that the processing neither affects nor alters the color, and that the image color stays clear and undistorted.
Detailed description of the invention
Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another image processing method provided by an embodiment of the present application;
Fig. 3 is a flowchart of another image processing method provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of an image brightness distribution provided by an embodiment of the present application;
Fig. 5 is a schematic curve of a brightness mapping relation provided by an embodiment of the present application;
Fig. 6 is a flowchart of another image processing method provided by an embodiment of the present application;
Fig. 7 is a structural block diagram of an image processing apparatus provided by an embodiment of the present application;
Fig. 8 is a structural schematic diagram of a terminal provided by an embodiment of the present application;
Fig. 9 is a structural schematic diagram of another terminal provided by an embodiment of the present application.
Specific embodiment
The technical solution of the present application is further illustrated below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are used only to explain the application, not to restrict it. It should further be noted that, for ease of description, the drawings show only the parts relevant to the application rather than the entire structure.
Before the exemplary embodiments are discussed in greater detail, it should be mentioned that some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes the steps as a sequence, many of the steps can be implemented in parallel, concurrently or simultaneously, and the order of the steps can be rearranged. A process may be terminated when its operations are completed, and it may also have additional steps not included in the drawing. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present application. The method can be executed by an image processing device, which can be implemented by software and/or hardware and can generally be integrated in a terminal. As shown in Fig. 1, the method includes:
Step 101: obtain an image to be processed in a luminance-chrominance separated color mode.
Illustratively, the terminal in the embodiments of the present application may include electronic devices that display images, such as mobile phones, tablet computers, laptops and desktop computers. An operating system is integrated in the terminal; the embodiments of the present application place no limitation on its type, which may include, for example, the Android operating system, the Windows operating system and Apple's iOS operating system.
It should be noted that color is usually described by three relatively independent attributes; the combined action of three independent variables naturally constitutes a space coordinate, which is what is called a color mode. Color modes can be divided into primary color modes and luminance-chrominance separated color modes. For example, primary color modes include but are not limited to the RGB color mode, and luminance-chrominance separated color modes include but are not limited to the YUV color mode, the Lab color mode, the HSV color mode and the HSB color mode. In the YUV color mode, the Y component characterizes brightness, while the U and V components characterize chrominance and jointly represent the color of the image. In the Lab color mode, the L component characterizes lightness, and a and b jointly represent color. In the HSB color mode, the H component characterizes hue, the S component characterizes saturation, and the B component characterizes brightness. In the HSV color mode, the H component characterizes hue, the S component characterizes saturation, and the V component characterizes value, that is, brightness. In an image in a luminance-chrominance separated color mode, the luminance component and the color components can be extracted separately, so the image can be processed in either brightness or color; illustratively, processing the luminance component causes no impact whatsoever on the color components of the image.
In the embodiments of the present application, an image to be processed in a luminance-chrominance separated color mode is obtained, where the image to be processed can be understood as an image whose brightness needs to be adjusted. The image to be processed may be an image shot by a camera according to a shooting instruction, an image acquired before the shooting instruction is executed and presented on the terminal screen for the user to preview, an image obtained from the terminal's photo album, or an image obtained from a cloud platform. It should be noted that the embodiments of the present application place no limitation on the source or acquisition method of the image to be processed in a luminance-chrominance separated color mode.
Optionally, when it is detected that an image brightness adjustment event is triggered, the image to be processed in a luminance-chrominance separated color mode is obtained. It can be understood that, in order to adjust image brightness at a suitable time, the trigger condition of the image brightness adjustment event can be preset. Illustratively, to meet the user's visual demands on acquired images, the image brightness adjustment event may be triggered when the camera is detected to be in the on state. Optionally, when the user is dissatisfied with the brightness of a certain image in the terminal, the image brightness adjustment event may be triggered when it is detected that the user actively enables the image brightness adjustment permission. Optionally, in order to apply image brightness adjustment to more valuable time windows and save the extra power consumption it brings, the time windows and application scenarios of image brightness adjustment can be analyzed or investigated, reasonable preset scenes can be configured, and the image brightness adjustment event is triggered when the terminal is detected to be in a preset scene. It should be noted that the embodiments of the present application place no limitation on the specific form in which the image brightness adjustment event is triggered.
In this embodiment, the acquired image in a luminance-chrominance separated color mode may be an image in the YUV color mode, the Lab color mode, the HSV color mode or the HSB color mode. When the image processing method of the present application is applied to a mobile phone, optionally, the acquired image is an image in the YUV color mode, which can be processed directly after the image acquisition device collects it, without extra image conversion; this reduces the conversion process of the image and improves image processing efficiency.
A generation method of the image to be processed in the YUV color mode includes: converting an original signal obtained by an image sensor into an image in the RGB color mode, and generating the image to be processed in the YUV color mode from the image in the RGB color mode.
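As an illustrative sketch of this conversion step (the full-range BT.601 coefficients below are an assumption for illustration; the patent only states that the sensor's RGB image is converted into a YUV image to be processed):

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an HxWx3 RGB image into separate Y, U, V planes.

    Full-range BT.601 coefficients are assumed here for illustration; the
    key property is that Y carries brightness while U and V carry color.
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luminance component
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128.0   # chrominance (blue diff)
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128.0    # chrominance (red diff)
    return y, u, v

# A grey image maps to Y equal to its grey level and neutral U = V = 128,
# so brightness edits on Y alone cannot disturb its (absent) color.
y, u, v = rgb_to_yuv(np.full((2, 2, 3), 200, dtype=np.uint8))
```

Adjusting only the `y` plane and converting back leaves `u` and `v`, and hence the image's color, untouched, which is the property the embodiments rely on.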
Step 102: input the image to be processed into the pre-trained brightness mapping matrix determination model.
In the embodiments of the present application, the brightness mapping matrix determination model can be understood as a learning model that, after the image to be processed is input, rapidly determines the brightness mapping matrix corresponding to that image. The brightness mapping matrix determination model may be a learning model generated by training on acquired sample images and their corresponding sample brightness mapping matrices, where a sample brightness mapping matrix can be understood as the per-pixel brightness mapping relation by which a sample image is changed into the brightness-adjusted image after brightness adjustment. It can be understood that, by learning from the sample images, the corresponding sample brightness mapping matrices, and the correspondence between the two, the brightness mapping matrix determination model can be generated.
Step 103: determine, according to the output of the brightness mapping matrix determination model, the brightness mapping matrix corresponding to the image to be processed.
Illustratively, after the image to be processed is input into the brightness mapping matrix determination model, the model analyzes the image and determines the corresponding brightness mapping matrix based on the analysis result. It can be understood that the brightness mapping matrix has the same size as the image to be processed, and each element in it indicates the value by which the luminance component at the corresponding position in the image needs to be adjusted. Illustratively, the first element in the brightness mapping matrix (the element in the first row and first column) can represent the value by which the luminance component of the first pixel in the image to be processed (the pixel in the first row and first column) needs to be adjusted. For example, if the first element in the brightness mapping matrix is 5, the luminance component of the first pixel in the image to be processed needs to be increased by 5 on its original basis; if that pixel's original luminance component is 20, its adjusted target luminance component is 25.
Step 104: adjust the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, to generate the brightness-adjusted target image.
Illustratively, each pixel in the image to be processed is traversed, the luminance component of each pixel is obtained, and the corresponding target luminance component is determined based on the brightness mapping matrix, where the target luminance component of a pixel can be understood as the sum of that pixel's luminance component and the matrix element at the corresponding position. The luminance component of each pixel is adjusted to its target luminance component, thereby realizing the brightness adjustment of the image to be processed and obtaining the brightness-adjusted target image.
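The per-pixel adjustment described above can be sketched as follows; clipping the sum to the valid 8-bit range is an added assumption, since overflow handling is not specified:

```python
import numpy as np

def apply_brightness_mapping(y_plane, mapping_matrix):
    """Adjust each luminance component by the matrix element at the same
    position (target luminance = original luminance + adjustment value)."""
    target = y_plane.astype(np.int16) + mapping_matrix.astype(np.int16)
    return np.clip(target, 0, 255).astype(np.uint8)  # keep the 8-bit range

# The example from the text: an original luminance of 20 with a matrix
# element of 5 yields a target luminance of 25.
y = np.array([[20, 40], [60, 250]], dtype=np.uint8)
m = np.array([[5, -10], [0, 20]], dtype=np.int16)
adjusted = apply_brightness_mapping(y, m)
```

Only the luminance plane is passed in, so the chrominance planes, and with them the image's color, are never touched.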
In one embodiment, a preview image or shot image in the YUV color mode acquired by the camera is shown on the display interface of the terminal (a mobile phone), and a color enhancement instruction input by the user is obtained, where the color enhancement instruction may be input by the user by clicking a virtual control in the display interface or by means such as a touch gesture or a voice command. The luminance components of the image shown in the display interface are traversed according to the color enhancement instruction, the image is input into the pre-trained brightness mapping matrix determination model, the brightness mapping matrix of the image is determined, and the luminance component of each pixel in the image is adjusted based on that matrix, obtaining the color-mapped image. In this embodiment, only the Y component of the image undergoes the mapping process; the ratio between U and V is entirely unaffected and the color components undergo no change whatsoever, that is, the color of the image remains intact and no region of the image shows any distortion. The color is virtually amplified through the variation of brightness, enhancing color expressiveness without damaging the color, so that colors appear clearer and more vivid.
The image processing method provided in the embodiments of the present application obtains an image to be processed in a luminance-chrominance separated color mode, inputs it into a pre-trained brightness mapping matrix determination model, determines the brightness mapping matrix corresponding to the image to be processed according to the output of the model, and then adjusts the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, generating the brightness-adjusted target image. By adopting the above technical solution, the brightness mapping matrix determination model can accurately and rapidly determine the brightness mapping matrix of an image in a luminance-chrominance separated color mode, and only the independent luminance component of the image is handled on the basis of that matrix, without adjusting any color component. This ensures that the image color remains intact throughout the processing, that the processing neither affects nor alters the color, and that the image color stays clear and undistorted.
Fig. 2 is a flowchart of an image processing method provided by an embodiment of the present application; the method includes the following steps:
Step 201: obtain a sample original image in a luminance-chrominance separated color mode.
In the embodiments of the present application, an image in a luminance-chrominance separated color mode is obtained and used as the sample original image, where the luminance-chrominance separated color mode includes any one of the YUV color mode, the Lab color mode, the HSV color mode and the HSB color mode.
Step 202: perform brightness adjustment on the sample original image to obtain a sample target image corresponding to the sample original image.
In the embodiments of the present application, brightness adjustment is performed on the sample original image using a conventional image brightness adjustment method to obtain the sample target image corresponding to the sample original image. Optionally, the sample original image is input into an ISP (Image Signal Processor) tool and its brightness is adjusted manually, and the image adjusted to the best brightness display effect is taken as the sample target image corresponding to the sample original image. When adjusting the brightness of the sample original image, whether the image has been adjusted to the best brightness effect can be confirmed by the visual perception of the human eye, or assessed by an image quality assessment standard, until the image with the best brightness display effect is obtained.
Step 203: according to the sample original image and the sample target image, determine the sample brightness mapping matrix corresponding to changing the sample original image into the sample target image.
In the embodiments of the present application, according to the sample original image and its corresponding sample target image, the sample brightness mapping matrix corresponding to changing the sample original image into the sample target image is determined; that is, the brightness mapping matrix used in the brightness adjustment process when the sample original image is adjusted into the sample target image is determined.
Optionally, determining, according to the sample original image and the sample target image, the sample brightness mapping matrix corresponding to changing the sample original image into the sample target image includes: obtaining the first luminance component of each pixel in the sample original image and the second luminance component of each pixel in the sample target image; and for all pixels, taking the difference between each pixel's second luminance component value and first luminance component value as that pixel's brightness adjustment value in the sample brightness mapping matrix. Illustratively, the first luminance component of the first pixel in the sample original image (the pixel in the first row and first column) and the second luminance component of the first pixel in the sample target image (the pixel in the first row and first column) are obtained, and the difference between the second and the first luminance component is taken as the brightness adjustment value in the first row and first column of the sample brightness mapping matrix. In the above manner, by analogy, the brightness adjustment value of each element in the sample brightness mapping matrix is determined in turn.
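A minimal sketch of this per-pixel difference (assuming the two luminance planes have already been extracted as same-size arrays):

```python
import numpy as np

def sample_brightness_mapping_matrix(original_y, target_y):
    """Second (target) luminance minus first (original) luminance, per pixel.

    A signed type is used so the matrix can hold negative adjustment values
    for pixels that were darkened.
    """
    return target_y.astype(np.int16) - original_y.astype(np.int16)

orig_y = np.array([[20, 100]], dtype=np.uint8)
tgt_y = np.array([[25, 90]], dtype=np.uint8)
matrix = sample_brightness_mapping_matrix(orig_y, tgt_y)
```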
Step 204: label the sample original image with the sample brightness mapping matrix to obtain the training sample set.
Illustratively, each sample original image is labeled with its obtained sample brightness mapping matrix, and the sample original images labeled with corresponding sample brightness mapping matrices are used as the training sample set of the brightness mapping matrix determination model.
Step 205: train a preset machine learning model with the training sample set to obtain the brightness mapping matrix determination model.
Illustratively, a preset machine learning model is trained with the training sample set to generate the brightness mapping matrix determination model. The preset machine learning model may include machine learning models such as a convolutional neural network model or a long short-term memory (LSTM) network model; the embodiments of the present application place no limitation on the preset machine learning model.
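The patent does not fix an architecture, but since the predicted brightness mapping matrix must have the same height and width as the input image, a fully convolutional network is a natural candidate. As an illustrative, untrained sketch of one such forward step (a single 'same'-padded 3x3 convolution in plain numpy; a real model would stack trained layers):

```python
import numpy as np

def conv2d_same(y_plane, kernel, bias=0.0):
    """One zero-padded 3x3 convolution over an HxW luminance plane.

    The output is HxW, i.e. exactly the shape a brightness mapping matrix
    for the same image must have.
    """
    h, w = y_plane.shape
    padded = np.pad(y_plane.astype(np.float32), 1)
    out = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel) + bias
    return out

kernel = np.full((3, 3), 1.0 / 9.0)  # placeholder weights, not trained
pred = conv2d_same(np.full((4, 5), 90.0), kernel)
```

Training would then minimize the difference between `pred` and the labeled sample brightness mapping matrix over the training sample set.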
Step 206: obtain an image to be processed in a luminance-chrominance separated color mode.
Step 207: input the image to be processed into the pre-trained brightness mapping matrix determination model.
Step 208: determine, according to the output of the brightness mapping matrix determination model, the brightness mapping matrix corresponding to the image to be processed.
Step 209: adjust the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, to generate the brightness-adjusted target image.
The brightness mapping matrix determination model is obtained before the image to be processed in a luminance-chrominance separated color mode is obtained. It should be noted that the terminal may obtain the above training sample set and train the preset machine learning model with it, directly generating the brightness mapping matrix determination model. The terminal may also directly call a brightness mapping matrix determination model trained and generated by another terminal device; for example, before leaving the factory, the above training sample set is obtained and the brightness mapping matrix determination model is generated using a terminal, and the model is then stored in the terminal for other terminal devices to use directly. Alternatively, a server obtains a large number of the above training samples and trains a preset machine learning model on the training sample set to obtain the brightness mapping matrix determination model; when the terminal needs to perform image processing, it calls the trained brightness mapping matrix determination model from the server.
The image processing method provided by the embodiments of the present application obtains an image to be processed in a luminance-chrominance separated color mode, inputs it into a pre-trained brightness mapping matrix determination model, determines the brightness mapping matrix corresponding to the image to be processed according to the output of the model, and then adjusts the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, generating the brightness-adjusted target image, where the brightness mapping matrix determination model is trained and generated based on sample original images labeled with sample brightness mapping matrices. By adopting the above technical solution, sample original images in a luminance-chrominance separated color mode, together with the sample brightness mapping matrices obtained when adjusting their brightness, can be used efficiently for the training of the brightness mapping matrix determination model, which can effectively improve the accuracy of the model. At the same time, the brightness mapping matrix matching the image to be processed can be determined accurately by the model and used to perform brightness adjustment on the image to be processed, which can effectively improve image quality.
Fig. 3 is a flowchart of an image processing method provided by an embodiment of the present application; the method includes the following steps:
Step 301: obtain a sample original image in a luminance-chrominance separated color mode.
Step 302: traverse the luminance component of each pixel in the sample original image, and determine the number of pixels corresponding to each luminance component.
Illustratively, the luminance component of each pixel in the sample original image is traversed; for example, in a sample original image in the YUV color mode, the Y component of each pixel in the image is extracted, and the pixels corresponding to each luminance component are counted. Optionally, the image data in the YUV color mode is stored in planar format, that is, the three components Y, U and V are stored in different matrices; when traversing the luminance component of each pixel in the sample original image, the matrix used to store the Y component is read separately, which yields the luminance component of each pixel in the image.
Step 303: generate the brightness distribution of the sample original image according to each luminance component and the number of pixels corresponding to each luminance component.
The data of the brightness distribution can be displayed in the form of a histogram, a brightness distribution curve or an integral plot. Optionally, step 303 is: generate the brightness histogram of the sample original image according to each luminance component and the number of pixels corresponding to each luminance component. Illustratively, referring to Fig. 4, which is a schematic diagram of an image brightness distribution provided by an embodiment of the present application, the horizontal axis is each luminance component of the sample original image, ranging from 0 to 255, and the vertical axis is the number of pixels corresponding to each luminance component in the sample original image. The brightness distribution can reflect the overall brightness of the image: when the proportion of pixels with luminance components of 1-128 is large, the sample original image is dark overall; when the proportion of pixels with luminance components of 128-255 is large, the sample original image is bright overall.
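Counting the pixels per luminance component, as described, amounts to a 256-bin count (8-bit luminance is assumed, matching the 0-255 axis of Fig. 4):

```python
import numpy as np

def luminance_distribution(y_plane):
    """Number of pixels for each luminance component 0-255; this is the
    data plotted in Fig. 4 (x-axis: component, y-axis: pixel count)."""
    return np.bincount(y_plane.ravel(), minlength=256)

y = np.array([[10, 10], [200, 10]], dtype=np.uint8)
hist = luminance_distribution(y)  # three pixels at level 10, one at 200
```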
Step 304: generate a brightness mapping relation according to a preset standard brightness distribution and the brightness distribution of the sample original image.
Illustratively, the preset standard brightness distribution contains, for each luminance component from 0 to 255, the standard proportion that the number of pixels corresponding to that component occupies in the total number of pixels of the image. When the brightness distribution of the sample original image conforms to the preset standard brightness distribution, the image satisfies the user's brightness demands. When there is a difference between the brightness distribution of the sample original image and the preset standard brightness distribution, the luminance components of the pixels in the sample original image are adjusted so that the brightness distribution of the adjusted image agrees with the preset standard brightness distribution or falls within an allowable error range. In this embodiment, the brightness mapping relation contains the correspondence between the image's original luminance components and the mapped luminance components; it can be used to adjust the luminance components of the pixels in the sample original image to the mapped luminance components, so that the brightness distribution of the adjusted image conforms to the preset standard brightness distribution. Illustratively, referring to Fig. 5, which is a schematic curve of a brightness mapping relation provided by an embodiment of the present application, the brightness mapping relation can be presented in curve form or in look-up table (LUT) form; this embodiment does not limit this, and Fig. 5 is only one curve example of a brightness mapping relation. In Fig. 5, the horizontal axis of the curve is the image's original luminance component, and the vertical axis is the adjusted luminance component. Optionally, in step 304, a brightness mapping table is generated according to a preset standard brightness histogram and the brightness histogram of the sample original image.
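One common way to realize such a mapping relation is histogram matching via cumulative distributions; this is an assumption for illustration, since the patent only requires that the adjusted distribution approach the preset standard distribution. It directly yields the 256-entry look-up table (LUT) form mentioned above:

```python
import numpy as np

def build_brightness_lut(source_hist, standard_hist):
    """For each original luminance level, pick the level whose position in
    the standard cumulative distribution matches its position in the
    source cumulative distribution (classic histogram matching)."""
    src_cdf = np.cumsum(source_hist) / max(np.sum(source_hist), 1)
    std_cdf = np.cumsum(standard_hist) / max(np.sum(standard_hist), 1)
    lut = np.searchsorted(std_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut

# A dark image with all pixels at level 50, matched against a standard
# distribution concentrated at mid-grey level 128:
src = np.zeros(256); src[50] = 100.0
std = np.zeros(256); std[128] = 100.0
lut = build_brightness_lut(src, std)
```

Applying the relation to an image is then just an index lookup, `lut[y_plane]`, which maps each original luminance component to its mapped luminance component.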
Step 305: adjust the luminance component of each pixel in the sample original image according to the brightness mapping relation, and generate a sample target image corresponding to the sample original image.
Each pixel of the sample original image is traversed to obtain its luminance component; the mapped luminance component corresponding to that luminance component is determined on the basis of the brightness mapping relation, and the luminance component of each pixel is adjusted to the mapped luminance component. Brightness adjustment of the sample original image is thereby achieved, and a sample target image corresponding to the sample original image is generated.
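When the brightness mapping relation is stored as a 256-entry look-up table, the per-pixel traversal reduces to a vectorized table look-up on the luminance channel only, which is what leaves the color components untouched. A minimal sketch, assuming a YUV image and illustrative names:

```python
import numpy as np

def apply_brightness_lut(yuv_image, lut):
    """Apply a 256-entry brightness LUT to the Y channel only.

    yuv_image: H x W x 3 uint8 array with channels in Y, U, V order.
    lut: length-256 uint8 array mapping original Y to adjusted Y.
    The color components (U, V) are copied unchanged, so chrominance
    is preserved exactly.
    """
    adjusted = yuv_image.copy()
    adjusted[..., 0] = lut[yuv_image[..., 0]]  # per-pixel table look-up
    return adjusted
```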
Step 306: according to the sample original image and the sample target image, determine the sample brightness mapping matrix that transforms the sample original image into the sample target image.
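The patent does not spell out the exact form of the sample brightness mapping matrix. One plausible reading, sketched here purely as an assumption, is a per-pixel gain matrix whose element-wise product with the original luminance plane yields the target luminance plane:

```python
import numpy as np

def sample_brightness_mapping_matrix(y_original, y_target, eps=1e-6):
    """Hypothetical form of the sample brightness mapping matrix:
    a per-pixel gain such that y_target = matrix * y_original.

    y_original, y_target: H x W luminance arrays of the sample
    original image and the sample target image.
    eps avoids division by zero for black pixels.
    """
    y_o = y_original.astype(np.float32)
    y_t = y_target.astype(np.float32)
    return y_t / (y_o + eps)  # element-wise gain matrix
```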
Step 307: mark the sample original image according to the sample brightness mapping matrix, and obtain a training sample set.
Step 308: train a preset machine learning model using the training sample set, and obtain the brightness mapping matrix determination model.
Step 309: obtain an image to be processed in a color mode in which color is separated from luminance.
Step 310: input the image to be processed into the pre-trained brightness mapping matrix determination model.
Step 311: determine the brightness mapping matrix corresponding to the image to be processed according to the output of the brightness mapping matrix determination model.
Step 312: adjust the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, and generate a brightness-adjusted target image.
In the image processing method provided by this embodiment of the present application, when brightness adjustment is performed on a sample original image, the luminance component of each pixel in the sample original image is traversed to determine the number of pixels corresponding to each luminance component; the brightness distribution of the sample original image is generated from the luminance components and their corresponding pixel counts; a brightness mapping relation is generated according to the preset standard brightness distribution and the brightness distribution of the sample original image; and the luminance component of each pixel in the sample original image is adjusted according to the brightness mapping relation to obtain the sample target image corresponding to the sample original image. With this technical solution, the luminance component of a sample original image in a color mode that separates color from luminance is extracted, analyzed and processed, and the luminance of the image is adjusted to a preset standard state, which enhances the image, brings out its details, and improves its clarity. Meanwhile, an accurate sample brightness mapping matrix can be obtained in this way, so that the trained brightness mapping matrix determination model has higher accuracy. In addition, brightness adjustment of the image to be processed is performed with the brightness mapping matrix that the model trained in this way outputs for that image, which ensures that during image processing the colors of the image to be processed remain unchanged: the processing has no influence on, and causes no change to, the colors.
Fig. 6 is a schematic flowchart of an image processing method provided by an embodiment of the present application. The method includes the following steps:
Step 601: obtain a sample original image in a color mode in which color is separated from luminance.
Step 602: perform brightness adjustment on the sample original image, and obtain a sample target image corresponding to the sample original image.
Step 603: according to the sample original image and the sample target image, determine the sample brightness mapping matrix that transforms the sample original image into the sample target image.
Step 604: obtain the sample scene information of the sample original image.
Illustratively, the sample scene information of a sample original image may include, but is not limited to, a portrait scene, a night scene, a sunset scene, a daylight scene, a backlit scene, a backlit night scene, and the like.
Step 605: mark the sample original image according to the sample brightness mapping matrix and the sample scene information, and obtain a training sample set.
In this embodiment of the present application, in addition to marking the corresponding sample original image according to the brightness mapping matrix, the corresponding sample original image is also marked according to the sample scene information; that is, the sample scene information of the sample original image is marked on the sample original image. Illustratively, different sample scene information may be marked on the corresponding sample original images with Arabic numerals. For example, a sample original image of a portrait scene may be labeled 0, a sample original image of a night scene labeled 1, a sample original image of a sunset scene labeled 2, a sample original image of a daylight scene labeled 3, a sample original image of a backlit scene labeled 4, and a sample original image of a backlit night scene labeled 5. The sample original images marked with sample scene information and sample brightness mapping matrices serve as the training sample set of the brightness mapping matrix determination model.
Step 606: train a preset machine learning model using the training sample set, and obtain the brightness mapping matrix determination model.
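The Arabic-numeral labeling described in step 605 can be sketched as a simple mapping; the dictionary and function names below are hypothetical, not from the patent:

```python
# Hypothetical label scheme following the numbering used in this embodiment.
SCENE_LABELS = {
    "portrait": 0,
    "night": 1,
    "sunset": 2,
    "daylight": 3,
    "backlit": 4,
    "backlit_night": 5,
}

def make_training_sample(image, scene, brightness_matrix):
    """Pair a sample original image with its two annotations:
    the scene label and the sample brightness mapping matrix."""
    return {
        "image": image,
        "scene_label": SCENE_LABELS[scene],
        "brightness_matrix": brightness_matrix,
    }
```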
Step 607: obtain an image to be processed in a color mode in which color is separated from luminance.
Step 608: perform scene recognition on the image to be processed, and determine the target scene information of the image to be processed.
Illustratively, the scene type of the image to be processed may include, but is not limited to, a portrait scene, a night scene, a sunset scene, a daylight scene, a backlit scene, a backlit night scene, and the like. Scene recognition may be performed on the image to be processed according to factors such as the objects it contains and its ratio of light to dark areas. For example, image recognition may be performed on the image to be processed based on a deep learning model, and the deep learning model may be a convolutional neural network. A deep learning model with a scene recognition function is preset in the terminal; the deep learning model may be obtained by training in a supervised learning manner. For example, a large number of images are acquired, the real scene of each image is marked, and each sample image is input as a training sample into the untrained deep learning model to obtain an output scene. When the output scene of the deep learning model is inconsistent with the real scene, network parameters of the deep learning model, such as weights and offsets, are adjusted backwards according to the difference between the output scene and the real scene, and the above training process is repeated. When the accuracy of the output scene of the deep learning model reaches a preset accuracy, training of the deep learning model is complete.
Optionally, performing scene recognition on the image to be processed and determining the target scene information of the image to be processed includes: inputting the image to be processed into a pre-trained scene recognition model; and determining the target scene information of the image to be processed according to the output of the scene recognition model.
In this embodiment of the present application, the scene recognition model may be understood as a learning model that, after the image to be processed is input, quickly determines the scene type corresponding to the image to be processed. The scene recognition model may be a learning model generated by training on acquired sample images and the scene types of those sample images. It can be understood that the scene recognition model can be generated by learning sample images, the scene types of the sample images, and the correspondence between the two.
Optionally, the scene recognition model is obtained in the following manner: acquiring, by a camera, sample images under at least two scene types; marking the sample images according to their scene types to generate a training sample set; and training a preset machine learning model with the training sample set to obtain the scene recognition model. Illustratively, images under at least two scene types are acquired by the camera as sample images. For example, images under different scenes such as a portrait scene, a night scene, a sunset scene, a daylight scene, a backlit scene and a backlit night scene are acquired by the camera as sample images, and each sample image is marked according to its scene type. For example, sample images of different scene types may be marked with Arabic numerals: an image of a portrait scene may be marked 0, an image of a night scene marked 1, an image of a sunset scene marked 2, an image of a strong-light scene marked 3, an image of a backlit scene marked 6, and an image of a backlit night scene marked 5. The sample images marked with scene types serve as the training sample set of the scene recognition model, and a preset machine learning model is trained with the training sample set to generate the scene recognition model. The preset machine learning model may include machine learning models such as a convolutional neural network model or a long short-term memory network model; this embodiment of the present application does not limit the preset machine learning model.
Illustratively, after the image to be processed is input into the scene recognition model, the scene recognition model analyzes the image to be processed, and target scene information corresponding to the image is determined based on the analysis result. For example, the scene recognition model may output a probability value for each scene type of the image, where the probability values of all scene types sum to 1, and the scene type with the largest probability value may be taken as the target scene information of the image.
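Selecting the target scene information from the per-scene probability values amounts to an argmax over the model output. A minimal sketch, with an illustrative scene list:

```python
import numpy as np

# Hypothetical scene list matching the labels used in this embodiment.
SCENES = ["portrait", "night", "sunset", "daylight", "backlit", "backlit_night"]

def target_scene(probabilities):
    """Pick the target scene information: the scene type whose
    probability value is largest. The probabilities must sum to 1."""
    probs = np.asarray(probabilities, dtype=np.float64)
    assert np.isclose(probs.sum(), 1.0), "scene probabilities must sum to 1"
    return SCENES[int(np.argmax(probs))]
```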
Step 609: input the target scene information and the image to be processed into the pre-trained brightness mapping matrix determination model.
In this embodiment of the present application, the brightness mapping matrix determination model may be understood as a learning model that, after the image to be processed and its target scene information are input, analyzes the image to be processed and its target scene information to quickly determine the brightness mapping matrix corresponding to the image to be processed.
Step 610: determine the brightness mapping matrix corresponding to the image to be processed according to the output of the brightness mapping matrix determination model.
Step 611: adjust the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, and generate a brightness-adjusted target image.
In the image processing method provided by this embodiment of the present application, the sample original image is marked according to the sample brightness mapping matrix and the sample scene information of the sample original image to obtain a training sample set, and a preset machine learning model is trained with the training sample set to obtain the brightness mapping matrix determination model. This effectively improves the training accuracy of the brightness mapping matrix determination model, and the model can accurately determine brightness mapping matrices for images to be processed from different scenes, so that images to be processed from different scenes all reach an optimal effect, which improves the applicability of this image processing approach.
Fig. 7 is a structural block diagram of an image processing apparatus provided by an embodiment of the present application. The apparatus may be implemented by software and/or hardware, and is typically integrated in a terminal; it can adjust image brightness by executing the image processing method. As shown in Fig. 7, the apparatus includes:
an image-to-be-processed obtaining module 701, configured to obtain an image to be processed in a color mode in which color is separated from luminance;
an image-to-be-processed input module 702, configured to input the image to be processed into a pre-trained brightness mapping matrix determination model;
a brightness mapping matrix determining module 703, configured to determine, according to the output of the brightness mapping matrix determination model, the brightness mapping matrix corresponding to the image to be processed;
a target image generation module 704, configured to adjust the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, and generate a brightness-adjusted target image.
The image processing apparatus provided in this embodiment of the present application obtains an image to be processed in a color mode in which color is separated from luminance, inputs the image to be processed into a pre-trained brightness mapping matrix determination model, determines the brightness mapping matrix corresponding to the image to be processed according to the output of the model, and then adjusts the luminance component of each pixel in the image to be processed according to the brightness mapping matrix to generate a brightness-adjusted target image. With this technical solution, the brightness mapping matrix determination model can accurately and rapidly determine the brightness mapping matrix of an image in a color mode that separates color from luminance, and the independent luminance component of the image to be processed is processed on the basis of the brightness mapping matrix without any adjustment of the color components. This ensures that during the above image processing the image colors remain unchanged; the processing has no influence on, and causes no change to, the colors, and the image colors are guaranteed to be clear and undistorted.
Optionally, the apparatus further includes:
a matrix determination model obtaining module, configured to obtain the brightness mapping matrix determination model before the image to be processed in a color mode in which color is separated from luminance is obtained;
wherein the brightness mapping matrix determination model is obtained in the following manner:
obtaining a sample original image in a color mode in which color is separated from luminance;
performing brightness adjustment on the sample original image to obtain a sample target image corresponding to the sample original image;
determining, according to the sample original image and the sample target image, the sample brightness mapping matrix that transforms the sample original image into the sample target image;
marking the sample original image according to the sample brightness mapping matrix to obtain a training sample set;
training a preset machine learning model with the training sample set to obtain the brightness mapping matrix determination model.
Optionally, performing brightness adjustment on the sample original image to obtain a sample target image corresponding to the sample original image includes:
traversing the luminance component of each pixel in the sample original image, and determining the number of pixels corresponding to each luminance component;
generating the brightness distribution of the sample original image according to each luminance component and the number of pixels corresponding to it;
generating a brightness mapping relation according to a preset standard brightness distribution and the brightness distribution of the sample original image;
adjusting the luminance component of each pixel in the sample original image according to the brightness mapping relation, and generating a sample target image corresponding to the sample original image.
Optionally, before the sample original image is marked according to the sample brightness mapping matrix to obtain the training sample set, the method further includes:
obtaining the sample scene information of the sample original image;
and marking the sample original image according to the sample brightness mapping matrix to obtain the training sample set includes:
marking the sample original image according to the sample brightness mapping matrix and the sample scene information to obtain the training sample set.
Correspondingly, the apparatus further includes:
a scene recognition module, configured to perform scene recognition on the image to be processed before the image to be processed is input into the pre-trained brightness mapping matrix determination model, and determine the target scene information of the image to be processed;
and the image-to-be-processed input module is configured to:
input the target scene information and the image to be processed into the pre-trained brightness mapping matrix determination model.
Optionally, the scene recognition module is configured to:
input the image to be processed into a pre-trained scene recognition model;
determine the target scene information of the image to be processed according to the output of the scene recognition model.
Optionally, the color mode in which color is separated from luminance includes any one of the YUV color mode, the Lab color mode, the HSV color mode, and the HSB color mode.
Optionally, the method for generating the image to be processed in the YUV color mode includes:
converting an original signal obtained by an image sensor into an image in the RGB color mode;
generating the image to be processed in the YUV color mode according to the image in the RGB color mode.
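The RGB-to-YUV step can use any of several coefficient sets; the patent does not fix one. A sketch using the common BT.601 analog coefficients (an assumption), whose first output channel is the luminance component Y that the later brightness adjustment targets:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an RGB image to YUV using BT.601 analog coefficients
    (one common convention; the patent does not specify which one).

    rgb: H x W x 3 float array with values in [0, 1].
    Returns an H x W x 3 array in Y, U, V channel order.
    """
    m = np.array([[ 0.299,    0.587,    0.114  ],
                  [-0.14713, -0.28886,  0.436  ],
                  [ 0.615,   -0.51499, -0.10001]])
    return rgb @ m.T  # per-pixel linear transform of the color vector
```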
An embodiment of the present application also provides a storage medium containing computer-executable instructions, the computer-executable instructions being configured, when executed by a computer processor, to execute an image processing method, the method including:
obtaining an image to be processed in a color mode in which color is separated from luminance;
inputting the image to be processed into a pre-trained brightness mapping matrix determination model;
determining the brightness mapping matrix corresponding to the image to be processed according to the output of the brightness mapping matrix determination model;
adjusting the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, and generating a brightness-adjusted target image.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media, such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, and the like; non-volatile memory, such as flash memory or magnetic media (for example a hard disk or optical storage); registers or other similar types of memory elements, and so on. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network (such as the Internet); the second computer system may provide the program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (for example, in different computer systems connected by a network). The storage medium may store program instructions executable by one or more processors (for example, implemented as a computer program).
Certainly, in the storage medium containing computer-executable instructions provided by this embodiment of the present application, the computer-executable instructions are not limited to the image processing operations described above, and may also perform relevant operations in the image processing method provided by any embodiment of the present application.
An embodiment of the present application provides a terminal, in which the image processing apparatus provided by the embodiments of the present application may be integrated. Fig. 8 is a schematic structural diagram of a terminal provided by an embodiment of the present application. The terminal 800 may include a memory 801, a processor 802, and a computer program stored on the memory and executable on the processor; the processor 802 implements the image processing method described in the embodiments of the present application when executing the computer program.
The terminal provided by this embodiment of the present application can, through the brightness mapping matrix determination model, accurately and rapidly determine the brightness mapping matrix of an image in a color mode in which color is separated from luminance, and processes the independent luminance component of the image on the basis of the brightness mapping matrix without any adjustment of the color components. This ensures that during the above image processing the image colors remain unchanged, the processing has no influence on and causes no change to the colors, and the image colors are guaranteed to be clear and undistorted.
Fig. 9 is a schematic structural diagram of another terminal provided by an embodiment of the present application. The terminal may include: a housing (not shown), a memory 901, a central processing unit (CPU) 902 (also called a processor, hereinafter referred to as CPU), a circuit board (not shown), and a power supply circuit (not shown). The circuit board is placed inside the space enclosed by the housing; the CPU 902 and the memory 901 are arranged on the circuit board; the power supply circuit is configured to supply power to each circuit or device of the terminal; the memory 901 is configured to store executable program code; and the CPU 902 runs the computer program corresponding to the executable program code by reading the executable program code stored in the memory 901, so as to perform the following steps:
obtaining an image to be processed in a color mode in which color is separated from luminance;
inputting the image to be processed into a pre-trained brightness mapping matrix determination model;
determining the brightness mapping matrix corresponding to the image to be processed according to the output of the brightness mapping matrix determination model;
adjusting the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, and generating a brightness-adjusted target image.
The terminal further includes: a peripheral interface 903, an RF (radio frequency) circuit 905, an audio circuit 906, a speaker 911, a power management chip 908, an input/output (I/O) subsystem 909, other input/control devices 910, a touch screen 912, and an external port 904; these components communicate through one or more communication buses or signal lines 907.
It should be understood that the illustrated terminal 900 is only one example of a terminal, and the terminal 900 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration. The various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
The terminal for image processing provided in this embodiment is described in detail below, taking a mobile phone as an example.
Memory 901: the memory 901 can be accessed by the CPU 902, the peripheral interface 903, and so on. The memory 901 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage components.
Peripheral interface 903: the peripheral interface 903 can connect the input and output peripherals of the device to the CPU 902 and the memory 901.
I/O subsystem 909: the I/O subsystem 909 can connect input/output peripherals of the device, such as the touch screen 912 and other input/control devices 910, to the peripheral interface 903. The I/O subsystem 909 may include a display controller 9091 and one or more input controllers 9092 for controlling other input/control devices 910. The one or more input controllers 9092 receive electrical signals from, or send electrical signals to, the other input/control devices 910; the other input/control devices 910 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that the input controller 9092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
Touch screen 912: the touch screen 912 is the input interface and output interface between the user terminal and the user, and displays visual output to the user; the visual output may include graphics, text, icons, video, and the like.
The display controller 9091 in the I/O subsystem 909 receives electrical signals from, or sends electrical signals to, the touch screen 912. The touch screen 912 detects contact on the touch screen, and the display controller 9091 converts the detected contact into interaction with user interface objects displayed on the touch screen 912, i.e. realizes human-computer interaction; the user interface objects displayed on the touch screen 912 may be icons for running games, icons for connecting to corresponding networks, and the like. It is worth noting that the device may also include an optical mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
RF circuit 905: mainly used to establish communication between the mobile phone and the wireless network (i.e. the network side) and to realize data reception and transmission between the mobile phone and the wireless network, for example sending and receiving short messages, e-mails, and so on. Specifically, the RF circuit 905 receives and sends RF signals, which are also called electromagnetic signals; the RF circuit 905 converts electrical signals into electromagnetic signals or converts electromagnetic signals into electrical signals, and communicates with communication networks and other devices through the electromagnetic signals. The RF circuit 905 may include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC (coder-decoder) chipset, a subscriber identity module (SIM), and so on.
Audio circuit 906: mainly used to receive audio data from the peripheral interface 903, convert the audio data into an electrical signal, and send the electrical signal to the speaker 911.
Speaker 911: used to restore the voice signal received by the mobile phone from the wireless network through the RF circuit 905 to sound, and to play the sound to the user.
Power management chip 908: used to supply power to, and manage power for, the hardware connected through the CPU 902, the I/O subsystem, and the peripheral interface.
The image processing apparatus, storage medium and terminal provided in the above embodiments can execute the image processing method provided by any embodiment of the present application, and have the corresponding functional modules for executing the method, together with its beneficial effects. For technical details not described in detail in the above embodiments, reference may be made to the image processing method provided by any embodiment of the present application.
Note that the above are only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the present application is not limited to the specific embodiments described here; various obvious changes, readjustments and substitutions can be made by those skilled in the art without departing from the protection scope of the present application. Therefore, although the present application has been described in further detail through the above embodiments, the present application is not limited to the above embodiments, and may also include more other equivalent embodiments without departing from the concept of the present application; the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. An image processing method, characterized by comprising:
obtaining an image to be processed in a color mode in which color is separated from luminance;
inputting the image to be processed into a pre-trained brightness mapping matrix determination model;
determining, according to the output of the brightness mapping matrix determination model, a brightness mapping matrix corresponding to the image to be processed;
adjusting the luminance component of each pixel in the image to be processed according to the brightness mapping matrix, and generating a brightness-adjusted target image.
2. The method according to claim 1, characterized in that, before obtaining the image to be processed in a color mode in which color is separated from luminance, the method further comprises:
obtaining the brightness mapping matrix determination model;
wherein the brightness mapping matrix determination model is obtained in the following manner:
obtaining a sample original image in a color mode in which color is separated from luminance;
performing brightness adjustment on the sample original image to obtain a sample target image corresponding to the sample original image;
determining, according to the sample original image and the sample target image, a sample brightness mapping matrix that transforms the sample original image into the sample target image;
marking the sample original image according to the sample brightness mapping matrix to obtain a training sample set;
training a preset machine learning model with the training sample set to obtain the brightness mapping matrix determination model.
3. The method according to claim 2, characterized in that performing brightness adjustment on the sample original image to obtain a sample target image corresponding to the sample original image comprises:
traversing the luminance component of each pixel in the sample original image, and determining the number of pixels corresponding to each luminance component;
generating a brightness distribution of the sample original image according to each luminance component and the number of pixels corresponding to it;
generating a brightness mapping relation according to a preset standard brightness distribution and the brightness distribution of the sample original image;
adjusting the luminance component of each pixel in the sample original image according to the brightness mapping relation, and generating a sample target image corresponding to the sample original image.
4. The method according to claim 2, characterized in that, before the sample original image is marked according to the sample brightness mapping matrix to obtain the training sample set, the method further comprises:
obtaining sample scene information of the sample original image;
and marking the sample original image according to the sample brightness mapping matrix to obtain the training sample set comprises:
marking the sample original image according to the sample brightness mapping matrix and the sample scene information to obtain the training sample set;
correspondingly, before inputting the image to be processed into the pre-trained brightness mapping matrix determination model, the method further comprises:
performing scene recognition on the image to be processed, and determining target scene information of the image to be processed;
and inputting the image to be processed into the pre-trained brightness mapping matrix determination model comprises:
inputting the target scene information and the image to be processed into the pre-trained brightness mapping matrix determination model.
5. The method according to claim 4, wherein the performing scene recognition on the image to be processed to determine the target scene information of the image to be processed comprises:
inputting the image to be processed into a pre-trained scene recognition model; and
determining the target scene information of the image to be processed according to an output result of the scene recognition model.
6. The method according to any one of claims 1-5, wherein the color mode in which color is separated from luminance comprises any one of a YUV color mode, a Lab color mode, an HSV color mode, and an HSB color mode.
7. The method according to claim 6, wherein a generation method of the image to be processed in the YUV color mode comprises:
converting an original signal acquired by an image sensor into an image in an RGB color mode; and
generating the image to be processed in the YUV color mode according to the image in the RGB color mode.
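Claim 7's conversion chain can be sketched with the BT.601 coefficients; the patent does not fix a coefficient set, so this is one common choice, assuming RGB values normalized to [0, 1]:

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (one common choice; others exist).
BT601 = np.array([[ 0.299,    0.587,    0.114  ],
                  [-0.14713, -0.28886,  0.436  ],
                  [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(rgb):
    """Convert an (H, W, 3) RGB image in [0, 1] to YUV.
    Y carries luminance; U and V carry chrominance only."""
    return rgb @ BT601.T

rgb = np.array([[[1.0, 1.0, 1.0],    # white pixel
                 [0.0, 0.0, 0.0]]])  # black pixel
yuv = rgb_to_yuv(rgb)
```

For neutral grays (R = G = B), U and V come out essentially zero, which is what makes luminance-only adjustment color-safe in this color mode.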
8. An image processing apparatus, comprising:
an image acquisition module, configured to obtain an image to be processed in a color mode in which color is separated from luminance;
an image input module, configured to input the image to be processed into a pre-trained brightness mapping matrix determination model;
a brightness mapping matrix determination module, configured to determine a brightness mapping matrix corresponding to the image to be processed according to an output result of the brightness mapping matrix determination model; and
a target image generation module, configured to adjust the luminance component of each pixel in the image to be processed according to the brightness mapping matrix to generate a target image after brightness adjustment.
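The target image generation module's key property is that only the Y (luminance) component is touched, so chroma, and hence color, is preserved exactly. A minimal sketch for an 8-bit YUV image, with a hypothetical lookup-table form of the brightness mapping matrix:

```python
import numpy as np

def adjust_luminance(yuv, lut):
    """Apply a 256-entry brightness mapping to the Y channel only;
    the U and V (color) channels pass through unchanged."""
    out = yuv.copy()
    out[..., 0] = lut[yuv[..., 0]]
    return out

# Hypothetical brightness mapping: a 20% linear boost, clipped to 8 bits.
lut = np.clip(np.arange(256) * 1.2, 0, 255).astype(np.uint8)
yuv = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
adjusted = adjust_luminance(yuv, lut)
```

Because U and V are copied through untouched, the output's color is bit-identical to the input's, matching the stated point that the adjustment does not alter color components.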
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the image processing method according to any one of claims 1-7.
10. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the image processing method according to claim 1.
CN201910008661.1A 2019-01-04 2019-01-04 Image processing method, image processing device, storage medium and terminal Active CN109741281B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910008661.1A CN109741281B (en) 2019-01-04 2019-01-04 Image processing method, image processing device, storage medium and terminal


Publications (2)

Publication Number Publication Date
CN109741281A true CN109741281A (en) 2019-05-10
CN109741281B CN109741281B (en) 2020-09-29

Family

ID=66363511



Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311711A (en) * 2020-02-16 2020-06-19 拉扎斯网络科技(上海)有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111461996A (en) * 2020-03-06 2020-07-28 合肥师范学院 Rapid and intelligent color matching method for image
CN111754492A (en) * 2020-06-28 2020-10-09 北京百度网讯科技有限公司 Image quality evaluation method and device, electronic equipment and storage medium
CN111784598A (en) * 2020-06-18 2020-10-16 Oppo(重庆)智能科技有限公司 Method for training tone mapping model, tone mapping method and electronic equipment
CN111953977A (en) * 2020-07-09 2020-11-17 西安万像电子科技有限公司 Image transmission method, system and device
CN112102204A (en) * 2020-09-27 2020-12-18 苏州科达科技股份有限公司 Image enhancement method and device and electronic equipment
CN112489144A (en) * 2020-12-14 2021-03-12 Oppo(重庆)智能科技有限公司 Image processing method, image processing apparatus, terminal device, and storage medium
CN112991206A (en) * 2021-03-10 2021-06-18 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN113840134A (en) * 2021-09-03 2021-12-24 大连中科创达软件有限公司 Camera tuning method and device
CN116485645A (en) * 2023-04-13 2023-07-25 北京百度网讯科技有限公司 Image stitching method, device, equipment and storage medium

Citations (10)

Publication number Priority date Publication date Assignee Title
CN104036474A (en) * 2014-06-12 2014-09-10 厦门美图之家科技有限公司 Automatic adjustment method for image brightness and contrast
US20150269717A1 (en) * 2010-03-17 2015-09-24 Texas Instruments Incorporated Scene adaptive brightness/contrast enhancement
CN105825479A (en) * 2016-01-31 2016-08-03 西安电子科技大学 Image enhancement method under ambient light
US9609233B2 (en) * 2011-11-01 2017-03-28 Canon Kabushiki Kaisha Method and system for luminance adjustment of images in an image sequence
CN108364267A (en) * 2018-02-13 2018-08-03 北京旷视科技有限公司 Image processing method, device and equipment
CN108416744A (en) * 2018-01-30 2018-08-17 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and computer readable storage medium
CN108900819A (en) * 2018-08-20 2018-11-27 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108961189A (en) * 2018-07-11 2018-12-07 北京字节跳动网络技术有限公司 Image processing method, device, computer equipment and storage medium
CN109003231A (en) * 2018-06-11 2018-12-14 广州视源电子科技股份有限公司 A kind of image enchancing method, device and display equipment
CN109087269A (en) * 2018-08-21 2018-12-25 厦门美图之家科技有限公司 Low light image Enhancement Method and device


Non-Patent Citations (2)

Title
KAZUYA YOSHINARI et al.: "Color image enhancement in improved HSI color space", 2013 International Symposium on Intelligent Signal Processing and Communication Systems *
HE Linyuan et al.: "Color haze image enhancement algorithm based on luminance feedback", Acta Electronica Sinica *



Similar Documents

Publication Publication Date Title
CN109741281A (en) Image processing method, device, storage medium and terminal
CN109685746A (en) Brightness of image method of adjustment, device, storage medium and terminal
CN109191410B (en) Face image fusion method and device and storage medium
CN109272459B (en) Image processing method, image processing device, storage medium and electronic equipment
US11138700B2 (en) Method for image processing, non-transitory computer readable medium, and electronic device
CN108566516B (en) Image processing method, device, storage medium and mobile terminal
CN108712606B (en) Reminding method, device, storage medium and mobile terminal
CN108234882B (en) Image blurring method and mobile terminal
CN109741279A (en) Image saturation method of adjustment, device, storage medium and terminal
CN109146814A (en) Image processing method, device, storage medium and electronic equipment
CN109741288A (en) Image processing method, device, storage medium and electronic equipment
CN108551552B (en) Image processing method, device, storage medium and mobile terminal
CN109741280A (en) Image processing method, device, storage medium and electronic equipment
CN109298912B (en) Theme color adjusting method and device, storage medium and electronic equipment
CN108765380A (en) Image processing method, device, storage medium and mobile terminal
CN108494996B (en) Image processing method, device, storage medium and mobile terminal
CN109729281A (en) Image processing method, device, storage medium and terminal
CN112669197A (en) Image processing method, image processing device, mobile terminal and storage medium
CN109784252A (en) Image processing method, device, storage medium and electronic equipment
CN111182212A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN108683845A (en) Image processing method, device, storage medium and mobile terminal
CN107292817B (en) Image processing method, device, storage medium and terminal
JP2022027436A (en) Image processing method and device, terminal, and storage medium
CN110933312B (en) Photographing control method and related product
CN111625213B (en) Picture display method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant