CN108965731A - A kind of half-light image processing method and device, terminal, storage medium - Google Patents
- Publication number
- CN108965731A (application CN201810961442.0A)
- Authority
- CN
- China
- Prior art keywords
- picture
- image
- preset
- convolutional neural
- neural networks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
Abstract
An exemplary embodiment of the application discloses a half-light image processing method applied in a terminal, comprising: obtaining the current environment information of a camera that has been turned on and the picture to be captured by the camera; if the current environment information meets a preset condition, inputting the picture to be captured into a trained convolutional neural network model to obtain a processed picture, wherein the convolutional neural network model is trained using short-exposure images and preset normal images; and outputting the processed picture. An exemplary embodiment of the application also provides a terminal and a computer storage medium.
Description
Technical field
This application relates to the technical field of image processing, and in particular, but not exclusively, to a half-light image processing method and device, a terminal, and a storage medium.
Background technique
In the related art, there is a strong demand for restoring half-light images in mobile phone photography. Short-exposure images contain a large amount of noise, while lengthening the exposure to restore a half-light image introduces blur, so a series of processing steps such as denoising, deblurring, and image enhancement is required. Even with these processing steps, however, extreme cases cannot be covered, and the half-light restoration effect cannot be guaranteed.
Summary of the invention
In view of this, exemplary embodiments of the application provide a half-light image processing method and device, a terminal, and a storage medium, to solve at least one of the problems in the related art.
The technical solution of an exemplary embodiment of the application is achieved as follows:
An exemplary embodiment of the application provides a half-light image processing method, comprising:
obtaining the current environment information of a camera and the picture to be captured by the camera;
if the current environment information meets a preset condition, inputting the picture to be captured into a trained convolutional neural network model to obtain a processed picture, wherein the convolutional neural network model is trained using preset short-exposure images and the preset normal images corresponding to the preset short-exposure images; and
outputting the processed picture.
In the above method, a short-exposure image is an image whose quality information does not meet a preset quality condition, a preset normal image is an image whose quality information meets the preset quality condition, and the exposure time of the preset normal image is greater than the exposure time of the short-exposure image; wherein the quality information of an image includes at least one of the following: the exposure time of the image, the pixel values of the image, the resolution of the image, and the sensitivity (ISO).
In the above method, the training process of the convolutional neural network model comprises:
inputting short-exposure images as sample images, together with the normal images corresponding to the short-exposure images, into a convolutional neural network to obtain a first output result; and
adjusting the convolutional neural network according to the first output result and a preset loss function to obtain the trained convolutional neural network model.
In the above method, the method further comprises:
if the current environment information of the camera does not meet the preset condition, outputting the picture to be captured by the camera.
In the above method, if the current environment information meets the preset condition, inputting the picture to be captured into the trained convolutional neural network model to obtain the processed picture comprises:
if the current environment information indicates that the brightness of the current environment is greater than or equal to a preset threshold, determining that the current environment information meets the preset condition, and generating a start request;
responding to the start request and starting the convolutional neural network model; and
inputting the picture to be captured into the started convolutional neural network model to obtain the processed picture.
In the above method, adjusting the convolutional neural network according to the first output result, the preset normal brightness image, and the preset loss function to obtain the trained convolutional neural network model comprises:
determining, using the preset loss function, the difference between the first output result and the preset normal brightness image; and
adjusting the convolutional neural network according to the difference to obtain the trained convolutional neural network model.
In the above method, determining, using the preset loss function, the difference between the first output result and the preset normal brightness image comprises:
determining, using the preset loss function, the difference between the pixel values corresponding to the first output result and the pixel values of the preset normal brightness image.
An exemplary embodiment of the application provides a half-light image processing apparatus, the apparatus comprising a first obtaining module, a first input module, and a first output module, wherein:
the first obtaining module is configured to obtain the current environment information of a camera and the picture to be captured by the camera;
the first input module is configured to, if the current environment information meets a preset condition, input the picture to be captured into a trained convolutional neural network model to obtain a processed picture, wherein the convolutional neural network model is trained using preset short-exposure images and the preset normal images corresponding to the preset short-exposure images; and
the first output module is configured to output the processed picture.
In the above apparatus, a short-exposure image is an image whose quality information does not meet a preset quality condition, a preset normal image is an image whose quality information meets the preset quality condition, and the exposure time of the preset normal image is greater than the exposure time of the short-exposure image; wherein the quality information of an image includes at least one of the following: the exposure time of the image, the pixel values of the image, the resolution of the image, and the sensitivity (ISO).
In the above apparatus, the apparatus further includes a first training module configured to train the convolutional neural network model, wherein the first training module comprises:
a first input unit, configured to input short-exposure images as sample images, together with the normal images corresponding to the short-exposure images, into a convolutional neural network to obtain a first output result; and
a first training unit, configured to adjust the convolutional neural network according to the first output result and the preset loss function to obtain the trained convolutional neural network model.
In the above apparatus, the apparatus further includes:
a second output module, configured to output the picture to be captured by the camera if the current environment information of the camera does not meet the preset condition.
In the above apparatus, the first input module comprises:
a first determination unit, configured to determine that the current environment information meets the preset condition and generate a start request if the current environment information indicates that the brightness of the current environment is greater than or equal to a preset threshold;
a first start unit, configured to respond to the start request and start the convolutional neural network model; and
a second input unit, configured to input the picture to be captured into the started convolutional neural network model to obtain the processed picture.
In the above apparatus, the first training unit comprises:
a first determination subunit, configured to determine, using the preset loss function, the difference between the first output result and the preset normal brightness image; and
a first training subunit, configured to adjust the convolutional neural network according to the difference to obtain the trained convolutional neural network model.
In the above apparatus, the first determination subunit is further configured to determine, using the preset loss function, the difference between the pixel values corresponding to the first output result and the pixel values of the preset normal brightness image.
An exemplary embodiment of the application provides a terminal comprising a memory and a processor, the memory storing a computer program that can be run on the processor, and the processor implementing the steps of the above half-light image processing method when executing the program.
An exemplary embodiment of the application provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the steps of the above half-light image processing method when executed by a processor.
An exemplary embodiment of the application provides a half-light image processing method and device, a terminal, and a storage medium, wherein: first, the current environment information of a camera and the picture to be captured by the camera are obtained; second, if the current environment information meets a preset condition, the picture to be captured is input into a trained convolutional neural network model to obtain a processed picture, wherein the convolutional neural network model is trained using preset short-exposure images and the preset normal images corresponding to the preset short-exposure images; finally, the processed picture is output. In this way, a convolutional neural network model obtained by training with short-exposure images and preset normal images is applied to half-light images, so that images under different half-light conditions can be restored and image quality is greatly improved.
Detailed description of the invention
Fig. 1 is a schematic diagram of an implementation flow of the half-light image processing method according to an exemplary embodiment of the application;
Fig. 2 is a schematic diagram of another implementation flow of the half-light image processing method according to an exemplary embodiment of the application;
Fig. 3A is a schematic diagram of yet another implementation flow of the half-light image processing method according to an exemplary embodiment of the application;
Fig. 3B is a schematic diagram of the composition of a terminal displaying a half-light image according to an exemplary embodiment of the application;
Fig. 3C is a schematic diagram of the composition of a terminal restoring a half-light image according to an exemplary embodiment of the application;
Fig. 4 is a schematic diagram of the composition of the half-light image processing apparatus according to an exemplary embodiment of the application;
Fig. 5 is a schematic diagram of the hardware entity of a terminal according to an exemplary embodiment of the application.
Specific embodiment
The technical solutions in the exemplary embodiments of the application are described clearly and completely below with reference to the accompanying drawings.
An exemplary embodiment of the application proposes a half-light image processing method applied to a mobile terminal with a front or rear camera function; the mobile terminal may be implemented in various forms. For example, the mobile terminal described in an exemplary embodiment of the application may include a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant (Personal Digital Assistant, PDA), and the like. In addition, the functions realized by this method can be realized by a processor of the mobile terminal calling program code, and the program code can be stored in a computer storage medium; thus the mobile terminal includes at least a processor and a storage medium.
Fig. 1 is a schematic diagram of an implementation flow of the half-light image processing method according to an exemplary embodiment of the application. As shown in Fig. 1, the method comprises the following steps:
Step S101: obtain the current environment information of a camera and the picture to be captured by the camera.
Here, the camera is a started camera; that is, in this embodiment the current environment information is obtained after the camera is started. The camera may be a camera carried on a terminal, for example the camera on a mobile phone, or may be a camera serving as a stand-alone terminal. The current environment information may be, for example, the light brightness or the picture background of the current environment. The picture to be captured may be considered the picture within the capture range of the camera after the camera is opened.
Step S102: if the current environment information meets a preset condition, input the picture to be captured into a trained convolutional neural network model to obtain a processed picture.
Here, the picture to be captured can be understood as the picture seen through the camera but not yet stored in the camera. The convolutional neural network model is trained using short-exposure images and preset normal images, and includes an auto-encoder. Step S102 can be understood as follows: the trained convolutional neural network model is deployed on a mobile phone; when the camera of the mobile phone is opened, the brightness of the current environment is detected automatically; if the brightness of the current environment is greater than a preset threshold, the function corresponding to the convolutional neural network model is started automatically, and the picture to be captured is input into the trained convolutional neural network model to obtain the processed picture. If the current environment information of the started camera does not meet the preset condition, the picture to be captured by the camera is output directly. That is, if the brightness of the current environment is less than the preset threshold, the function corresponding to the convolutional neural network model is not started, and the picture to be captured is output directly to the user.
Step S103: output the processed picture.
Here, the processed picture can be understood as an image of better quality obtained after half-light restoration using the convolutional neural network model.
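The flow of steps S101 to S103 can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`enhance`, `process_frame`), the threshold value, and the simple pixel-scaling stand-in for the trained convolutional neural network model are all assumptions made for the sake of the example.

```python
PRESET_THRESHOLD = 50  # assumed brightness threshold (0-255 scale)

def enhance(picture):
    """Stand-in for the trained convolutional neural network model.

    A real deployment would run the picture through the trained
    encoder-decoder; here each pixel is simply brightened so the
    sketch stays self-contained.
    """
    return [min(255, p * 4) for p in picture]

def process_frame(brightness, picture, threshold=PRESET_THRESHOLD):
    """Steps S101-S103: gate the model on the current environment info."""
    if brightness >= threshold:          # preset condition met (step S102)
        return enhance(picture)          # processed picture
    return picture                       # condition not met: output as captured

dim_frame = [10, 20, 30]
print(process_frame(60, dim_frame))  # -> [40, 80, 120], model applied
print(process_frame(40, dim_frame))  # -> [10, 20, 30], raw picture output
```

The key design point mirrored here is that the model is applied only when the environment condition holds; otherwise the captured picture passes through untouched.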
In the half-light image processing method provided by an exemplary embodiment of the application, a convolutional neural network model is obtained by training with short-exposure images and preset normal images; inputting a half-light image into this model can restore images under different half-light conditions and greatly improve image quality, thereby improving the user experience.
In other embodiments, the training process of the convolutional neural network model can be realized by the following steps:
Step S11: input short-exposure images as sample images, together with the normal images corresponding to the short-exposure images, into a convolutional neural network to obtain a first output result.
Here, a short-exposure image is an image whose quality information does not meet a preset quality condition, and a preset normal image is an image whose quality information meets the preset quality condition, the exposure time of the preset normal image being greater than the exposure time of the short-exposure image; the quality information of an image includes at least one of the following: the exposure time of the image, the pixel values of the image, the resolution of the image, and the sensitivity (ISO). For example, the exposure time of a short-exposure image is 1/30 to 1/10 of a second (such a short-exposure picture contains a large amount of noise), while a preset normal image is a clear, noise-free image whose exposure time is greater than that of the short-exposure image (i.e., the image quality the user considers ideal). The pixel values of an image can be understood as how many pixels the image has; the resolution of an image can be understood as how clear it is (the higher the resolution, the clearer the image); and the sensitivity (ISO) can be understood as the sensitivity of the camera to the brightness of the current environment when the image was shot.
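The selection of training pairs described above can be sketched as follows. This is a simplified illustration that treats exposure time as the only quality criterion, although the text lists pixel values, resolution, and ISO as alternatives; the exposure bounds come from the example in the text, and the function names are assumptions.

```python
# Exposure-time bounds taken from the example in the text (1/30 s to 1/10 s).
SHORT_EXPOSURE_RANGE = (1 / 30, 1 / 10)   # seconds

def is_short_exposure(exposure_s):
    """Quality condition reduced to exposure time alone (a simplification)."""
    lo, hi = SHORT_EXPOSURE_RANGE
    return lo <= exposure_s <= hi

def pair_training_images(images):
    """Split captured images into sample images and target images.

    `images` is a list of (name, exposure_seconds) tuples; short
    exposures become training samples, and longer exposures become
    the corresponding preset normal (target) images.
    """
    samples = [n for n, t in images if is_short_exposure(t)]
    targets = [n for n, t in images if t > SHORT_EXPOSURE_RANGE[1]]
    return samples, targets

shots = [("a_short", 1 / 25), ("a_long", 1.0), ("b_short", 1 / 15), ("b_long", 2.0)]
print(pair_training_images(shots))
# -> (['a_short', 'b_short'], ['a_long', 'b_long'])
```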
Step S12: adjust the convolutional neural network according to the first output result and the preset loss function to obtain the trained convolutional neural network model.
Here, step S12 may be as follows: first, using the preset loss function, determine the difference between the first output result and the preset normal brightness image; then, adjust the convolutional neural network according to the difference to obtain the trained convolutional neural network model. That is, first the preset loss function is used to determine the difference between the pixel values corresponding to the first output result and the pixel values of the preset normal brightness image, then this difference is fed back to each layer of the convolutional neural network, and each layer is adjusted according to the difference, so that the quality of the image output by the convolutional neural network matches the quality of the preset normal brightness image.
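Step S12 can be shown in miniature. The sketch below assumes the preset loss function is the mean absolute per-pixel difference (the text only says "difference between pixel values", so the exact loss is an assumption), and replaces the convolutional network with a single-parameter gain model so the feedback-and-adjust loop fits in a few lines.

```python
def pixel_loss(output, target):
    """Assumed preset loss: mean absolute per-pixel difference."""
    return sum(abs(o - t) for o, t in zip(output, target)) / len(output)

def train_gain(short_img, normal_img, steps=200, lr=0.005):
    """Step S12 in miniature: adjust a one-parameter 'network'.

    The network is just `output = gain * pixel`, standing in for the
    convolutional layers; the loss difference is fed back to the
    parameter via a finite-difference gradient step.
    """
    gain = 1.0
    eps = 1e-4
    for _ in range(steps):
        out = [gain * p for p in short_img]
        out_eps = [(gain + eps) * p for p in short_img]
        grad = (pixel_loss(out_eps, normal_img) - pixel_loss(out, normal_img)) / eps
        gain -= lr * grad        # adjust the network according to the difference
    return gain

# A short exposure that is 4x too dark should learn a gain near 4.
short = [10, 20, 30]
normal = [40, 80, 120]
print(f"learned gain ~ {train_gain(short, normal):.1f}")
```

A real training run would replace the gain parameter with the weights of every convolutional layer and use backpropagation instead of a finite difference, but the loop structure (compute output, measure the difference against the normal brightness image, feed the difference back) is the same.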
An exemplary embodiment of the application provides another half-light image processing method. Fig. 2 is a schematic diagram of another implementation flow of the half-light image processing method according to an exemplary embodiment of the application. As shown in Fig. 2, the method comprises the following steps:
Step S201: obtain the current environment information of a camera and the picture to be captured by the camera.
Step S202: if the current environment information indicates that the brightness of the current environment is greater than or equal to a preset threshold, determine that the current environment information meets the preset condition, and generate a start request.
Step S203: respond to the start request and start the convolutional neural network model.
Step S204: input the picture to be captured into the started convolutional neural network model to obtain the processed picture.
Step S205: output the processed picture.
In an exemplary embodiment of the application, when the detected brightness of the current environment of the camera is greater than or equal to the preset threshold, the convolutional neural network model is turned on automatically to process the picture to be captured, and the image after half-light restoration is output, so that the image the user sees is a clear, noise-free, high-quality image.
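Steps S202 to S204 separate the decision (generating a start request) from actually starting the model, which allows an expensive model load to happen only when it is first needed. A minimal sketch of this start-on-request pattern; the class name, method names, and the pixel-doubling stand-in for the model are assumptions.

```python
class DimLightPipeline:
    """Start-on-request pattern from steps S202-S204 (names assumed)."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.model_started = False
        self.start_count = 0

    def _start_model(self):
        # Step S203: respond to the start request. Loading a real
        # network is expensive, so it happens at most once.
        if not self.model_started:
            self.model_started = True
            self.start_count += 1

    def capture(self, brightness, picture):
        # Step S202: check the preset condition and generate a start request.
        if brightness >= self.threshold:
            self._start_model()
            # Step S204: run the (stand-in) model on the picture.
            return [min(255, p * 2) for p in picture]
        return picture  # condition not met: output the picture as captured

pipe = DimLightPipeline(threshold=50)
pipe.capture(60, [5, 10])
pipe.capture(70, [5, 10])
print(pipe.start_count)  # prints 1: the model is started only once
```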
An exemplary embodiment of the application provides a half-light image processing method in which a series of short-exposure images is collected as a training set and the corresponding long-exposure images (preset normal brightness images) are used as target pictures; the training set and the target pictures are input into a deep convolutional encoding-decoding network for training (obtaining a trained convolutional neural network model), which yields a deep convolutional encoding-decoding network model that can restore half-light images. By collecting a series of short exposures and their corresponding long-exposure images, images under different half-light conditions can be restored.
To facilitate understanding of this exemplary embodiment, the network model including an auto-encoder (i.e., the convolutional neural network model) is explained here. When the input is an image, it is better to use a convolutional neural network. The encoder part of a convolutional auto-encoder consists of convolutional layers and max pooling (Max Pooling) layers, the max pooling layers being responsible for spatial down-sampling, while the decoder consists of convolutional layers and up-sampling layers.
An auto-encoder (autoencoder) is a kind of neural network that, after training, attempts to copy its input to its output. Inside the auto-encoder there is a hidden layer h that produces a coded representation of the input. The network can be regarded as consisting of two parts: an encoder represented by a function and a decoder that produces the reconstruction. Here, constraints usually need to be imposed on the auto-encoder so that it can only copy approximately, and can only copy inputs similar to the training data. These constraints force the model to consider which parts of the input data need to be copied preferentially, so the auto-encoder tends to learn useful properties of the data.
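The down-sampling and up-sampling stages described above can be sketched without the convolutional layers. The following is a minimal plain-Python illustration of 2x2 max pooling (the encoder's spatial down-sampling) followed by nearest-neighbour up-sampling (one possible decoder counterpart); a real model would interleave trained convolutions with these operations, and the choice of nearest-neighbour up-sampling here is an assumption.

```python
def max_pool_2x2(img):
    """Encoder side: 2x2 max pooling halves each spatial dimension."""
    h, w = len(img), len(img[0])
    return [[max(img[r][c], img[r][c + 1], img[r + 1][c], img[r + 1][c + 1])
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def upsample_2x(img):
    """Decoder side: nearest-neighbour up-sampling doubles each dimension."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]

pooled = max_pool_2x2(img)       # -> [[6, 8], [14, 16]]
restored = upsample_2x(pooled)   # back to 4x4; fine detail is lost to pooling
print(pooled)
print(restored)
```

The round trip shows why the auto-encoder can only copy approximately: pooling discards spatial detail, so the decoder must reconstruct from a compressed representation, which is exactly the constraint that forces it to learn useful properties of the data.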
Fig. 3A is a schematic diagram of yet another implementation flow of the half-light image processing method according to an exemplary embodiment of the application. As shown in Fig. 3A, the method comprises the following steps:
Step S301: collect a series of short-exposure images as a training set and the corresponding long-exposure images as target pictures.
Here, the exposure time of a short-exposure image may be 1/30 to 1/10 of a second, and a long-exposure image is a normal brightness image, i.e., an image of the quality the user hopes to obtain.
Step S302: use the training set and the target pictures to train a deep convolutional encoding-decoding network model that can restore half-light images, and deploy it on a mobile phone.
Here, the deep convolutional encoding-decoding network model is the convolutional neural network model.
Step S303: open the camera photographing mode of the mobile phone.
Step S304: when the current environment is detected to be a half-light environment, the deep convolutional encoding-decoding network model automatically outputs the half-light restored image.
Here, detecting that the current environment is a half-light environment may be detecting that the brightness of the current environment is greater than the threshold; the deep convolutional encoding-decoding network model is then started automatically, the picture to be captured is automatically input into the model, and after the model's processing the half-light restored image (i.e., the processed picture) is output.
Step S305: the user clicks to take the photo, and the half-light restored image is saved.
In an exemplary embodiment of the application, a series of short-exposure images (with exposure times of 1/30 to 1/10 of a second) is collected as a training set and the corresponding long-exposure images are used as target pictures; a deep convolutional neural network model is trained and deployed on a mobile phone, and when a half-light environment is detected the half-light restored image is output automatically, improving the user's photographing experience.
An exemplary embodiment of the application provides a half-light image processing method for a terminal that can at least install application programs and has a camera. Fig. 3B is a schematic diagram of the composition of a terminal displaying a half-light image according to an exemplary embodiment of the application. As shown in Fig. 3B, camera 401 has been turned on, and picture 402 is the picture obtained in a half-light environment without processing. It can be seen from Fig. 3B that if the picture from the half-light environment is output directly, the resulting image contains a large amount of noise and is unclear; this is not the image the user hopes to obtain.
To solve the above problem, an exemplary embodiment of the application provides a half-light image processing method: when the camera has focused successfully on the object to be shot, the unprocessed image is obtained. As shown in Fig. 3C, camera 501 is open and has focused successfully on the object to be shot; it is then detected whether the brightness of the current environment of the camera is greater than the preset threshold (i.e., whether the current environment is half-light); if so, the picture captured by camera 501 is automatically input into the trained convolutional neural network model to obtain the half-light restored image, i.e., image 502. It can be seen from Fig. 3C that after a half-light image is restored by the trained convolutional neural network model, the noise of the resulting image is significantly reduced and the clarity of the image is higher, meeting the user's demand for image quality when shooting in half-light.
An exemplary embodiment of the application provides a half-light image processing apparatus; the modules included in the apparatus and the units included in each module can be realized by a processor in a computer device, or of course by specific logic circuits. In the course of implementation, the processor may be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), or the like.
Fig. 4 is a schematic diagram of the composition of the half-light image processing apparatus according to an exemplary embodiment of the application. As shown in Fig. 4, the apparatus 40 includes a first obtaining module 41, a first input module 42, and a first output module 43, wherein:
the first obtaining module 41 is configured to obtain the current environment information of a camera and the picture to be captured by the camera;
the first input module 42 is configured to, if the current environment information meets a preset condition, input the picture to be captured into a trained convolutional neural network model to obtain a processed picture, wherein the convolutional neural network model is trained using preset short-exposure images and the preset normal images corresponding to the preset short-exposure images; and
the first output module 43 is configured to output the processed picture.
In the above apparatus, a short-exposure image is an image whose quality information does not meet a preset quality condition, a preset normal image is an image whose quality information meets the preset quality condition, and the exposure time of the preset normal image is greater than the exposure time of the short-exposure image; wherein the quality information of an image includes at least one of the following: the exposure time of the image, the pixel values of the image, the resolution of the image, and the sensitivity (ISO).
In the above apparatus, the apparatus further includes a first training module configured to train the convolutional neural network model, wherein the first training module comprises:
a first input unit, configured to input short-exposure images as sample images, together with the normal images corresponding to the short-exposure images, into a convolutional neural network to obtain a first output result; and
a first training unit, configured to adjust the convolutional neural network according to the first output result and the preset loss function to obtain the trained convolutional neural network model.
In the above apparatus, the apparatus further includes:
a second output module, configured to output the picture to be captured by the camera if the current environment information of the camera does not meet the preset condition.
In the above apparatus, the first input module comprises:
a first determination unit, configured to determine that the current environment information meets the preset condition and generate a start request if the current environment information indicates that the brightness of the current environment is greater than or equal to a preset threshold;
a first start unit, configured to respond to the start request and start the convolutional neural network model; and
a second input unit, configured to input the picture to be captured into the started convolutional neural network model to obtain the processed picture.
In the above apparatus, the first training unit comprises:
a first determination subunit, configured to determine, using the preset loss function, the difference between the first output result and the preset normal brightness image; and
a first training subunit, configured to adjust the convolutional neural network according to the difference to obtain the trained convolutional neural network model.
In the above apparatus, the first determination subunit is further configured to determine, using the preset loss function, the difference between the pixel values corresponding to the first output result and the pixel values of the preset normal brightness image.
The description of the above apparatus embodiment is similar to the description of the above method embodiment, and has beneficial effects similar to those of the method embodiment. For technical details not disclosed in the apparatus embodiment of the application, please refer to the description of the method embodiment of the application.
It should be noted that, in an exemplary embodiment of the application, if the above half-light image processing method is realized in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of an exemplary embodiment of the application, or the part of it that contributes to the related art, can essentially be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the method of each embodiment of the application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (Read Only Memory, ROM), or a magnetic or optical disk. In this way, an exemplary embodiment of the application is not limited to any specific combination of hardware and software.
Fig. 5 is a schematic diagram of a hardware entity of a terminal according to an exemplary embodiment of the present application. As shown in Fig. 5, an exemplary embodiment of the present application provides a terminal 500, comprising:
a processor 51 and a storage medium 52 storing instructions executable by the processor 51, wherein the storage medium 52 performs operations in dependence on the processor 51 through a communication bus 53. When the instructions are executed by the processor 51, the half-light image processing method described in the above embodiments is performed.
It should be noted that, in practical application, the various components in the terminal are coupled through the communication bus 53. It can be understood that the communication bus 53 is used to implement connection and communication between these components. In addition to a data bus, the communication bus 53 also includes a power bus, a control bus, and a status signal bus. However, for the sake of clarity, the various buses are all labeled as the communication bus 53 in Fig. 5.
It should be noted here that the terminal is usually a mobile terminal having a front dual-camera or rear dual-camera function, and the mobile terminal may be implemented in various forms. For example, the mobile terminal described in an exemplary embodiment of the present application may include a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant (Personal Digital Assistant, PDA), and the like.
Correspondingly, an exemplary embodiment of the present application provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the steps in the half-light image processing method provided in the above embodiments are implemented.
It should be noted that the descriptions of the above storage medium and apparatus embodiments are similar to the description of the above method embodiment, and they have beneficial effects similar to those of the method embodiment. For technical details not disclosed in the storage medium and apparatus embodiments of the present application, refer to the description of the method embodiment of the present application.
It should be understood that reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, "in one embodiment" or "in an embodiment" appearing in various places throughout the specification does not necessarily refer to the same embodiment. Furthermore, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the exemplary embodiments of the present application. The serial numbers of the above exemplary embodiments of the present application are merely for description and do not represent the superiority or inferiority of the embodiments.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements not only includes those elements but also includes other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or apparatus. Without more restrictions, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined, or may be integrated into another system, or some features may be ignored or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the exemplary embodiments of the present application.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may separately serve as one unit, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the present application is implemented in the form of a software function module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of an exemplary embodiment of the present application, or the part thereof that contributes to the related art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a terminal to perform all or part of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The above are only embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person familiar with the technical field can easily conceive of changes or replacements within the technical scope disclosed in the present application, and such changes or replacements shall all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A half-light image processing method, characterized in that the method comprises:
obtaining current environment information of a camera and a picture to be captured corresponding to the camera;
if the current environment information meets a preset condition, inputting the picture to be captured into a trained convolutional neural network model to obtain a processed picture, wherein the convolutional neural network model is trained by using a preset short-exposure image and a preset normal image corresponding to the preset short-exposure image; and
outputting the processed picture.
2. The method according to claim 1, characterized in that the short-exposure image is an image whose quality information does not meet a preset quality condition, the preset normal image is an image whose quality information meets the preset quality condition, and the exposure time of the preset normal image is greater than the exposure time of the short-exposure image; wherein the quality information of an image includes at least one of the following: the exposure time of the image, the pixel values of the image, the resolution of the image, and the sensitivity (ISO).
3. The method according to claim 1, characterized in that the training process of the convolutional neural network model comprises:
inputting a short-exposure image as a sample image, together with the normal image corresponding to the short-exposure image, into a convolutional neural network to obtain a first output result; and
adjusting the convolutional neural network according to the first output result and a preset loss function to obtain the trained convolutional neural network model.
4. The method according to claim 1, characterized in that the method further comprises:
if the current environment information of the camera does not meet the preset condition, outputting the picture to be captured corresponding to the camera.
5. The method according to claim 1, characterized in that if the current environment information meets the preset condition, inputting the picture to be captured into the trained convolutional neural network model to obtain the processed picture comprises:
if the current environment information indicates that the brightness of the current environment is greater than or equal to a preset threshold, determining that the current environment information meets the preset condition, and generating a start request;
responding to the start request and starting the convolutional neural network model; and
inputting the picture to be captured into the started convolutional neural network model to obtain the processed picture.
6. The method according to claim 3, characterized in that adjusting the convolutional neural network according to the first output result and the preset loss function to obtain the trained convolutional neural network model comprises:
determining, by using the preset loss function, the difference between the first output result and the normal-brightness image; and
adjusting the convolutional neural network according to the difference to obtain the trained convolutional neural network model.
7. The method according to claim 6, characterized in that determining, by using the preset loss function, the difference between the first output result and the preset normal-brightness image comprises:
determining, by using the preset loss function, the difference between the pixel values corresponding to the first output result and the pixel values of the preset normal-brightness image.
8. A half-light image processing apparatus, characterized in that the apparatus comprises a first obtaining module, a first input module, and a first output module, wherein:
the first obtaining module is configured to obtain current environment information of a started camera and a picture to be captured corresponding to the camera;
the first input module is configured to, if the current environment information meets a preset condition, input the picture to be captured into a trained convolutional neural network model to obtain a processed picture, wherein the convolutional neural network model is trained by using a short-exposure image and a preset normal image; and
the first output module is configured to output the processed picture.
9. A terminal, comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor, when executing the program, implements the steps in the half-light image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps in the method according to any one of claims 1 to 7.
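Claims 1, 4, and 5 together describe a simple runtime branch: inspect the current environment information, and either run the picture through the trained network or output it unchanged. The sketch below mirrors that control flow; the `enhance` function is only a stand-in for the trained convolutional neural network model, and the threshold value and comparison direction follow the claim text as translated, so all names and values here are illustrative assumptions:

```python
import numpy as np

BRIGHTNESS_THRESHOLD = 50  # preset threshold (illustrative value)

def enhance(frame):
    # Stand-in for the trained convolutional neural network model:
    # a simple gain plus clipping, purely illustrative.
    return np.clip(frame * 4.0, 0, 255)

def process_frame(frame, env_brightness):
    # Claim 5 (as translated): the preset condition is met when the
    # current environment brightness is >= the preset threshold.
    if env_brightness >= BRIGHTNESS_THRESHOLD:
        # Generate/respond to the start request, then input the
        # picture into the started model (claim 5).
        return enhance(frame)
    # Condition not met: output the picture to be captured as-is (claim 4).
    return frame

dark_frame = np.full((2, 2), 30.0)
print(process_frame(dark_frame, env_brightness=60)[0, 0])  # 120.0
print(process_frame(dark_frame, env_brightness=10)[0, 0])  # 30.0
```

The branch keeps the model off the hot path when the preset condition fails, which matches the claims' separation between "start the model and process" and "output the raw picture".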
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810961442.0A CN108965731A (en) | 2018-08-22 | 2018-08-22 | A kind of half-light image processing method and device, terminal, storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108965731A true CN108965731A (en) | 2018-12-07 |
Family
ID=64473816
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810961442.0A Pending CN108965731A (en) | 2018-08-22 | 2018-08-22 | A kind of half-light image processing method and device, terminal, storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108965731A (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110062173A (en) * | 2019-03-15 | 2019-07-26 | 北京旷视科技有限公司 | Image processor and image processing method, equipment, storage medium and intelligent terminal |
CN110189354A (en) * | 2019-04-18 | 2019-08-30 | 北京迈格威科技有限公司 | Image processing method and image processor, image processing equipment and medium |
CN110266939A (en) * | 2019-05-27 | 2019-09-20 | 联想(上海)信息技术有限公司 | Display methods and electronic equipment, storage medium |
CN110490118A (en) * | 2019-08-14 | 2019-11-22 | 厦门美图之家科技有限公司 | Image processing method and device |
CN111064904A (en) * | 2019-12-26 | 2020-04-24 | 深圳深知未来智能有限公司 | Dark light image enhancement method |
CN111325679A (en) * | 2020-01-08 | 2020-06-23 | 深圳深知未来智能有限公司 | Method for enhancing dark light image from Raw to Raw |
CN111383188A (en) * | 2018-12-29 | 2020-07-07 | Tcl集团股份有限公司 | Image processing method, system and terminal equipment |
CN111383171A (en) * | 2018-12-27 | 2020-07-07 | Tcl集团股份有限公司 | Picture processing method, system and terminal equipment |
CN111953888A (en) * | 2019-05-16 | 2020-11-17 | 武汉Tcl集团工业研究院有限公司 | Dim light imaging method and device, computer readable storage medium and terminal equipment |
CN112335228A (en) * | 2019-11-22 | 2021-02-05 | 深圳市大疆创新科技有限公司 | Image processing method, image acquisition device, movable platform and storage medium |
WO2021052111A1 (en) * | 2019-09-19 | 2021-03-25 | 华为技术有限公司 | Image processing method and electronic device |
CN113052768A (en) * | 2019-12-27 | 2021-06-29 | 武汉Tcl集团工业研究院有限公司 | Method for processing image, terminal and computer readable storage medium |
CN113273180A (en) * | 2019-02-27 | 2021-08-17 | 华为技术有限公司 | Image processing apparatus and method |
CN113379610A (en) * | 2020-03-10 | 2021-09-10 | Tcl科技集团股份有限公司 | Training method of image processing model, image processing method, medium, and terminal |
CN113491099A (en) * | 2020-09-28 | 2021-10-08 | 深圳市大疆创新科技有限公司 | Image processing method, device, system, platform and computer readable storage medium |
CN113542591A (en) * | 2021-06-02 | 2021-10-22 | 惠州Tcl移动通信有限公司 | Time-lapse shooting processing method and device, mobile terminal and storage medium |
CN113766143A (en) * | 2019-01-25 | 2021-12-07 | 原相科技股份有限公司 | Light detection chip, image processing device and operation method thereof |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030020826A1 (en) * | 2001-06-25 | 2003-01-30 | Nasser Kehtarnavaz | Automatic white balancing via illuminant scoring autoexposure by neural network mapping |
CN107463052A (en) * | 2017-08-30 | 2017-12-12 | 北京小米移动软件有限公司 | Shoot exposure method and device |
CN107491771A (en) * | 2017-09-21 | 2017-12-19 | 百度在线网络技术(北京)有限公司 | Method for detecting human face and device |
CN109005364A (en) * | 2018-08-13 | 2018-12-14 | Oppo广东移动通信有限公司 | Image formation control method, device, electronic equipment and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108965731A (en) | A kind of half-light image processing method and device, terminal, storage medium | |
JP6267224B2 (en) | Method and system for detecting and selecting the best pictures | |
KR20190143785A (en) | Image processing method and apparatus, and electronic device | |
US20200134787A1 (en) | Image processing apparatus and method | |
US8731325B2 (en) | Automatic generation of a photo guide | |
CN111491102B (en) | Detection method and system for photographing scene, mobile terminal and storage medium | |
Brummer et al. | Natural image noise dataset | |
CN110248098A (en) | Image processing method, device, storage medium and electronic equipment | |
WO2019184654A1 (en) | Method and device for training neural network model, and method and device for generating time-lapse photography video | |
Aksoy et al. | A dataset of flash and ambient illumination pairs from the crowd | |
CN102907082A (en) | Image pickup device, image processing device, image processing method, and image processing program | |
JP2001503218A (en) | Systematic image group formation method | |
CN110148088B (en) | Image processing method, image rain removing method, device, terminal and medium | |
WO2023065523A1 (en) | Isp-based image processing method and apparatus, storage medium, and camera device | |
CN102693528A (en) | Noise suppression in low light images | |
CN113160178A (en) | High dynamic range ghost image removing imaging system and method based on attention module | |
Schewe | The digital negative: raw image processing in lightroom, camera Raw, and photoshop | |
CN114331902B (en) | Noise reduction method and device, electronic equipment and medium | |
CN110717864B (en) | Image enhancement method, device, terminal equipment and computer readable medium | |
CN110838088B (en) | Multi-frame noise reduction method and device based on deep learning and terminal equipment | |
Alghamdi et al. | Transfer deep learning for reconfigurable snapshot HDR imaging using coded masks | |
CN107180417B (en) | Photo processing method and device, computer readable storage medium and electronic equipment | |
WO2023151511A1 (en) | Model training method and apparatus, image moire removal method and apparatus, and electronic device | |
Fatima | AI in photography: scrutinizing implementation of super-resolution techniques in photo-editors | |
CN108495038A (en) | Image processing method, device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181207 |