CN109102460A - Image processing method, image processing apparatus and terminal device - Google Patents


Info

Publication number
CN109102460A
CN109102460A (application CN201810990824.6A)
Authority
CN
China
Prior art keywords
image
neural network
network model
training
lens reflex
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810990824.6A
Other languages
Chinese (zh)
Inventor
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810990824.6A
Publication of CN109102460A
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present application provides an image processing method, an image processing apparatus and a terminal device. The method includes: obtaining an image to be processed; inputting the image to be processed into a trained neural network model, and obtaining the image with the shooting effect of a single-lens reflex (SLR) camera output by the trained neural network model; and displaying the image output by the trained neural network model. The present application addresses the problem of how to obtain an image with an SLR shooting effect when no SLR camera is available.

Description

Image processing method, image processing apparatus and terminal device
Technical field
The present application belongs to the technical field of image processing, and in particular relates to an image processing method, an image processing apparatus, a terminal device and a computer-readable storage medium.
Background technique
At present, a single-lens reflex (SLR) camera on the market can change lenses to suit different shooting requirements and has many accessories; it also has a faster, more capable central processing unit. Compared with an ordinary camera, an SLR camera therefore produces better image quality and can take photos of great aesthetic appeal. However, an SLR camera is heavy and its accessories are numerous, so it is inconvenient to carry, and users rarely bring one when they go out. Beautiful scenery, on the other hand, can be found everywhere. How to obtain an image with an SLR shooting effect when no SLR camera is available is thus a technical problem that urgently needs to be solved.
Summary of the invention
In view of this, the present application provides an image processing method, an image processing apparatus, a terminal device and a computer-readable storage medium, so that an image with an SLR shooting effect can still be obtained even when no SLR camera is available.
A first aspect of the present application provides an image processing method, including:
obtaining an image to be processed;
inputting the image to be processed into a trained neural network model, and obtaining the image with an SLR shooting effect output by the trained neural network model;
displaying the image output by the trained neural network model.
A second aspect of the present application provides an image processing apparatus, including:
an image obtaining module, configured to obtain an image to be processed;
an SLR-effect conversion module, configured to input the image to be processed into a trained neural network model and obtain the image with an SLR shooting effect output by the trained neural network model;
a display module, configured to display the image output by the trained neural network model.
A third aspect of the present application provides a terminal device, including a memory, a processor and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the method of the first aspect.
A fifth aspect of the present application provides a computer program product including a computer program, where the computer program, when executed by one or more processors, implements the steps of the method of the first aspect.
Thus, the present application provides an image processing method. First, an image to be processed is obtained, for example an image captured by the user through a mobile phone camera; second, the image to be processed is converted, using a trained neural network model, into an image with an SLR shooting effect; finally, the image output by the trained neural network model is displayed. The trained neural network model is a neural network model trained in advance, used to convert an image input to the model into an image with an SLR shooting effect. The technical solution provided herein can therefore obtain an image with an SLR shooting effect using the pre-trained neural network model, without relying on an SLR camera, and thus solves the technical problem of how to obtain an image with an SLR shooting effect when no SLR camera is available.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image processing method provided in Embodiment 1 of the present application;
Fig. 2 is a schematic flowchart of a training process of a neural network model provided in Embodiment 1 of the present application;
Fig. 3 is a schematic diagram of a training process of a neural network model provided in Embodiment 1 of the present application;
Fig. 4 is a schematic flowchart of a training process of another neural network model provided in Embodiment 2 of the present application;
Fig. 5 is a schematic diagram of a training process of a discrimination model provided in Embodiment 2 of the present application;
Fig. 6 is a schematic diagram of a training process of another neural network model provided in Embodiment 2 of the present application;
Fig. 7 is a schematic structural diagram of an image processing apparatus provided in Embodiment 3 of the present application;
Fig. 8 is a schematic structural diagram of a terminal device provided in Embodiment 4 of the present application.
Specific embodiment
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The image processing method provided by the embodiments of the present application is applicable to a terminal device. Illustratively, the terminal device includes, but is not limited to, a smartphone, a tablet computer, a learning machine, a smart wearable device, and the like.
It should be understood that, as used in this specification and the appended claims, the term "include" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terms used in this specification are merely for the purpose of describing specific embodiments and are not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application, the terms "first", "second" and the like are used only to distinguish the description and should not be understood as indicating or implying relative importance.
To illustrate the above technical solutions of the present application, specific embodiments are described below.
Embodiment one
The image processing method provided in Embodiment 1 of the present application is described below. Referring to Fig. 1, the image processing method in Embodiment 1 of the present application includes:
In step S101, an image to be processed is obtained;
In this embodiment of the present application, an image to be processed is first obtained. The image to be processed may be an image captured by the user through a local camera, for example an image captured after the user starts the camera application in a mobile phone and taps the shutter button; or it may be a frame of preview image collected by the camera after the terminal device starts the camera or video camera, for example a frame of preview image collected after the user starts the camera application in a mobile phone; or it may be an image received by the user through another application, for example an image sent by another WeChat contact that the user receives in WeChat; or it may be an image the user downloads from the Internet, for example an image downloaded in a browser over a public operator network; or it may be a frame of a video, for example a frame of a TV series the user is watching. The source of the image to be processed is not limited here.
In step S102, the image to be processed is input into a trained neural network model, and the image with an SLR shooting effect output by the trained neural network model is obtained;
After the image to be processed is obtained, it may be directly input into the trained neural network model. Alternatively, prompt information may first be output, asking the user whether he or she agrees to convert the image to be processed into an image with an SLR shooting effect; feedback information input by the user, indicating whether the user agrees to the conversion, is then received, and only if the feedback information indicates agreement is the image to be processed input into the trained neural network model. The trained neural network model is a neural network model deployed in the terminal device before it leaves the factory, and is used to convert an image input to the model into an image with an SLR shooting effect.
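The optional confirmation flow of step S102 can be sketched in a few lines of Python. This is only an illustrative sketch under assumed names: `process_image`, `model` and `confirm` do not appear in the application, and the trained model is stood in for by any callable.

```python
def process_image(image, model, confirm=None):
    """Sketch of step S102: optionally prompt the user, then feed the
    image to the trained model. `model` is any callable mapping an input
    image to an SLR-style image; `confirm` is an optional callable that
    returns the user's feedback (True means the user agrees)."""
    if confirm is not None and not confirm():
        return image  # user declined: leave the image unchanged
    return model(image)

# Usage with hypothetical stand-ins for the model and the prompt:
styled = process_image("raw.jpg", model=lambda img: "slr_" + img,
                       confirm=lambda: True)
print(styled)  # slr_raw.jpg
```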
Illustratively, the training process of the neural network model may be as shown in Fig. 2 and includes steps S201-S204:
In step S201, non-SLR sample images and the SLR sample image corresponding to each non-SLR sample image are selected from a sample database, where each non-SLR sample image is an image shot by a non-SLR camera and each SLR sample image is an image shot by an SLR camera;
In this embodiment of the present application, the neural network model needs to be trained in advance using the sample images in the sample database. The sample database contains multiple non-SLR sample images (i.e. images shot by a non-SLR camera) and the SLR sample image corresponding to each non-SLR sample image (i.e. an image shot by an SLR camera), where each non-SLR sample image and its corresponding SLR sample image have the same picture content. For example, if a non-SLR sample image A is an image of a building somewhere, the SLR sample image corresponding to the non-SLR sample image A is still an image of that building shot at the same angle. In addition, to ensure that the image with an SLR shooting effect output by the trained neural network model has greater aesthetic appeal, each SLR sample image in the sample database may be an image shot by a professional photographer with an SLR camera.
In this embodiment of the present application, the sample images contained in the sample database may differ depending on the terminal device in which the trained neural network model is to be deployed. For example, if the trained neural network model is to be deployed in a smartphone of model X, each non-SLR sample image may be an image captured by the smartphone of model X, and the size of each SLR sample image is the same as the size of the images captured by the smartphone of model X. Assuming the images captured by the smartphone of model X are 1000 × 2000, each non-SLR sample image is a 1000 × 2000 image captured by the smartphone of model X, and each SLR sample image is a 1000 × 2000 image captured by an SLR camera (if the size of the images shot by the SLR camera is not 1000 × 2000, a 1000 × 2000 image can be obtained by cropping, as long as the cropping preserves the same picture content between the SLR sample image and the corresponding non-SLR sample image).
In this step, non-SLR sample images and the SLR sample image corresponding to each non-SLR sample image need to be selected from the sample database. In this embodiment of the present application, in order to train the neural network model better, as many non-SLR sample images as possible may be selected; however, step S201 may also select a single non-SLR sample image, which is not limited in the present application. As shown in Fig. 3, assuming the sample database 301 contains 4 non-SLR sample images and 4 SLR sample images, in step S201 we may select non-SLR sample image A, non-SLR sample image B and the corresponding SLR sample image A1 and SLR sample image B1 from the sample database 301 to train the neural network model 302.
In step S202, each non-SLR sample image is separately input into an initial neural network model, so that the initial neural network model converts each non-SLR sample image into a generated image with an SLR shooting effect;
In this embodiment of the present application, an initial neural network model for converting an input image into an image with an SLR shooting effect is first established, and each non-SLR sample image selected in step S201 is separately input into this initial neural network model, so as to obtain each generated image output by the initial neural network model. As shown in Fig. 3, if the non-SLR sample images selected in step S201 are non-SLR sample image A and non-SLR sample image B, then non-SLR sample image A is input into the initial neural network model 302 to obtain the generated image A2 it outputs, and non-SLR sample image B is input into the initial neural network model 302 to obtain the generated image B2 it outputs.
In step S203, the generated image corresponding to each non-SLR sample image is matched for similarity against the corresponding SLR sample image, and the proportion of non-SLR sample images whose similarity exceeds a similarity threshold among all selected non-SLR sample images is counted; this proportion is determined as the generation accuracy of the current neural network model;
In this embodiment of the present application, the image features of each generated image and of each SLR sample image, such as texture features, color features, brightness and/or edge features, may be extracted separately. Then, the image features of the generated image corresponding to each non-SLR sample image are matched for similarity against those of the corresponding SLR sample image, and the similarity between the generated image and the corresponding SLR sample image is calculated. As shown in Fig. 3, generated image A2 is matched against SLR sample image A1 to obtain their similarity, and generated image B2 is matched against SLR sample image B1 to obtain their similarity. Assuming the similarity between generated image A2 and SLR sample image A1 is 60%, the similarity between generated image B2 and SLR sample image B1 is 95%, and the similarity threshold is 90%, the proportion of non-SLR sample images whose similarity exceeds the threshold is 0.5, so the generation accuracy of the initial neural network model is also 0.5.
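The matching-and-counting logic of step S203 can be sketched as follows. The application does not fix a particular feature representation or similarity metric, so cosine similarity over feature vectors is used here purely as an assumed stand-in; `generation_accuracy` reproduces the worked example above (similarities of 60% and 95% against a 90% threshold give 0.5).

```python
def cosine_similarity(a, b):
    """Similarity between two feature vectors (an assumed metric; the
    description only names texture, color, brightness and edge features)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def generation_accuracy(similarities, threshold=0.9):
    """Step S203: the fraction of generated images whose similarity to
    the corresponding SLR sample image exceeds the threshold."""
    hits = sum(1 for s in similarities if s > threshold)
    return hits / len(similarities)

print(generation_accuracy([0.60, 0.95], threshold=0.90))  # 0.5
```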
In step S204, the parameters of the current neural network model are adjusted continuously, and each selected non-SLR sample image is again separately input into the parameter-adjusted neural network model, until the generation accuracy of the current neural network model exceeds a preset accuracy; the current neural network model is then taken as the trained neural network model;
Under normal circumstances, the similarity between the generated images output by the initial neural network model and the corresponding SLR sample images is rather small. It is therefore necessary to adjust the parameters of the initial neural network model, input each non-SLR sample image selected in step S201 into the parameter-adjusted neural network model again, and again match each generated image output by the parameter-adjusted neural network model for similarity against the corresponding SLR sample image selected in step S201, so as to obtain the generation accuracy of the parameter-adjusted neural network model. The parameters of the current neural network model are adjusted continuously until its generation accuracy exceeds the preset accuracy, and the current neural network model is then taken as the trained neural network model. As shown in Fig. 3, assuming the preset accuracy is 0.9, if the generation accuracy of the initial neural network model is 0.5, the parameters of the current neural network model need to be adjusted, and non-SLR sample image A, non-SLR sample image B, SLR sample image A1 and SLR sample image B1 selected in step S201 are reused to train the parameter-adjusted neural network model. Common parameter-adjustment methods include stochastic gradient descent (SGD) and the momentum update; the method used for parameter adjustment is not limited here.
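The adjust-and-retest loop of step S204 can be illustrated with a toy one-parameter model. Everything here is an assumption for illustration: a scalar parameter, a made-up accuracy surface peaking at w = 2.0, and the momentum update the description names as one possible parameter-adjustment method.

```python
def momentum_step(param, grad, velocity, lr=0.1, beta=0.9):
    """One momentum update (the 'Momentum update' named above)."""
    velocity = beta * velocity - lr * grad
    return param + velocity, velocity

def accuracy_of(w):
    """Hypothetical generation-accuracy surface peaking at w = 2.0, and
    the gradient of the corresponding loss 0.5 * (w - 2) ** 2."""
    return max(0.0, 1.0 - abs(w - 2.0)), (w - 2.0)

def train_until_accurate(target=0.9, max_steps=1000):
    """Step S204 in miniature: keep adjusting the parameter and
    re-evaluating until generation accuracy exceeds the preset value."""
    w, v = 0.0, 0.0
    for _ in range(max_steps):
        acc, grad = accuracy_of(w)
        if acc > target:
            break
        w, v = momentum_step(w, grad, v)
    return w

w = train_until_accurate()
```

The same loop shape applies unchanged when `w` is the full parameter vector of a convolutional generator and `accuracy_of` runs the similarity matching of step S203.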
Furthermore, under normal circumstances, the trained neural network model can only process images of a fixed size (the image size comprises the number of pixels in the width direction and the number of pixels in the height direction, usually expressed as "number of pixels in the width direction × number of pixels in the height direction"). Therefore, in this embodiment of the present application, in order to ensure that the trained neural network model can correctly process the image to be processed obtained in step S101, it may first be determined whether the size of the image to be processed meets a preset size, i.e. whether it is a size the trained neural network model is capable of processing. If so, the image to be processed can be input directly into the trained neural network model; if not, the image to be processed needs to undergo size correction (for example, up-sampling, down-sampling, cropping and/or rotation) so as to be corrected into an image meeting the preset size, and the image obtained after size correction is then input into the trained neural network model.
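The size check above can be sketched as follows. The 1000 × 2000 target and the centred crop are illustrative choices only; the description allows up-sampling, down-sampling, cropping and/or rotation, and `fit_to_model_size` is a hypothetical helper name.

```python
def fit_to_model_size(width, height, target_w=1000, target_h=2000):
    """Decide how to correct an input image so it matches the fixed size
    the trained model expects. Returns a tag plus either a crop box
    (left, top, right, bottom) or a resize target."""
    if (width, height) == (target_w, target_h):
        return ("ok", (0, 0, width, height))
    if width >= target_w and height >= target_h:
        # Large enough on both sides: take a centred crop.
        left = (width - target_w) // 2
        top = (height - target_h) // 2
        return ("crop", (left, top, left + target_w, top + target_h))
    # Otherwise fall back to resampling to the target size.
    return ("resize", (target_w, target_h))

print(fit_to_model_size(1200, 2400))  # ('crop', (100, 200, 1100, 2200))
```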
In step S103, the image output by the trained neural network model is displayed;
In this embodiment of the present application, after the image to be processed has been processed by the trained neural network model, the image with an SLR shooting effect output by the trained neural network model is displayed on the display screen of the terminal device, so that the user can view the image.
Thus, Embodiment 1 of the present application provides an image processing method in which a trained neural network model converts an image to be processed into an image with an SLR shooting effect, where the trained neural network model is a neural network model trained in advance for converting an image input to the model into an image with an SLR shooting effect. The technical solution provided herein can therefore obtain an image with an SLR shooting effect using the pre-trained neural network model, without relying on an SLR camera, and solves the technical problem of how to obtain an image with an SLR shooting effect when no SLR camera is available.
Embodiment two
The training process of another neural network model provided in Embodiment 2 of the present application is described below. Referring to Fig. 4, the training process of the neural network model in Embodiment 2 of the present application includes:
In step S401, non-SLR sample images and the SLR sample image corresponding to each non-SLR sample image are selected from a sample database, where each non-SLR sample image is an image shot by a non-SLR camera and each SLR sample image is an image shot by an SLR camera;
In step S402, each non-SLR sample image is separately input into an initial neural network model, so that the initial neural network model converts each non-SLR sample image into a generated image with an SLR shooting effect;
In Embodiment 2 of the present application, the above steps S401-S402 are the same as steps S201-S202 in Embodiment 1. For details, refer to the description of Embodiment 1, which is not repeated here.
In step S403, the generated image corresponding to each non-SLR sample image and the corresponding SLR sample image are input as an image group into a trained discrimination model, so that the trained discrimination model judges, based on the SLR sample image in each image group, whether the generated image in that image group is correct;
In Embodiment 2 of the present application, a discrimination model may be trained in advance, before the neural network model is trained, so as to obtain a trained discrimination model; the trained discrimination model is used to judge whether each generated image output by the current neural network model is correct.
In Embodiment 2 of the present application, the initial discrimination model may be trained using the above initial neural network model and the above sample database, so as to obtain the trained discrimination model. Specifically, the training process of the trained discrimination model may be as shown in Fig. 5. Assume the sample database contains non-SLR sample image A and its corresponding SLR sample image A1, non-SLR sample image B and its corresponding SLR sample image B1, non-SLR sample image C and its corresponding SLR sample image C1, and non-SLR sample image D and its corresponding SLR sample image D1. First, any number of non-SLR sample images are selected from the sample database as the input of the initial neural network model, and the corresponding generated images are obtained; as shown in Fig. 5, non-SLR sample image A and non-SLR sample image B may be selected and input into the initial neural network model to obtain generated image A2 and generated image B2. Second, each generated image and its corresponding SLR sample image are taken as an image group, and the label of that image group is set to "incorrect"; as shown in Fig. 5, generated image A2 and the corresponding SLR sample image A1 form an image group labelled "incorrect", and likewise generated image B2 and the corresponding SLR sample image B1 form an image group also labelled "incorrect". Third, any number of SLR sample images are selected from the sample database, and two identical SLR sample images are taken as an image group whose label is set to "correct"; as shown in Fig. 5, SLR sample image B1 and SLR sample image C1 are selected from the sample database, two copies of SLR sample image B1 form an image group labelled "correct", and two copies of SLR sample image C1 likewise form an image group labelled "correct". Then, each image group with its label set as above is input into the initial discrimination model, so that the initial discrimination model judges whether each image group is correct; the discrimination results output by the initial discrimination model are compared with the labels of the image groups to judge whether the discrimination accuracy of the initial discrimination model reaches a preset accuracy threshold, and the parameters of the current discrimination model are adjusted continuously until the discrimination accuracy of the current discrimination model reaches the preset accuracy threshold.
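The construction of the labelled image groups described above can be sketched as follows; `build_discriminator_batches` is a hypothetical name, and the letters mirror the Fig. 5 example (generated images A2/B2 paired with SLR samples A1/B1, plus duplicated SLR samples B1 and C1).

```python
def build_discriminator_batches(generated_pairs, slr_images):
    """Build the training set for the discrimination model: each
    (generated image, SLR sample) pair is labelled 'incorrect', and each
    SLR sample duplicated into a pair with itself is labelled 'correct'."""
    groups = []
    for generated, slr_sample in generated_pairs:
        groups.append(((generated, slr_sample), "incorrect"))
    for slr in slr_images:
        groups.append(((slr, slr), "correct"))
    return groups

batches = build_discriminator_batches(
    [("A2", "A1"), ("B2", "B1")], ["B1", "C1"])
print(len(batches))  # 4
```

This generator-versus-discriminator arrangement is the adversarial training scheme the embodiment describes: the discriminator learns to tell mixed pairs from pure SLR pairs, and the generator is later tuned until its outputs pass as "correct".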
In addition, the training process of the discrimination model in step S403 presented in Fig. 5 is only one possible training method; the training method of the trained discrimination model is not limited to Fig. 5, and other methods may also be used, as long as the trained discrimination model can judge whether the two images in an image group input to it are identical. Other training methods for the discrimination model are not described in the embodiments of the present application.
Each generated image output by the current neural network model and the corresponding SLR sample image selected in step S401 are input, as an image group, into the trained discrimination model, so that the trained discrimination model judges whether the label corresponding to each image group is "correct". If the trained discrimination model determines that the label corresponding to an image group is "correct", the generated image in that image group output by the current neural network model is deemed correct; if the trained discrimination model determines that the label corresponding to an image group is "incorrect", the generated image in that image group output by the current neural network model is deemed incorrect.
Step S404: count the proportion, among all image groups, of image groups whose generated image is correct, and determine that proportion as the generation accuracy of the current neural network model.
For example, assume that N image groups in total are input into the trained discrimination model and that step S403 judges the generated images in M of those groups to be correct; then M/N can be determined as the generation accuracy of the current neural network model.
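As a minimal sketch of this statistic (illustrative code, not the patent's implementation), the M/N ratio reduces to:

```python
# Step-S404 statistic: generation accuracy = M / N, where N is the total
# number of image groups and M the number judged correct by the discriminator.
def generation_accuracy(judgements):
    """judgements: iterable of booleans, one per image group (True = correct)."""
    judgements = list(judgements)
    return sum(judgements) / len(judgements) if judgements else 0.0

print(generation_accuracy([True, False, True, True]))  # 3 correct out of 4 -> 0.75
```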
In step S405, the parameters of the current neural network model are adjusted continually, and each selected non-SLR sample image is again input into the parameter-adjusted neural network model, until the generation accuracy of the current neural network model exceeds the preset accuracy; the current neural network model is then taken as the trained neural network model.
In Embodiment Two of the present application, step S405 above is identical to step S204 in Embodiment One; for details, refer to the description of Embodiment One, which is not repeated here.
It can be seen that, in the neural-network-model training method provided by Embodiment Two of the present application, the generated image output by the initial neural network model and the corresponding SLR sample image selected in step S401 are first input, as an image group, into the trained discrimination model described above; the parameters of the current neural network model are then adjusted continually until the trained discrimination model judges that, among the image groups input to it, the proportion labeled "correct" is sufficiently large (that is, the generated images in those image groups are close to the SLR sample result images). The trained neural network model can thereby convert an image input to it into an image with the shooting effect of an SLR camera.
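The generator-update loop of steps S402-S405 can be caricatured with a one-parameter "model" and a frozen stand-in discriminator. Everything here is a toy assumption: a real model has many parameters, images rather than scalars, and a learned discriminator; the sketch only shows the stop-when-accuracy-exceeds-threshold control flow.

```python
samples = [(x, 2.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]  # (non-SLR, SLR-target) pairs

def discriminator(generated, target, tol=0.1):
    """Frozen stand-in for the trained discrimination model: judges a
    generated image "correct" when it is close enough to its SLR counterpart."""
    return abs(generated - target) < tol

w = 0.5                   # the single "parameter" of the toy generator
preset_accuracy = 0.9
accuracy = 0.0
for step in range(1000):
    judged = [discriminator(w * x, y) for x, y in samples]
    accuracy = sum(judged) / len(judged)        # step S404: generation accuracy
    if accuracy > preset_accuracy:              # step S405: stop criterion
        break
    # Crude parameter adjustment toward the targets (gradient-free, for brevity).
    w += 0.01 * sum((y - w * x) * x for x, y in samples) / len(samples)

print(accuracy)   # 1.0 once the generator's outputs all pass the discriminator
```

The loop terminates once the frozen discriminator accepts enough of the generator's outputs, mirroring the "generation accuracy greater than preset accuracy" condition.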
In addition, in the embodiments of the present application, the training process described in steps S401-S405 above can also be iterated in a loop so as to generate a neural network model with better performance, specifically as shown in Fig. 6:
In step S601 (i.e. the part framed by the dashed box in Fig. 6), the initial neural network model and the sample database are used to train the initial discrimination model, generating the trained discrimination model (the specific training process is shown in Fig. 5); the trained discrimination model is then used to train the initial neural network model, thereby generating the trained neural network model (the specific training method is shown in Fig. 4).
In step S602, the initial neural network model and the initial discrimination model are updated; that is, the trained neural network model obtained in step S601 is taken as the new initial neural network model, and the trained discrimination model obtained in step S601 is taken as the new initial discrimination model.
After the initial neural network model and the initial discrimination model have been updated, the process returns to step S601 to once again generate a trained discrimination model and a trained neural network model. The current trained neural network model is repeatedly taken as the initial neural network model, the current trained discrimination model as the initial discrimination model, and step S601 is executed again, until the number of returns to step S601 reaches a preset count; the trained neural network model obtained in the last iteration is then taken as the finally trained neural network model, and that finally trained neural network model is deployed in the terminal device.
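The Fig. 6 loop can be sketched schematically as follows. The two training functions are placeholders standing in for the Fig. 5 and Fig. 4 procedures (their bodies are not the patent's actual routines), and the dictionaries merely track which round produced each model.

```python
# Alternately retrain discriminator and generator, feeding each round's
# outputs back in as the next round's initial models, for a preset number
# of iterations (the "return to step S601" loop of Fig. 6).
def train_discriminator(generator, sample_db):
    # Placeholder for the Fig. 5 procedure.
    return {"kind": "discriminator", "round": generator["round"]}

def train_generator(discriminator, generator, sample_db):
    # Placeholder for the Fig. 4 procedure.
    return {"kind": "generator", "round": generator["round"] + 1}

sample_db = object()                            # stands in for the non-SLR/SLR pairs
generator = {"kind": "generator", "round": 0}   # initial neural network model
preset_iterations = 3

for _ in range(preset_iterations):
    discriminator = train_discriminator(generator, sample_db)          # step S601, part 1
    generator = train_generator(discriminator, generator, sample_db)   # step S601, part 2
    # Step S602 is implicit: the trained models just produced are the
    # bindings used as "initial" models on the next pass through the loop.

final_model = generator     # model from the last round is the one deployed
print(final_model["round"])  # 3
```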
Alternatively, in the embodiments of the present application, the training process described in steps S401-S405 above may be iterated in a loop in the following way to generate a neural network model with better performance:
First, as in step S601, the initial neural network model and the sample database are used to train the initial discrimination model, generating the trained discrimination model; the trained discrimination model is then used to train the initial neural network model, thereby obtaining the trained neural network model.
Second, whether the currently obtained trained neural network model meets the requirements is judged (in the embodiments of the present application, the parameters of the current trained neural network model can be compared with those of the current initial neural network model to judge whether the current trained neural network model meets the requirements; see the subsequent description of this embodiment for details). If the requirements are not met, then, as in step S602, the current trained neural network model is taken as the initial neural network model and the current trained discrimination model as the initial discrimination model, and step S601 above is repeated to once again obtain a trained discrimination model and a trained neural network model.
Finally, whether the currently obtained trained neural network model meets the requirements is judged continually; if not, the initial neural network model and the initial discrimination model are updated and step S601 is executed again, until the current trained neural network model meets the requirements. The trained neural network model obtained in the last iteration is taken as the finally trained neural network model, and that finally trained neural network model is deployed in the terminal device.
Here, judging whether the current trained neural network model meets the requirements comprises:
comparing each parameter in the current trained neural network model with the corresponding parameter in the current initial neural network model to obtain an adjustment percentage for each parameter, where the adjustment percentage is the ratio of the parameter adjustment amount to the corresponding parameter value in the current initial neural network model. For example, suppose the current initial neural network model contains two parameters, a = 1 and b = 2, and in the current trained neural network model parameter a is 1.2 and parameter b is 1.8; then the adjustment percentage of parameter a is (1.2 - 1)/1 = 0.2, and the adjustment percentage of parameter b is (1.8 - 2)/2 = -0.1;
judging whether the absolute value of the adjustment percentage of every parameter is less than a preset adjustment threshold;
if so, confirming that the trained neural network model meets the requirements;
otherwise, confirming that the trained neural network model does not meet the requirements.
That is, if the parameters of the current trained neural network model are sufficiently close to those of the current initial neural network model, the training of the neural network model is deemed complete.
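This convergence test translates directly into code; the worked example from the text is reproduced below. The function name and the threshold value 0.05 are illustrative assumptions, not values from the patent.

```python
# A model "meets the requirements" when every parameter's relative change
# (adjustment percentage) from the previous round is below a preset threshold.
def meets_requirements(initial_params, trained_params, threshold=0.05):
    for name, before in initial_params.items():
        adjustment_pct = (trained_params[name] - before) / before
        if abs(adjustment_pct) >= threshold:
            return False
    return True

# The worked example from the text: a: 1 -> 1.2 (+20%), b: 2 -> 1.8 (-10%).
print(meets_requirements({"a": 1.0, "b": 2.0}, {"a": 1.2, "b": 1.8}))    # False
# A later round with only small adjustments would pass:
print(meets_requirements({"a": 1.0, "b": 2.0}, {"a": 1.01, "b": 1.96}))  # True
```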
Embodiment Two of the present application thus provides a neural-network-model training method different from that of Embodiment One. In the training method of Embodiment One, every time the parameters of the neural network model are adjusted, the image features of the generated image output by the parameter-adjusted neural network model must be extracted, and those features must then be matched for similarity against the image features of the SLR sample image corresponding to the generated image; the training method of Embodiment One can therefore make the training process of the neural network model take a relatively long time. By pre-training the discrimination model, the training method of Embodiment Two avoids performing image-feature extraction and similarity-matching computation after every parameter adjustment of the neural network model; compared with the model training method of Embodiment One, the model training method of Embodiment Two can therefore accelerate the training process of the neural network model.
It should be understood that the magnitudes of the step numbers in the above method embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Embodiment three
Embodiment Three of the present application provides an image processing apparatus. For ease of description, only the parts relevant to the present application are shown. The image processing apparatus 700 shown in Fig. 7 comprises:
an image acquisition module 701, configured to obtain an image to be processed;
an SLR effect conversion module 702, configured to input the image to be processed into the trained neural network model and obtain the image with the SLR camera shooting effect output by the trained neural network model;
a display module 703, configured to display the image output by the trained neural network model.
Optionally, the image processing apparatus 700 further comprises:
a prompt output module, configured to output prompt information, the prompt information being used to ask the user whether they agree to convert the image to be processed into an image with the SLR camera shooting effect;
a feedback receiving module, configured to receive feedback information input by the user, the feedback information indicating whether the user agrees to convert the image to be processed into an image with the SLR camera shooting effect.
Correspondingly, the SLR effect conversion module 702 is specifically configured to:
if the feedback information indicates that the user agrees to convert the image to be processed into an image with the SLR camera shooting effect, input the image to be processed into the trained neural network model and obtain the image with the SLR camera shooting effect output by the trained neural network model.
Optionally, the image processing apparatus 700 further comprises:
a size acquisition module, configured to obtain the image size of the image to be processed, the image size including the number of pixels in the image width direction and the number of pixels in the height direction;
a size judgment module, configured to judge whether the image size conforms to a preset size.
Correspondingly, the SLR effect conversion module 702 is specifically configured to:
if the image size does not conform to the preset size, perform size correction on the image to be processed, input the size-corrected image into the trained neural network model, and obtain the image with the SLR camera shooting effect output by the trained neural network model.
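A hedged sketch of the size check and correction follows. The patent does not specify the preset size or the correction method, so a fixed target size and a simple rescale-to-preset policy are assumed here purely for illustration.

```python
# Size check (size judgment module) and correction (SLR effect conversion
# module) before the image enters the neural network model.
PRESET_SIZE = (1280, 720)   # (width_pixels, height_pixels); illustrative value

def needs_correction(size, preset=PRESET_SIZE):
    """True when the image size does not conform to the preset size."""
    return size != preset

def correct_size(size, preset=PRESET_SIZE):
    """Return the size the image would be rescaled to before entering the model."""
    return preset if needs_correction(size, preset) else size

print(needs_correction((4032, 3024)))   # True: phone photo, does not match preset
print(correct_size((4032, 3024)))       # (1280, 720)
print(correct_size((1280, 720)))        # (1280, 720): already conforms, unchanged
```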
Optionally, the neural network model is trained using a training module, the training module comprising:
a sample selection unit, configured to select, from a sample database, each non-SLR sample image and the SLR sample image corresponding to each non-SLR sample image, where each non-SLR sample image is an image captured by a non-SLR camera and each SLR sample image is an image captured by an SLR camera;
a generation unit, configured to input each non-SLR sample image separately into the initial neural network model, so that the initial neural network model converts each non-SLR sample image into a generated image with the SLR camera shooting effect;
a generation judgment unit, configured to input the generated image corresponding to each non-SLR sample image and the corresponding SLR sample image, as an image group, into the trained discrimination model, so that the trained discrimination model judges, according to the SLR sample image in each image group, whether the generated image in that image group is correct;
an accuracy statistics unit, configured to count the proportion, among all image groups, of image groups whose generated image is correct, and determine that proportion as the generation accuracy of the current neural network model;
a parameter adjustment unit, configured to adjust the parameters of the current neural network model continually until the generation accuracy of the current neural network model exceeds a preset accuracy.
Optionally, the training module further comprises:
a discrimination model training unit, configured to train the initial discrimination model using the initial neural network model and the sample database, obtaining the trained discrimination model.
Optionally, the training module further comprises:
a requirement judgment unit, configured to judge whether the trained neural network model meets the requirements;
an updating unit, configured to, if the trained neural network model does not meet the requirements, take the trained neural network model as the initial neural network model and the trained discrimination model as the initial discrimination model.
Optionally, the requirement judgment unit comprises:
a parameter adjustment computation subunit, configured to compare each parameter in the trained neural network model with the corresponding parameter in the initial neural network model to obtain the adjustment percentage of each parameter, where the adjustment percentage is the ratio of the parameter adjustment amount to the corresponding parameter value in the initial neural network model;
a parameter adjustment judgment subunit, configured to judge whether the absolute value of the adjustment percentage of every parameter is less than a preset adjustment threshold;
a first requirement judgment subunit, configured to confirm that the trained neural network model meets the requirements if the absolute values of the adjustment percentages of all parameters are less than the preset adjustment threshold;
a second requirement judgment subunit, configured to confirm that the trained neural network model does not meet the requirements if the absolute values of the adjustment percentages of the parameters are not all less than the preset adjustment threshold.
It should be noted that, since the information exchange and execution processes between the above apparatus/units are based on the same concept as the method embodiments of the present application, their specific functions and the technical effects they bring can be found in the method embodiment section and are not repeated here.
Example IV
Fig. 8 is a schematic diagram of the terminal device provided by Embodiment Four of the present application. As shown in Fig. 8, the terminal device 8 of this embodiment comprises: a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and runnable on the processor 80. When executing the computer program 82, the processor 80 implements the steps in each of the above method embodiments, such as steps S101 to S103 shown in Fig. 1; alternatively, when executing the computer program 82, the processor 80 implements the functions of the modules/units in each of the above apparatus embodiments, such as the functions of modules 701 to 703 shown in Fig. 7.
Illustratively, the computer program 82 may be divided into one or more modules/units, the one or more modules/units being stored in the memory 81 and executed by the processor 80 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into an image acquisition module, an SLR effect conversion module, and a display module, the specific functions of each module being as follows:
obtaining an image to be processed;
inputting the image to be processed into the trained neural network model, and obtaining the image with the SLR camera shooting effect output by the trained neural network model;
displaying the image output by the trained neural network model.
The terminal device may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will understand that Fig. 8 is merely an example of the terminal device 8 and does not constitute a limitation on it; the terminal device may include more or fewer components than illustrated, combine certain components, or use different components; for example, the terminal device may also include input/output devices, network access devices, a bus, etc.
The processor 80 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or internal memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 8. Further, the memory 81 may include both an internal storage unit of the terminal device 8 and an external storage device. The memory 81 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art will clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example; in practical applications, the above functions can be allocated to different functional units and modules as needed, i.e. the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed or recorded in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely schematic; the division of the modules or units is only a logical functional division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the above embodiment methods of the present application may also be completed by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, etc. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content included in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.

Claims (10)

1. An image processing method, characterized by comprising:
obtaining an image to be processed;
inputting the image to be processed into a trained neural network model, and obtaining an image with an SLR camera shooting effect output by the trained neural network model;
displaying the image output by the trained neural network model.
2. The image processing method according to claim 1, characterized in that, before the step of inputting the image to be processed into the trained neural network model and obtaining the image with the SLR camera shooting effect output by the trained neural network model, the method further comprises:
outputting prompt information, the prompt information being used to ask the user whether they agree to convert the image to be processed into an image with the SLR camera shooting effect;
receiving feedback information input by the user, the feedback information indicating whether the user agrees to convert the image to be processed into an image with the SLR camera shooting effect;
correspondingly, inputting the image to be processed into the trained neural network model and obtaining the image with the SLR camera shooting effect output by the trained neural network model comprises:
if the feedback information indicates that the user agrees to convert the image to be processed into an image with the SLR camera shooting effect, inputting the image to be processed into the trained neural network model and obtaining the image with the SLR camera shooting effect output by the trained neural network model.
3. The image processing method according to claim 1, characterized in that, before the step of inputting the image to be processed into the trained neural network model and obtaining the image with the SLR camera shooting effect output by the trained neural network model, the method further comprises:
obtaining the image size of the image to be processed, the image size including the number of pixels in the image width direction and the number of pixels in the height direction;
judging whether the image size conforms to a preset size;
correspondingly, inputting the image to be processed into the trained neural network model and obtaining the image with the SLR camera shooting effect output by the trained neural network model comprises:
if the image size does not conform to the preset size, performing size correction on the image to be processed, inputting the size-corrected image into the trained neural network model, and obtaining the image with the SLR camera shooting effect output by the trained neural network model.
4. The image processing method according to any one of claims 1 to 3, characterized in that the training process of the neural network model comprises:
selecting, from a sample database, each non-SLR sample image and the SLR sample image corresponding to each non-SLR sample image, where each non-SLR sample image is an image captured by a non-SLR camera and each SLR sample image is an image captured by an SLR camera;
inputting each non-SLR sample image separately into an initial neural network model, so that the initial neural network model converts each non-SLR sample image into a generated image with the SLR camera shooting effect;
inputting the generated image corresponding to each non-SLR sample image and the corresponding SLR sample image, as an image group, into a trained discrimination model, so that the trained discrimination model judges, according to the SLR sample image in each image group, whether the generated image in that image group is correct;
counting the proportion, among all image groups, of image groups whose generated image is correct, and determining that proportion as the generation accuracy of the current neural network model;
continually adjusting the parameters of the current neural network model, and continuing to input each selected non-SLR sample image separately into the parameter-adjusted neural network model, until the generation accuracy of the current neural network model exceeds a preset accuracy, and taking the current neural network model as the trained neural network model.
5. The image processing method according to claim 4, characterized in that, before the step of inputting the generated image corresponding to each non-SLR sample image and the corresponding SLR sample image, as an image group, into the trained discrimination model, the training process of the neural network model further comprises:
training an initial discrimination model using the initial neural network model and the sample database, obtaining the trained discrimination model.
6. The image processing method according to claim 5, wherein after the step of taking the current neural network model as the trained neural network model, the training process of the neural network model further comprises:
judging whether the trained neural network model meets a requirement;
if the trained neural network model does not meet the requirement:
taking the trained neural network model as the initial neural network model, taking the trained discrimination model as the initial discrimination model, and returning to the step of selecting, from the sample database, each non-single-lens reflex camera sample image and the single-lens reflex camera sample image corresponding to each non-single-lens reflex camera sample image, and the subsequent steps.
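The outer retraining loop of claim 6 alternates between the two models until the generator is accepted. A minimal sketch, assuming the per-round training and acceptance routines are supplied as callables (they are hypothetical stand-ins, not part of the claimed method):

```python
def alternating_training(gen_model, disc_model,
                         train_discriminator, train_generator,
                         meets_requirement):
    """Outer retraining loop: the models trained in one round become
    the initial models of the next round, until the trained generator
    meets the requirement."""
    while True:
        # Retrain the discrimination model against the current generator.
        disc_model = train_discriminator(disc_model, gen_model)
        # Retrain the generator against the refreshed discriminator.
        gen_model = train_generator(gen_model, disc_model)
        if meets_requirement(gen_model):
            return gen_model, disc_model


# Toy stand-ins: each "training" step just increments a counter, and
# the requirement is met after three rounds.
final_gen, final_disc = alternating_training(
    0, 0,
    lambda d, g: d + 1,
    lambda g, d: g + 1,
    lambda g: g >= 3)
# final_gen == 3 and final_disc == 3.
```

This alternation, in which each side is retrained against the other's latest state, is the standard adversarial-training structure the claims describe.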
7. The image processing method according to claim 6, wherein the judging whether the trained neural network model meets the requirement comprises:
comparing each parameter in the trained neural network model with the corresponding parameter in the initial neural network model to obtain an adjustment percentage of each parameter, wherein the adjustment percentage is the ratio of a parameter's adjustment amount to the value of the corresponding parameter in the initial neural network model;
judging whether the absolute value of the adjustment percentage of each parameter is less than a preset adjustment threshold;
if so, confirming that the trained neural network model meets the requirement;
otherwise, confirming that the trained neural network model does not meet the requirement.
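The acceptance test of claim 7 reduces to a per-parameter relative-change check. A minimal sketch with flat parameter lists (the function names are illustrative, and the check assumes nonzero initial parameter values, since the claim divides by them):

```python
def adjustment_percentages(trained_params, initial_params):
    """Per-parameter adjustment amount divided by the corresponding
    value in the initial model (assumes nonzero initial values)."""
    return [(t - i) / i for t, i in zip(trained_params, initial_params)]


def meets_requirement(trained_params, initial_params, adjust_threshold):
    """Accept the trained model only when every parameter's absolute
    adjustment percentage is below the preset adjustment threshold."""
    return all(abs(p) < adjust_threshold
               for p in adjustment_percentages(trained_params,
                                               initial_params))


# A model whose parameters each moved by about 1% passes a 5% threshold;
# one whose first parameter moved by 20% does not.
print(meets_requirement([1.01, 2.02], [1.0, 2.0], 0.05))  # True
print(meets_requirement([1.20, 2.00], [1.0, 2.0], 0.05))  # False
```

In effect this treats small relative parameter movement as a convergence signal: if retraining barely changed the model, further rounds are deemed unnecessary.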
8. An image processing apparatus, comprising:
an image acquisition module, configured to acquire an image to be processed;
a single-lens reflex effect conversion module, configured to input the image to be processed into the trained neural network model and obtain the image with the single-lens reflex camera shooting effect output by the trained neural network model;
a display module, configured to display the image output by the trained neural network model.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN201810990824.6A 2018-08-28 2018-08-28 Image processing method, image processing apparatus and terminal device Pending CN109102460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810990824.6A CN109102460A (en) 2018-08-28 2018-08-28 Image processing method, image processing apparatus and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810990824.6A CN109102460A (en) 2018-08-28 2018-08-28 Image processing method, image processing apparatus and terminal device

Publications (1)

Publication Number Publication Date
CN109102460A true CN109102460A (en) 2018-12-28

Family

ID=64864116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810990824.6A Pending CN109102460A (en) 2018-08-28 2018-08-28 Image processing method, image processing apparatus and terminal device

Country Status (1)

Country Link
CN (1) CN109102460A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609506A (en) * 2017-09-08 2018-01-19 百度在线网络技术(北京)有限公司 Method and apparatus for generating image
US20180150947A1 (en) * 2016-11-28 2018-05-31 Adobe Systems Incorporated Facilitating sketch to painting transformations
CN108154465A (en) * 2017-12-19 2018-06-12 北京小米移动软件有限公司 Image processing method and device
CN108305223A (en) * 2018-01-09 2018-07-20 珠海格力电器股份有限公司 Image background blurring processing method and device
CN108416326A (en) * 2018-03-27 2018-08-17 百度在线网络技术(北京)有限公司 Face identification method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113094118A (en) * 2021-04-26 2021-07-09 深圳思谋信息科技有限公司 Data processing system, method, apparatus, computer device and storage medium
CN113094118B (en) * 2021-04-26 2023-05-30 深圳思谋信息科技有限公司 Data processing system, method, apparatus, computer device, and storage medium
CN114266324A (en) * 2021-12-30 2022-04-01 智慧眼科技股份有限公司 Model visualization modeling method and device, computer equipment and storage medium

Similar Documents

Publication Title
CN111950723B Neural network model training method, image processing method, device and terminal equipment
CN109166156A Method for generating a camera calibration image, mobile terminal and storage medium
CN109377502A Image processing method, image processing apparatus and terminal device
CN108765278A Image processing method, mobile terminal and computer-readable storage medium
CN108961303A Image processing method and device, electronic equipment and computer-readable medium
CN109040597A Multi-camera-based image processing method, mobile terminal and storage medium
EP3779891A1 Method and device for training neural network model, and method and device for generating time-lapse photography video
CN102918545B Method and apparatus for visual search stability
CN104333700A Image blurring method and image blurring device
CN109389135A Image screening method and device
WO2020064253A1 Methods for generating a deep neural net and for localising an object in an input image, deep neural net, computer program product, and computer-readable storage medium
CN109120862A High-dynamic-range image acquisition method, device and mobile terminal
CN109241318A Picture recommendation method, device, computer equipment and storage medium
CN103116754A Batch image segmentation method and batch image segmentation system based on recognition models
CN109005367A Method for generating a high-dynamic-range image, mobile terminal and storage medium
CN108364269A White-box photo post-processing method based on a reinforcement learning framework
CN109102460A Image processing method, image processing apparatus and terminal device
CN109255390A Preprocessing method and module for training images, discriminator, and readable storage medium
CN110288511A Minimum-error stitching method and device based on dual-camera images, and electronic equipment
CN109255768A Image completion method, apparatus, terminal and computer-readable storage medium
CN110838088B Multi-frame noise reduction method and device based on deep learning, and terminal equipment
CN109697090A Method for controlling a terminal device, terminal device and storage medium
CN103136745A System and method for performing depth estimation utilizing defocused pillbox images
CN105959593B Exposure method of a photographing device, and photographing device
CN103888655B Photographing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181228