CN108898169A - Image processing method, picture processing unit and terminal device - Google Patents
- Publication number
- CN108898169A (application CN201810629462.8A)
- Authority
- CN
- China
- Prior art keywords
- picture
- processed
- testing result
- classification
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
This application relates to image processing technology and provides an image processing method. The method includes: detecting foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target is present in the picture to be processed and, when one is present, indicates the category of each foreground target and the position of each foreground target in the picture; performing scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the background of the picture is recognized and, when it is recognized, indicates the background category of the picture; and processing the picture to be processed according to the first detection result and the classification result. The application enables finer-grained processing of the picture to be processed and effectively improves the overall processing effect.
Description
Technical field
This application relates to the field of image processing technology, and in particular to an image processing method, a picture processing unit, a terminal device, and a computer-readable storage medium.
Background art
At present, many users like to share pictures they have taken on social platforms, and usually process the pictures first to make them more aesthetically pleasing.
However, existing image processing methods usually first detect some preset target contained in the picture, such as a face, an animal, blue sky, or green grass, and then process the whole picture according to the detected preset target. For example, if the detected preset target is a face, whitening and skin-smoothing are applied to the whole picture.
Although existing processing methods can, to some extent, meet the user's demand for processing a certain preset target in the picture, they may degrade the overall effect of the processed picture: for example, while a face in the picture is whitened, the green grass and blue sky in the background may end up looking worse.
Summary of the invention
In view of this, embodiments of the present application provide an image processing method, a picture processing unit, a terminal device, and a computer-readable storage medium that can effectively improve the overall processing effect of a picture.
A first aspect of the embodiments of the present application provides an image processing method, including:

detecting foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target is present in the picture to be processed and, when one is present, indicates the category of each foreground target and the position of each foreground target in the picture;

performing scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the background of the picture is recognized and, when it is recognized, indicates the background category of the picture;

processing the picture to be processed according to the first detection result and the classification result.
In the embodiments of the present application, foreground targets in the picture to be processed, such as faces or animals, are first detected to obtain a foreground detection result. Next, scene classification is performed on the picture, i.e., the current background of the picture is identified as one of several scene types, such as a beach scene, forest scene, snowfield scene, grassland scene, desert scene, or blue-sky scene, to obtain a scene classification result. Finally, the picture is processed according to both the foreground detection result and the scene classification result, so that the foreground targets and the background image can be processed jointly: for example, a foreground face can be made whiter while the blue sky in the background is made bluer and the grass greener. This effectively improves the overall processing effect of the picture, enhances the user experience, and has strong usability and practicality.
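The two-result pipeline described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the detector and classifier are stand-in stubs (a real system would run something like an SSD detector and a MobileNet classifier), and all names and the example operations ("whiten", "boost_blue") are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ForegroundTarget:
    category: str                   # e.g. "face", "animal"
    box: Tuple[int, int, int, int]  # (x, y, width, height) in the picture

def detect_foreground(picture) -> List[ForegroundTarget]:
    # Stand-in for the first detection step (an SSD-style model in the text).
    return [ForegroundTarget("face", (40, 30, 120, 160))]

def classify_scene(picture) -> Optional[str]:
    # Stand-in for the scene classification step; None means "not recognized".
    return "blue_sky"

def process_picture(picture):
    targets = detect_foreground(picture)   # first detection result
    scene = classify_scene(picture)        # classification result
    ops = []
    for t in targets:                      # per-category foreground processing
        if t.category == "face":
            ops.append(("whiten", t.box))
    if scene == "blue_sky":                # background processing by scene type
        ops.append(("boost_blue", None))
    return {"targets": targets, "scene": scene, "ops": ops}
```

The point of the structure is that foreground operations are chosen per detected target while the background operation is chosen once per scene category, so the two are never forced to share a single whole-picture filter.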
In one embodiment, if the classification result indicates that the background of the picture to be processed is recognized, background targets in the background are detected to obtain a second detection result, where the second detection result indicates whether a background target is present in the background and, when one is present, indicates the category of each background target and the position of each background target in the picture to be processed.

Correspondingly, processing the picture to be processed according to the first detection result and the classification result includes:

processing the picture to be processed according to the first detection result, the classification result, and the second detection result.

In this embodiment, not only the foreground targets and the background image of the picture can be processed, but also the individual background targets within the background, so that the picture is processed at a finer granularity, the overall processing effect is further improved, and the user experience is enhanced.
A second aspect of the application provides a picture processing unit, including:

a first detection module configured to detect foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target is present in the picture and, when one is present, indicates the category and position of each foreground target in the picture;

a classification module configured to perform scene classification on the picture to obtain a classification result, where the classification result indicates whether the background of the picture is recognized and, when it is recognized, indicates the background category of the picture;

a processing module configured to process the picture according to the first detection result and the classification result.
A third aspect of the application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the steps of the image processing method when executing the computer program.

A fourth aspect of the application provides a computer-readable storage medium storing a computer program, where the steps of the image processing method are implemented when the computer program is executed by one or more processors.

A fifth aspect of the application provides a computer program product including a computer program, where the steps of the image processing method are implemented when the computer program is executed by one or more processors.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the image processing method provided by Embodiment 1 of the present application;
Fig. 2 is a schematic flowchart of the image processing method provided by Embodiment 2 of the present application;
Fig. 3 is a schematic diagram of a concrete scene of the image processing method provided by Embodiment 3 of the present application;
Figs. 4a and 4b are example diagrams of the picture processing provided by Embodiment 3 of the present application;
Fig. 5 is a schematic diagram of the picture processing unit provided by Embodiment 4 of the present application;
Fig. 6 is a schematic diagram of the terminal device provided by Embodiment 5 of the present application.
Specific embodiment
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the application. However, it will be clear to those skilled in the art that the application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the application with unnecessary detail.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In specific implementations, the terminal device described in the embodiments of the present application includes, but is not limited to, portable devices with a touch-sensitive surface (for example, a touch-screen display and/or a touchpad), such as a mobile phone, a laptop computer, or a tablet computer. It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer with a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).
In the following discussion, a terminal device including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal device may include one or more other physical user-interface devices such as a physical keyboard, a mouse, and/or a joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant-messaging application, a fitness application, a photo-management application, a digital-camera application, a digital-video-camera application, a web-browsing application, a digital-music-player application, and/or a video-player application.
The various applications executable on the terminal device may share at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, as well as the corresponding information displayed on the terminal, may be adjusted and/or changed between applications and/or within a given application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, terms such as "first" and "second" are used only to distinguish between descriptions and should not be understood as indicating or implying relative importance.
To illustrate the technical solutions described herein, specific embodiments are described below.
Referring to Fig. 1, which is a schematic flowchart of the image processing method provided by Embodiment 1 of the present application, the method may include:
Step S101: detect foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target is present in the picture and, when one is present, indicates the category of each foreground target and the position of each foreground target in the picture.
In this embodiment, the picture to be processed may be a newly captured picture, a pre-stored picture, a picture obtained from a network, a picture extracted from a video, or the like. For example, it may be a picture taken by the camera of the terminal device; a pre-stored picture sent by a WeChat friend; a picture downloaded from a specified website; or a frame extracted from a currently playing video. Preferably, it may also be a frame of the preview screen after the terminal device starts its camera.
In this embodiment, the detection result includes, but is not limited to: information indicating whether a foreground target is present in the picture to be processed and, when one is present, information indicating the category and position of each foreground target contained in the picture. A foreground target may be a target with dynamic characteristics in the picture, such as a person or an animal; it may also be static scenery relatively close to the viewer, such as flowers or food. Further, in order to locate foreground targets more accurately and distinguish between the recognized targets, this embodiment may also frame each detected foreground target with a different selection frame according to its category, for example, framing an animal with a box and a face with a circle.
Preferably, this embodiment may detect foreground targets in the picture to be processed using a trained scene detection model. Illustratively, the scene detection model may be a model with foreground-detection capability such as Single Shot Multibox Detection (SSD). Of course, other detection schemes may also be used: for example, a target (e.g., face) recognition algorithm may be used to detect whether a preset target is present in the picture, and after the preset target is detected, its position in the picture may be determined by a target localization or target tracking algorithm.
It should be noted that other schemes for detecting foreground targets that those skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall also fall within the protection scope of the present invention, and are not repeated here.
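The per-category frame selection mentioned above (a box frame for an animal, a round frame for a face) can be sketched detector-agnostically. This is a hypothetical illustration: the detection tuples, the shape names, and the confidence threshold are all assumptions not specified by the patent.

```python
# Per-category selection-frame shapes, following the embodiment's example
# (box frame for animals, round frame for faces); the names are illustrative.
FRAME_SHAPES = {"animal": "box", "face": "circle"}

def frame_detections(detections, min_confidence=0.5):
    """detections: (category, confidence, box) tuples from any detector
    (e.g. SSD). min_confidence is an assumed filtering threshold."""
    framed = []
    for category, confidence, box in detections:
        if confidence < min_confidence:
            continue  # discard low-confidence detections
        framed.append({
            "category": category,
            "box": box,
            "frame": FRAME_SHAPES.get(category, "box"),  # default to a box
        })
    return framed
```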
Taking the detection of foreground targets with a trained scene detection model as an example, the training process of the scene detection model is as follows:
sample pictures and the detection result corresponding to each sample picture are obtained in advance, where the detection result corresponding to a sample picture includes the category and position of each foreground target in that picture;
the foreground targets in the sample pictures are detected using an initial scene detection model, and the detection accuracy of the initial model is calculated against the pre-obtained detection results;
if the detection accuracy is less than a preset first detection threshold, the parameters of the initial scene detection model are adjusted and the sample pictures are detected again with the adjusted model, until the detection accuracy of the adjusted model is greater than or equal to the first detection threshold, at which point the model is taken as the trained scene detection model. Methods for adjusting the parameters include, but are not limited to, stochastic gradient descent and momentum-based update algorithms.
Step S102: perform scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the background of the picture is recognized and, when it is recognized, indicates the background category of the picture.
In this embodiment, performing scene classification on the picture means identifying which kind of scene the current background of the picture belongs to, such as a beach scene, forest scene, snowfield scene, grassland scene, desert scene, or blue-sky scene.
Preferably, scene classification may be performed on the picture using a trained scene classification model. Illustratively, the scene classification model may be a model with background-classification capability such as MobileNet. Of course, other scene classification schemes may also be used: for example, after the foreground targets in the picture are detected by the foreground detection model, the remainder of the picture may be taken as the background, and the category of the remainder identified by an image recognition algorithm.
It should be noted that other schemes for detecting the background that those skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall also fall within the protection scope of the present invention, and are not repeated here.
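The "remainder of the picture as background" alternative can be illustrated with a mask: everything outside the detected foreground boxes counts as background to be classified. This is a schematic sketch with assumed names, not the patent's implementation.

```python
def background_mask(height, width, foreground_boxes):
    """Return a grid that is True where a pixel lies outside every detected
    foreground box, i.e. belongs to the remainder treated as background."""
    mask = [[True] * width for _ in range(height)]
    for (x, y, w, h) in foreground_boxes:
        # Clip each box to the picture bounds before marking it as foreground.
        for row in range(max(y, 0), min(y + h, height)):
            for col in range(max(x, 0), min(x + w, width)):
                mask[row][col] = False
    return mask
```

A classifier would then be run only on the pixels where the mask is True.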
Taking the classification of the background with a trained scene classification model as an example, the training process of the scene classification model is as follows:
sample pictures and the classification result corresponding to each sample picture are obtained in advance; for example, sample picture 1 is a grassland scene, sample picture 2 is a snowfield scene, and sample picture 3 is a beach scene;
scene classification is performed on each sample picture using an initial scene classification model, and the classification accuracy of the initial model is calculated against the pre-obtained classification results, i.e., whether sample picture 1 is recognized as a grassland scene, sample picture 2 as a snowfield scene, sample picture 3 as a beach scene, and sample picture 4 as a desert scene;
if the classification accuracy is less than a preset classification threshold (for example 75%, i.e., fewer than 3 of the 4 sample pictures are correctly recognized), the parameters of the initial scene classification model are adjusted and the sample pictures are classified again with the adjusted model, until the classification accuracy of the adjusted model is greater than or equal to the classification threshold, at which point the model is taken as the trained scene classification model. Methods for adjusting the parameters include, but are not limited to, stochastic gradient descent and momentum-based update algorithms.
Step S103: process the picture to be processed according to the first detection result and the classification result.

Specifically, this may include the following cases. If the first detection result indicates that no foreground target is present in the picture and the classification result indicates that the background of the picture is recognized, then:

the picture processing mode of the picture is obtained according to its background category, and the picture is processed according to that mode to obtain the processed picture.

Alternatively, if the first detection result indicates that a foreground target is present and the classification result indicates that the background is not recognized, then:

the picture is processed according to the category and position of each foreground target indicated by the first detection result.

Alternatively, if the first detection result indicates that a foreground target is present and the classification result indicates that the background is recognized, then:

the picture is processed according to the category and position of each foreground target indicated by the first detection result and the background category indicated by the classification result.
Processing of the picture to be processed includes, but is not limited to, style conversion of the foreground targets and/or the background, and adjustment of image parameters such as saturation, brightness, and/or contrast.
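The case analysis of step S103 amounts to a three-way dispatch on the two results. The sketch below is a minimal rendering of that dispatch with assumed names; what each branch actually does to the pixels is left abstract.

```python
def choose_processing(foreground_targets, background_category):
    """foreground_targets: list from the first detection result (may be empty);
    background_category: the classification result, or None if unrecognized."""
    if not foreground_targets and background_category is not None:
        # Case 1: no foreground - pick a processing mode by background category.
        return ("background_only", background_category)
    if foreground_targets and background_category is None:
        # Case 2: background unrecognized - process each foreground target
        # according to its category and position.
        return ("foreground_only", foreground_targets)
    if foreground_targets and background_category is not None:
        # Case 3: both available - combine foreground and background processing.
        return ("both", foreground_targets, background_category)
    return ("no_processing",)
```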
Through the embodiments of the present application, the foreground targets and the background image of the picture to be processed can be processed jointly, effectively improving the overall processing effect of the picture.
Referring to Fig. 2, which is a schematic flowchart of the image processing method provided by Embodiment 2 of the present application, the method may include:
Step S201: detect foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target is present in the picture and, when one is present, indicates the category of each foreground target and the position of each foreground target in the picture.

Step S202: perform scene classification on the picture to obtain a classification result, where the classification result indicates whether the background of the picture is recognized and, when it is recognized, indicates the background category of the picture.
The specific implementation of steps S201 and S202 may refer to steps S101 and S102 above and is not repeated here.
Step S203: if the classification result indicates that the background of the picture is recognized, detect background targets in the background to obtain a second detection result, where the second detection result indicates whether a background target is present in the background and, when one is present, indicates the category of each background target and the position of each background target in the picture.
As a preferred embodiment of the application, in order to further improve the processing effect, after the background of the picture to be processed is recognized, the background targets in the background are also detected so that they can be processed subsequently. The detection result includes, but is not limited to: information indicating whether a background target is present in the background and, when one is present, information indicating the category and position of each background target. A background target is any of the targets making up the background, such as the white clouds in a blue sky or the small flowers in green grass. Further, in order to locate background targets more accurately and distinguish between the recognized targets, this embodiment may also frame each detected background target with a different selection frame, for example, framing a cloud with a dotted circle and a flower with an ellipse.
Preferably, in order to improve the detection efficiency for background targets, this embodiment may detect them using a trained shallow convolutional neural network model, where a shallow convolutional neural network model is a neural network model whose number of convolutional layers is less than a predetermined number (for example, 8). Illustratively, the shallow convolutional neural network may be a model with background-target detection capability such as AlexNet. Of course, VGGNet, GoogLeNet, ResNet, or other models may also be used.
It should be noted that other schemes for detecting background targets that those skilled in the art can readily conceive of within the technical scope disclosed by the present invention shall also fall within the protection scope of the present invention, and are not repeated here.
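The "shallow" criterion above is purely a count of convolutional layers. The sketch below encodes that check over a schematic layer plan; the AlexNet-like plan (5 convolutional layers) is illustrative, not a faithful architecture definition.

```python
SHALLOW_CONV_LIMIT = 8  # the embodiment's example predetermined number

# An AlexNet-like layer plan (AlexNet has 5 convolutional layers), used here
# only to illustrate the shallowness criterion; the plan is schematic.
ALEXNET_PLAN = ["conv", "pool", "conv", "pool", "conv", "conv", "conv",
                "pool", "fc", "fc", "fc"]

def is_shallow(layer_plan, limit=SHALLOW_CONV_LIMIT):
    """A model counts as shallow when its convolutional layers number
    fewer than the predetermined limit."""
    return sum(1 for layer in layer_plan if layer == "conv") < limit
```

By this criterion AlexNet qualifies as shallow, while deeper families such as VGGNet or ResNet generally would not; the embodiment nevertheless allows them as alternatives.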
Taking the detection of background targets with a trained shallow convolutional neural network model as an example, the training process of the model is as follows:
sample pictures and the detection result of the background in each sample picture are obtained in advance, where the detection result includes the category and position of each background target;
the background targets in the backgrounds are detected using an initial shallow convolutional neural network model, and the detection accuracy of the initial model is calculated against the pre-obtained detection results;
if the detection accuracy is less than a preset second detection threshold, the parameters of the initial shallow convolutional neural network model are adjusted and the backgrounds of the sample pictures are detected again with the adjusted model, until the detection accuracy of the adjusted model is greater than or equal to the second detection threshold, at which point the model is taken as the trained shallow convolutional neural network model.
Step S204: process the picture to be processed according to the first detection result, the classification result, and the second detection result.
Specifically, if the first detection result indicates that no foreground target is present in the picture, the classification result indicates that the background is recognized, and the second detection result indicates that no background target is present in the background, then:

the picture processing mode of the picture is obtained according to its background category, and the picture is processed according to that mode to obtain the processed picture.

Alternatively, if the first detection result indicates that a foreground target is present, the classification result indicates that the background is recognized, and the second detection result indicates that no background target is present in the background, then:

the picture is processed according to the category and position of each foreground target indicated by the first detection result and the background category indicated by the classification result.

Alternatively, if the first detection result indicates that a foreground target is present, the classification result indicates that the background is recognized, and the second detection result indicates that a background target is present in the background, then:

the picture is processed according to the category and position of each foreground target indicated by the first detection result, the background category indicated by the classification result, and the category and position of each background target indicated by the second detection result.
Optionally, processing the picture to be processed according to the category and position of each foreground target indicated by the first detection result and the background category of the picture to be processed indicated by the classification result includes:
obtaining the picture processing mode of the picture to be processed according to the background category of the picture to be processed, and processing the picture to be processed according to the picture processing mode to obtain a processed first picture;
obtaining the picture processing mode of each foreground target according to the category of each foreground target indicated by the first detection result, and determining the picture region where each foreground target is located according to the position of each foreground target in the picture to be processed indicated by the first detection result;
processing the picture region where each foreground target is located according to the picture processing mode of that foreground target, to obtain a corresponding processed picture region;
replacing the picture region where each foreground target is located in the first picture with the corresponding processed picture region, to obtain a processed second picture, and taking the processed second picture as the final processed picture.
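The crop, process and replace steps above can be sketched as follows; the grid-of-pixels representation and the "+1" stand-in for a picture processing mode are assumptions for illustration only:

```python
def crop(picture, box):
    """Extract the picture region inside box = (left, top, right, bottom)."""
    left, top, right, bottom = box
    return [row[left:right] for row in picture[top:bottom]]

def paste(picture, box, region):
    """Replace the region of `picture` inside `box` with the processed
    `region`, returning the second picture (the input is left unchanged)."""
    left, top, right, bottom = box
    second = [row[:] for row in picture]
    for y in range(top, bottom):
        for x in range(left, right):
            second[y][x] = region[y - top][x - left]
    return second

# First picture: a 4x4 grid of zeros; the foreground box is (1, 1, 3, 3).
first = [[0] * 4 for _ in range(4)]
box = (1, 1, 3, 3)
processed = [[v + 1 for v in row] for row in crop(first, box)]  # stand-in mode
second = paste(first, box, processed)
```

Only the pixels inside the foreground box change; the rest of the first picture carries over into the second picture untouched.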
Optionally, processing the picture to be processed according to the category and position of each foreground target indicated by the first detection result, the background category of the picture to be processed indicated by the classification result, and the category of each background target and the position of each background target in the picture to be processed indicated by the second detection result includes:
obtaining the picture processing mode of the picture to be processed according to the background category of the picture to be processed, and processing the picture to be processed according to the picture processing mode to obtain a processed first picture;
obtaining the picture processing mode of each foreground target according to the category of each foreground target indicated by the first detection result, and determining the picture region where each foreground target is located according to the position of each foreground target in the picture to be processed indicated by the first detection result;
processing the picture region where each foreground target is located according to the picture processing mode of that foreground target, to obtain a corresponding processed first picture region;
obtaining the picture processing mode of each background target according to the category of each background target indicated by the second detection result, and determining the picture region where each background target is located according to the position of each background target in the picture to be processed indicated by the second detection result;
processing the picture region where each background target is located according to the picture processing mode of that background target, to obtain a corresponding processed second picture region;
replacing the picture region where each foreground target is located in the first picture with the corresponding processed first picture region, replacing the picture region where each background target is located in the first picture with the corresponding processed second picture region, to obtain a processed second picture, and taking the processed second picture as the final processed picture.
The picture processing modes include, but are not limited to, style conversion of the foreground target, the background and/or the background target, and adjustment of picture parameters such as saturation, brightness and/or contrast.
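The saturation, brightness and contrast adjustments named above can be illustrated with a minimal per-pixel sketch. It is not from the patent; the luma weights and mid-grey pivot are conventional choices, and a real implementation would operate on whole images:

```python
def clamp(v):
    """Clamp a channel value to the valid [0, 255] range."""
    return max(0, min(255, round(v)))

def adjust(pixel, brightness=0, contrast=1.0):
    """Linear brightness/contrast adjustment of one RGB pixel,
    pivoting contrast around mid-grey (128)."""
    return tuple(clamp(contrast * (c - 128) + 128 + brightness) for c in pixel)

def saturate(pixel, factor=1.0):
    """Scale saturation by moving each channel away from (factor > 1)
    or toward (factor < 1) the pixel's luma (BT.601 weights)."""
    r, g, b = pixel
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return tuple(clamp(luma + factor * (c - luma)) for c in (r, g, b))
```

A mode such as "the blue sky is bluer" could then be expressed as a saturation boost restricted to the region where that target is located.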
With the embodiments of the present application, not only can the foreground target and the background image in the picture to be processed be processed, but also the background targets within the background image, so that the picture is processed at a finer granularity, the overall processing effect of the picture is improved more effectively, and the user experience is enhanced.
Referring to Fig. 3, which is a schematic diagram of a concrete scene of the picture processing method provided by the third embodiment of the present application, the method may include:
Step S301: after detecting that the terminal device has started the camera, obtain the picture in the camera preview;
Step S302: perform foreground target detection on the picture through a trained scene detection model to obtain a first detection result;
Step S303: the first detection result indicates that a foreground target is present in the picture and that the foreground target is a "ship"; the "ship" is frame-selected with a solid box to determine the position of the "ship" in the picture, as shown in Fig. 4a;
Step S304: perform scene classification on the picture through a trained scene classification model to obtain a classification result; the classification result indicates that a blue-sky scene, a green-hill scene and a sea scene are present in the picture, and the blue-sky scene, green-hill scene and sea scene are frame-selected with rectangular dotted boxes, as shown in Fig. 4a;
Step S305: detect the background targets in the background using a trained shallow convolutional neural network model to obtain a second detection result;
Step S306: the second detection result indicates that background targets are present in the background and that the background targets are "blue sky" and "white clouds"; "blue sky" and "white clouds" are frame-selected with triangular dotted boxes to determine the positions of "blue sky" and "white clouds" in the picture, as shown in Fig. 4a;
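Frame selection as in steps S303 to S306 amounts to drawing one bounding box per target. A toy sketch on a character grid follows, with the characters standing in for the solid and dotted box styles of Fig. 4a; the coordinates are illustrative:

```python
def draw_box(grid, box, ch):
    """Frame-select a target by drawing its bounding box onto a character
    grid; box = (left, top, right, bottom), inclusive."""
    left, top, right, bottom = box
    for x in range(left, right + 1):
        grid[top][x] = ch      # top edge
        grid[bottom][x] = ch   # bottom edge
    for y in range(top, bottom + 1):
        grid[y][left] = ch     # left edge
        grid[y][right] = ch    # right edge

grid = [[" "] * 12 for _ in range(6)]
draw_box(grid, (1, 1, 5, 4), "#")   # foreground "ship": solid box
draw_box(grid, (7, 0, 11, 2), ".")  # background target "blue sky": dotted box
```

The box interior is left untouched, which is what lets the later steps crop and replace exactly the framed region.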
Step S307: obtain the picture processing modes corresponding to the blue-sky scene, the green-hill scene and the sea scene according to the scene categories of the picture, and process the blue-sky scene, green-hill scene and sea scene according to those picture processing modes, so that the blue sky is bluer, the green hill is greener and the sea is bluer, obtaining a processed first picture;
Step S308: obtain the picture processing mode of the foreground target "ship" (for example, a nostalgic style), determine the picture region where the foreground target "ship" is located according to the solid box, and process the picture region where the foreground target is located according to the picture processing mode of the foreground target, obtaining a corresponding processed first picture region;
Step S309: obtain the picture processing modes of the background targets "blue sky" and "white clouds" (a colour-enhancement mode: the blue sky bluer, the white clouds whiter), determine the picture regions where "blue sky" and "white clouds" are located according to the triangular dotted boxes, and process the picture region where each background target is located according to the picture processing mode of that background target, obtaining corresponding processed second picture regions;
Step S310: replace the picture region where the foreground target "ship" is located in the first picture with the corresponding processed first picture region, replace the picture regions where the background targets "blue sky" and "white clouds" are located in the first picture with the corresponding processed second picture regions, obtain a processed second picture, and take the processed second picture as the final processed picture, as shown in Fig. 4b.
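The ordering of steps S307 to S310 can be summarized as a category-to-mode lookup applied background first, then foreground regions, then background-target regions. The table entries below are illustrative only and are not defined by the patent:

```python
# Hypothetical mode table for the scene of Fig. 4a.
MODES = {
    "blue sky scene": "boost blue",
    "green hill scene": "boost green",
    "sea scene": "boost blue",
    "ship": "nostalgic style",
    "blue sky": "colour enhancement",
    "white clouds": "colour enhancement",
}

def plan(scene_categories, foreground_targets, background_targets):
    """Order the processing as in steps S307-S310: background scenes first
    (yielding the first picture), then foreground regions, then background
    target regions (yielding the second, final picture)."""
    steps = [(c, MODES[c]) for c in scene_categories]
    steps += [(t, MODES[t]) for t in foreground_targets]
    steps += [(t, MODES[t]) for t in background_targets]
    return steps
```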
For ease of understanding, this embodiment of the present application illustrates the picture processing scheme of the present application with a concrete application scenario. As can be seen from Figs. 4a and 4b, the overall effect of Fig. 4b, obtained after processing with the picture processing scheme of the present application, is substantially better than that of Fig. 4a.
It should be understood that, in the above embodiments, the sequence numbers of the steps do not imply an execution order; the execution order of the steps should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of the present invention.
Fig. 5 is a schematic diagram of a picture processing unit provided by the fourth embodiment of the present application. For ease of description, only the parts relevant to this embodiment of the present application are shown.
The picture processing unit 5 may be a software unit, a hardware unit or a combined software/hardware unit built into a terminal device such as a mobile phone, a tablet computer or a notebook, or may be integrated into such a terminal device as an independent component.
The picture processing unit 5 includes:
a first detection module 51, configured to detect a foreground target in a picture to be processed to obtain a first detection result, the first detection result being used to indicate whether a foreground target is present in the picture to be processed and, when a foreground target is present, to indicate the category of each foreground target and the position of each foreground target in the picture to be processed;
a classification module 52, configured to perform scene classification on the picture to be processed to obtain a classification result, the classification result being used to indicate whether the background of the picture to be processed is recognized and, when the background of the picture to be processed is recognized, to indicate the background category of the picture to be processed;
a processing module 53, configured to process the picture to be processed according to the first detection result and the classification result.
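A minimal sketch of this modular structure follows, with the three modules supplied as callables; their implementations here are assumptions for illustration, not the patent's:

```python
class PictureProcessingUnit:
    """Sketch of unit 5 in Fig. 5: a first detection module (51), a
    classification module (52) and a processing module (53)."""

    def __init__(self, detect_foreground, classify_scene, process):
        self.detect_foreground = detect_foreground   # module 51
        self.classify_scene = classify_scene         # module 52
        self.process = process                       # module 53

    def run(self, picture):
        first_result = self.detect_foreground(picture)
        classification = self.classify_scene(picture)
        return self.process(picture, first_result, classification)

# Illustrative wiring with trivial stand-ins for the trained models.
unit = PictureProcessingUnit(
    detect_foreground=lambda p: [("ship", (1, 1, 3, 3))],
    classify_scene=lambda p: "sea",
    process=lambda p, fg, bg: (len(fg), bg),
)
```

Swapping any callable for a different model leaves the unit unchanged, which matches the remark below that function allocation between units is a design choice.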
Optionally, the picture processing unit 5 further includes:
a second detection module, configured to detect, when the classification result indicates that the background of the picture to be processed is recognized, the background targets in the background to obtain a second detection result, the second detection result being used to indicate whether a background target is present in the background and, when a background target is present, to indicate the category of each background target and the position of each background target in the picture to be processed.
Correspondingly, the processing module 53 is specifically configured to process the picture to be processed according to the first detection result, the classification result and the second detection result.
Optionally, the processing module 53 is specifically configured such that, if the first detection result indicates that no foreground target is present in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target is present in the background, then:
the picture processing mode of the picture to be processed is obtained according to the background category of the picture to be processed, and the picture to be processed is processed according to the picture processing mode to obtain a processed picture.
Optionally, the processing module 53 is specifically configured such that, if the first detection result indicates that a foreground target is present in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target is present in the background, then:
the picture to be processed is processed according to the category and position of each foreground target indicated by the first detection result and the background category of the picture to be processed indicated by the classification result.
Optionally, the processing module 53 includes:
a first processing unit, configured to obtain the picture processing mode of the picture to be processed according to the background category of the picture to be processed, and process the picture to be processed according to the picture processing mode to obtain a processed first picture;
a second processing unit, configured to obtain the picture processing mode of each foreground target according to the category of each foreground target indicated by the first detection result, and determine the picture region where each foreground target is located according to the position of each foreground target in the picture to be processed indicated by the first detection result;
a third processing unit, configured to process the picture region where each foreground target is located according to the picture processing mode of that foreground target, to obtain a corresponding processed picture region;
a fourth processing unit, configured to replace the picture region where each foreground target is located in the first picture with the corresponding processed picture region, to obtain a processed second picture, and to take the processed second picture as the final processed picture.
Optionally, the processing module 53 is specifically configured such that, if the first detection result indicates that a foreground target is present in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that a background target is present in the background, then:
the picture to be processed is processed according to the category and position of each foreground target indicated by the first detection result, the background category of the picture to be processed indicated by the classification result, and the category of each background target and the position of each background target in the picture to be processed indicated by the second detection result.
Optionally, the processing module 53 includes:
a fifth processing unit, configured to obtain the picture processing mode of the picture to be processed according to the background category of the picture to be processed, and process the picture to be processed according to the picture processing mode to obtain a processed first picture;
a sixth processing unit, configured to obtain the picture processing mode of each foreground target according to the category of each foreground target indicated by the first detection result, and determine the picture region where each foreground target is located according to the position of each foreground target in the picture to be processed indicated by the first detection result;
a seventh processing unit, configured to process the picture region where each foreground target is located according to the picture processing mode of that foreground target, to obtain a corresponding processed first picture region;
an eighth processing unit, configured to obtain the picture processing mode of each background target according to the category of each background target indicated by the second detection result, and determine the picture region where each background target is located according to the position of each background target in the picture to be processed indicated by the second detection result;
a ninth processing unit, configured to process the picture region where each background target is located according to the picture processing mode of that background target, to obtain a corresponding processed second picture region;
a tenth processing unit, configured to replace the picture region where each foreground target is located in the first picture with the corresponding processed first picture region, to replace the picture region where each background target is located in the first picture with the corresponding processed second picture region, to obtain a processed second picture, and to take the processed second picture as the final processed picture.
It will be clearly appreciated by those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is merely illustrative. In practical applications, the above functions may be allocated to different functional units or modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from one another and are not intended to limit the protection scope of the present application. As the information exchange and execution processes between the above devices/units are based on the same conception as the method embodiments of the present application, reference may be made to the method embodiment parts for their specific functions and technical effects, which are not repeated here.
Fig. 6 is a schematic diagram of a terminal device provided by the fifth embodiment of the present application. As shown in Fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61, and a computer program 62, such as a picture processing program, stored in the memory 61 and runnable on the processor 60. When executing the computer program 62, the processor 60 implements the steps in the above picture processing method embodiments, for example steps 101 to 103 shown in Fig. 1; alternatively, when executing the computer program 62, the processor 60 implements the functions of the modules/units in the above device embodiments, for example the functions of modules 51 to 53 shown in Fig. 5.
The terminal device 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that Fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation on the terminal device 6, which may include more or fewer components than shown, combine certain components, or use different components; for example, the terminal device may further include input/output devices, network access devices, buses and the like.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or internal memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card or a flash card (Flash Card) equipped on the terminal device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used to store the computer program and the other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed or recorded in a certain embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled professional may use different methods to implement the described functions for each specific application, but such implementations should not be considered to go beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed device/terminal device and method may be implemented in other ways. For example, the device/terminal device embodiments described above are merely illustrative; the division into the modules or units is only a logical functional division, and there may be other divisions in actual implementation, such as combining multiple units or components or integrating them into another system, or ignoring or not executing some features. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of the above method embodiments. The computer program includes computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
Specifically, this may be as follows. An embodiment of the present application further provides a computer-readable storage medium, which may be the computer-readable storage medium contained in the memory in the above embodiment, or a computer-readable storage medium that exists alone and is not assembled into a terminal device. The computer-readable storage medium stores one or more computer programs which, when executed by one or more processors, implement the following steps of the picture processing method:
detecting a foreground target in a picture to be processed to obtain a first detection result, the first detection result being used to indicate whether a foreground target is present in the picture to be processed and, when a foreground target is present, to indicate the category of each foreground target and the position of each foreground target in the picture to be processed;
performing scene classification on the picture to be processed to obtain a classification result, the classification result being used to indicate whether the background of the picture to be processed is recognized and, when the background of the picture to be processed is recognized, to indicate the background category of the picture to be processed;
processing the picture to be processed according to the first detection result and the classification result.
Assuming that the above is the first possible implementation, in a second possible implementation provided on the basis of the first possible implementation, the method further includes:
if the classification result indicates that the background of the picture to be processed is recognized, detecting the background targets in the background to obtain a second detection result, the second detection result being used to indicate whether a background target is present in the background and, when a background target is present, to indicate the category of each background target and the position of each background target in the picture to be processed;
correspondingly, processing the picture to be processed according to the first detection result and the classification result includes:
processing the picture to be processed according to the first detection result, the classification result and the second detection result.
Assuming that the above is the second possible implementation, in a third possible implementation provided on the basis of the second possible implementation, processing the picture to be processed according to the first detection result, the classification result and the second detection result includes:
if the first detection result indicates that no foreground target is present in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target is present in the background, then:
obtaining the picture processing mode of the picture to be processed according to the background category of the picture to be processed, and processing the picture to be processed according to the picture processing mode to obtain a processed picture.
Assuming that the above is the second possible implementation, in a fourth possible implementation provided on the basis of the second possible implementation, processing the picture to be processed according to the first detection result, the classification result and the second detection result includes:
if the first detection result indicates that a foreground target is present in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target is present in the background, then:
processing the picture to be processed according to the category and position of each foreground target indicated by the first detection result and the background category of the picture to be processed indicated by the classification result.
In a fifth possible implementation provided on the basis of the fourth possible implementation, processing the picture to be processed according to the category and position of each foreground target indicated by the first detection result and the background category of the picture to be processed indicated by the classification result includes:
obtaining the picture processing mode of the picture to be processed according to the background category of the picture to be processed, and processing the picture to be processed according to the picture processing mode to obtain a processed first picture;
obtaining the picture processing mode of each foreground target according to the category of each foreground target indicated by the first detection result, and determining the picture region where each foreground target is located according to the position of each foreground target in the picture to be processed indicated by the first detection result;
processing the picture region where each foreground target is located according to the picture processing mode of that foreground target, to obtain a corresponding processed picture region;
replacing the picture region where each foreground target is located in the first picture with the corresponding processed picture region, to obtain a processed second picture, and taking the processed second picture as the final processed picture.
In a sixth possible implementation provided on the basis of the second possible implementation, processing the picture to be processed according to the first detection result, the classification result and the second detection result includes:
if the first detection result indicates that a foreground target is present in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that a background target is present in the background, then:
processing the picture to be processed according to the category and position of each foreground target indicated by the first detection result, the background category of the picture to be processed indicated by the classification result, and the category of each background target and the position of each background target in the picture to be processed indicated by the second detection result.
In a seventh possible implementation provided on the basis of the sixth possible implementation, processing the picture to be processed according to the category and position of each foreground target indicated by the first detection result, the background category of the picture to be processed indicated by the classification result, and the category of each background target and the position of each background target in the picture to be processed indicated by the second detection result includes:
obtaining the picture processing mode of the picture to be processed according to the background category of the picture to be processed, and processing the picture to be processed according to the picture processing mode to obtain a processed first picture;
obtaining the picture processing mode of each foreground target according to the category of each foreground target indicated by the first detection result, and determining the picture region where each foreground target is located according to the position of each foreground target in the picture to be processed indicated by the first detection result;
processing the picture region where each foreground target is located according to the picture processing mode of that foreground target, to obtain a corresponding processed first picture region;
obtaining the picture processing mode of each background target according to the category of each background target indicated by the second detection result, and determining the picture region where each background target is located according to the position of each background target in the picture to be processed indicated by the second detection result;
processing the picture region where each background target is located according to the picture processing mode of that background target, to obtain a corresponding processed second picture region;
replacing the picture region where each foreground target is located in the first picture with the corresponding processed first picture region, replacing the picture region where each background target is located in the first picture with the corresponding processed second picture region, to obtain a processed second picture, and taking the processed second picture as the final processed picture.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements of some of the technical features therein; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.
Claims (10)
1. A picture processing method, characterized by comprising:
detecting a foreground target in a picture to be processed to obtain a first detection result, wherein the first detection result indicates whether a foreground target exists in the picture to be processed, and, when a foreground target exists, indicates the classification of each foreground target and the position of each foreground target in the picture to be processed;
performing scene classification on the picture to be processed to obtain a classification result, wherein the classification result indicates whether the background of the picture to be processed is recognized, and, when the background of the picture to be processed is recognized, indicates the background classification of the picture to be processed;
processing the picture to be processed according to the first detection result and the classification result.
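As a sketch of how the two results of claim 1 might be represented and combined, the snippet below uses hypothetical Python dataclasses; the field names (`targets`, `background_category`, and so on) are illustrative inventions, and the actual per-mode processing is abstracted into a list of planned actions.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DetectedTarget:
    category: str                   # e.g. "person" or "cat"
    box: Tuple[int, int, int, int]  # (top, left, bottom, right) in the picture

@dataclass
class FirstDetectionResult:
    targets: List[DetectedTarget]   # empty list: no foreground target found

@dataclass
class ClassificationResult:
    background_category: Optional[str]  # None: background not recognized

def process_picture(picture, detection, classification):
    """Top-level flow of claim 1. The concrete per-mode processing is
    abstracted away: each step is recorded as a planned action."""
    actions = []
    if classification.background_category is not None:
        actions.append(("background", classification.background_category))
    for target in detection.targets:
        actions.append(("foreground", target.category, target.box))
    return actions
```

The point of the split is that the background is handled by one scene-level mode while each detected foreground target can be handled by its own mode.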
2. The picture processing method according to claim 1, characterized by further comprising:
if the classification result indicates that the background of the picture to be processed is recognized, detecting a background target in the background to obtain a second detection result, wherein the second detection result indicates whether a background target exists in the background, and, when a background target exists, indicates the classification of each background target and the position of each background target in the picture to be processed;
correspondingly, processing the picture to be processed according to the first detection result and the classification result comprises:
processing the picture to be processed according to the first detection result, the classification result and the second detection result.
3. The picture processing method according to claim 2, characterized in that processing the picture to be processed according to the first detection result, the classification result and the second detection result comprises:
if the first detection result indicates that no foreground target exists in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target exists in the background:
obtaining a picture processing mode of the picture to be processed according to the background classification of the picture to be processed, and processing the picture to be processed according to the picture processing mode to obtain a processed picture.
4. The picture processing method according to claim 2, characterized in that processing the picture to be processed according to the first detection result, the classification result and the second detection result comprises:
if the first detection result indicates that a foreground target exists in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target exists in the background:
processing the picture to be processed according to the classification of each foreground target and the position of each foreground target indicated by the first detection result, and the background classification of the picture to be processed indicated by the classification result.
5. The picture processing method according to claim 4, characterized in that processing the picture to be processed according to the classification of each foreground target and the position of each foreground target indicated by the first detection result, and the background classification of the picture to be processed indicated by the classification result, comprises:
obtaining a picture processing mode of the picture to be processed according to the background classification of the picture to be processed, and processing the picture to be processed according to the picture processing mode to obtain a processed first picture;
obtaining a picture processing mode of each foreground target according to the classification of each foreground target indicated by the first detection result, and determining the picture region where each foreground target is located according to the position of each foreground target in the picture to be processed indicated by the first detection result;
processing the picture region where each foreground target is located according to the picture processing mode of that foreground target, to obtain a corresponding processed picture region;
replacing the picture region where each foreground target is located in the first picture with the corresponding processed picture region, to obtain a processed second picture, and taking the processed second picture as the final processed picture.
6. The picture processing method according to claim 2, characterized in that processing the picture to be processed according to the first detection result, the classification result and the second detection result comprises:
if the first detection result indicates that a foreground target exists in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that a background target exists in the background:
processing the picture to be processed according to the classification of each foreground target and the position of each foreground target indicated by the first detection result, the background classification of the picture to be processed indicated by the classification result, and the classification of each background target and the position of each background target in the picture to be processed indicated by the second detection result.
7. The picture processing method according to claim 6, characterized in that processing the picture to be processed according to the classification of each foreground target and the position of each foreground target indicated by the first detection result, the background classification of the picture to be processed indicated by the classification result, and the classification of each background target and the position of each background target in the picture to be processed indicated by the second detection result, comprises:
obtaining a picture processing mode of the picture to be processed according to the background classification of the picture to be processed, and processing the picture to be processed according to the picture processing mode to obtain a processed first picture;
obtaining a picture processing mode of each foreground target according to the classification of each foreground target indicated by the first detection result, and determining the picture region where each foreground target is located according to the position of each foreground target in the picture to be processed indicated by the first detection result;
processing the picture region where each foreground target is located according to the picture processing mode of that foreground target, to obtain a corresponding processed first picture region;
obtaining a picture processing mode of each background target according to the classification of each background target indicated by the second detection result, and determining the picture region where each background target is located according to the position of each background target in the picture to be processed indicated by the second detection result;
processing the picture region where each background target is located according to the picture processing mode of that background target, to obtain a corresponding processed second picture region;
replacing the picture region where each foreground target is located in the first picture with the corresponding processed first picture region, and replacing the picture region where each background target is located in the first picture with the corresponding processed second picture region, to obtain a processed second picture, and taking the processed second picture as the final processed picture.
8. A picture processing apparatus, characterized by comprising:
a first detection module, configured to detect a foreground target in a picture to be processed to obtain a first detection result, wherein the first detection result indicates whether a foreground target exists in the picture to be processed, and, when a foreground target exists, indicates the classification of each foreground target and the position of each foreground target in the picture to be processed;
a classification module, configured to perform scene classification on the picture to be processed to obtain a classification result, wherein the classification result indicates whether the background of the picture to be processed is recognized, and, when the background of the picture to be processed is recognized, indicates the background classification of the picture to be processed;
a processing module, configured to process the picture to be processed according to the first detection result and the classification result.
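The three modules of claim 8 map naturally onto three cooperating classes. The sketch below stubs the detector and classifier with fixed answers so that only the wiring is shown; every class and method name here is a hypothetical rendering of the claim, not code from the patent.

```python
class FirstDetectionModule:
    def detect(self, picture):
        # A real implementation would run an object detector; stubbed here.
        return {"targets": [("cat", (0, 0, 4, 4))]}

class ClassificationModule:
    def classify(self, picture):
        # A real implementation would run a scene classifier; stubbed here.
        return {"background_category": "grass"}

class ProcessingModule:
    def process(self, picture, detection, classification):
        # Placeholder: report what would be processed instead of editing pixels.
        return (classification["background_category"],
                [cat for cat, _ in detection["targets"]])

class PictureProcessingDevice:
    """Wires the three modules together, mirroring claim 8."""
    def __init__(self):
        self.detector = FirstDetectionModule()
        self.classifier = ClassificationModule()
        self.processor = ProcessingModule()

    def run(self, picture):
        detection = self.detector.detect(picture)
        classification = self.classifier.classify(picture)
        return self.processor.process(picture, detection, classification)
```

Keeping detection, classification, and processing in separate modules is what lets claims 2-7 vary the processing step without touching the detectors.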
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the picture processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the picture processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810629462.8A CN108898169B (en) | 2018-06-19 | 2018-06-19 | Picture processing method, picture processing device and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108898169A true CN108898169A (en) | 2018-11-27 |
CN108898169B CN108898169B (en) | 2021-06-01 |
Family
ID=64345527
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810629462.8A Active CN108898169B (en) | 2018-06-19 | 2018-06-19 | Picture processing method, picture processing device and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108898169B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109597912A (en) * | 2018-12-05 | 2019-04-09 | 上海碳蓝网络科技有限公司 | Method for handling picture |
CN113038232A (en) * | 2021-03-10 | 2021-06-25 | 深圳创维-Rgb电子有限公司 | Video playing method, device, equipment, server and storage medium |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100169784A1 (en) * | 2008-12-30 | 2010-07-01 | Apple Inc. | Slide Show Effects Style |
CN103440501A (en) * | 2013-09-01 | 2013-12-11 | 西安电子科技大学 | Scene classification method based on nonparametric space judgment hidden Dirichlet model |
CN104156915A (en) * | 2014-07-23 | 2014-11-19 | 小米科技有限责任公司 | Skin color adjusting method and device |
CN105138693A (en) * | 2015-09-18 | 2015-12-09 | 联动优势科技有限公司 | Method and device for having access to databases |
CN105788142A (en) * | 2016-05-11 | 2016-07-20 | 中国计量大学 | Video image processing-based fire detection system and detection method |
CN106101547A (en) * | 2016-07-06 | 2016-11-09 | 北京奇虎科技有限公司 | The processing method of a kind of view data, device and mobile terminal |
CN106296629A (en) * | 2015-05-18 | 2017-01-04 | 富士通株式会社 | Image processing apparatus and method |
CN106934401A (en) * | 2017-03-07 | 2017-07-07 | 上海师范大学 | A kind of image classification method based on improvement bag of words |
US20180012211A1 (en) * | 2016-07-05 | 2018-01-11 | Rahul Singhal | Device for communicating preferences to a computer system |
CN107592517A (en) * | 2017-09-21 | 2018-01-16 | 青岛海信电器股份有限公司 | A kind of method and device of colour of skin processing |
CN107845072A (en) * | 2017-10-13 | 2018-03-27 | 深圳市迅雷网络技术有限公司 | Image generating method, device, storage medium and terminal device |
CN107862658A (en) * | 2017-10-31 | 2018-03-30 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN107944499A (en) * | 2017-12-10 | 2018-04-20 | 上海童慧科技股份有限公司 | A kind of background detection method modeled at the same time for prospect background |
CN108055501A (en) * | 2017-11-22 | 2018-05-18 | 天津市亚安科技有限公司 | A kind of target detection and the video monitoring system and method for tracking |
Non-Patent Citations (2)
Title |
---|
ADITEE SHROTRE et al.: "Background recovery from multiple images", 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE) *
WU Liankun: "Real-time image stylization algorithm based on distributed TensorFlow and foreground/background separation", China Master's Theses Full-text Database, Information Science and Technology *
Also Published As
Publication number | Publication date |
---|---|
CN108898169B (en) | 2021-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108961157A (en) | Image processing method, picture processing unit and terminal device | |
CN110210571B (en) | Image recognition method and device, computer equipment and computer readable storage medium | |
CN110585726B (en) | User recall method, device, server and computer readable storage medium | |
CN109902659B (en) | Method and apparatus for processing human body image | |
CN109064390A (en) | A kind of image processing method, image processing apparatus and mobile terminal | |
CN109034102A (en) | Human face in-vivo detection method, device, equipment and storage medium | |
CN108898082A (en) | Image processing method, picture processing unit and terminal device | |
CN108898587A (en) | Image processing method, picture processing unit and terminal device | |
CN108765278A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN108961267A (en) | Image processing method, picture processing unit and terminal device | |
CN104200249B (en) | A kind of method of clothing automatic collocation, apparatus and system | |
CN106469302A (en) | A kind of face skin quality detection method based on artificial neural network | |
CN109086742A (en) | scene recognition method, scene recognition device and mobile terminal | |
CN107395958A (en) | Image processing method and device, electronic equipment and storage medium | |
CN110741387B (en) | Face recognition method and device, storage medium and electronic equipment | |
CN111506758A (en) | Method and device for determining article name, computer equipment and storage medium | |
CN113052923B (en) | Tone mapping method, tone mapping apparatus, electronic device, and storage medium | |
CN109151337A (en) | Recognition of face light compensation method, recognition of face light compensating apparatus and mobile terminal | |
CN108961183A (en) | Image processing method, terminal device and computer readable storage medium | |
CN112206541B (en) | Game plug-in identification method and device, storage medium and computer equipment | |
CN109522858A (en) | Plant disease detection method, device and terminal device | |
CN112819767A (en) | Image processing method, apparatus, device, storage medium, and program product | |
CN108898169A (en) | Image processing method, picture processing unit and terminal device | |
CN108932703A (en) | Image processing method, picture processing unit and terminal device | |
CN108805095A (en) | image processing method, device, mobile terminal and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||