CN108961157A - Image processing method, picture processing unit and terminal device - Google Patents
- Publication number
- CN108961157A CN108961157A CN201810631027.9A CN201810631027A CN108961157A CN 108961157 A CN108961157 A CN 108961157A CN 201810631027 A CN201810631027 A CN 201810631027A CN 108961157 A CN108961157 A CN 108961157A
- Authority
- CN
- China
- Prior art keywords
- picture
- processed
- testing result
- classification
- foreground target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/20—Linear translation of a whole image or part thereof, e.g. panning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present application is applicable to the field of image processing technology and provides an image processing method. The method includes: detecting foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target exists in the picture to be processed and, when a foreground target exists, indicates the category of each foreground target and the position of each foreground target in the picture to be processed; performing scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the background of the picture to be processed is identified and, when the background is identified, indicates the background category of the picture to be processed; and processing the picture to be processed according to the first detection result, the classification result, and a preset generative adversarial network. The present application can effectively improve the flexibility of picture style conversion.
Description
Technical field
The present application relates to the field of image processing technology, and in particular to an image processing method, a picture processing unit, a terminal device, and a computer-readable storage medium.
Background technique
At present, many users like to share the pictures they have taken on social platforms. To make their pictures more aesthetically pleasing, they usually process the pictures first.
However, existing image processing methods generally work as follows: a picture is obtained, a processing mode is selected for the picture, and the whole obtained picture is processed according to the selected processing mode. For example, if the selected processing mode is a style conversion mode, the style of the whole obtained picture is converted into the selected picture style.
Since existing image processing methods can only process a picture as a whole, their flexibility is poor, and it is difficult to meet users' demands for diversified processing of the same picture.
Summary of the invention
In view of this, embodiments of the present application provide an image processing method, a picture processing unit, a terminal device, and a computer-readable storage medium, so as to solve the problem in the prior art that a picture can only be processed as a whole, resulting in poor flexibility.
A first aspect of the embodiments of the present application provides an image processing method, comprising:
detecting foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target exists in the picture to be processed and, when a foreground target exists, indicates the category of each foreground target and the position of each foreground target in the picture to be processed;
performing scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the background of the picture to be processed is identified and, when the background is identified, indicates the background category of the picture to be processed; and
processing the picture to be processed according to the first detection result, the classification result, and a preset generative adversarial network.
A second aspect of the present application provides a picture processing unit, comprising:
a first detection module, configured to detect foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target exists in the picture to be processed and, when a foreground target exists, indicates the category of each foreground target and the position of each foreground target in the picture to be processed;
a classification module, configured to perform scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the background of the picture to be processed is identified and, when the background is identified, indicates the background category of the picture to be processed; and
a processing module, configured to process the picture to be processed according to the first detection result, the classification result, and a preset generative adversarial network.
A third aspect of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the image processing method when executing the computer program.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program, where the steps of the image processing method are implemented when the computer program is executed by one or more processors.
In the embodiments of the present application, foreground targets in a picture to be processed, such as faces or animals, are first detected to obtain a foreground target detection result. Secondly, scene classification is performed on the picture to be processed, that is, the kind of scene to which the current background of the picture belongs is identified, such as a beach scene, a forest scene, a snowfield scene, a grassland scene, a desert scene, or a blue-sky scene, to obtain a scene classification result. Finally, the picture to be processed is processed according to the foreground target detection result, the scene classification result, and a preset generative adversarial network, so that both the foreground targets and the background image of the picture to be processed can be processed in an all-around manner. For example, a foreground target such as a face can be converted into one style while a background such as a blue sky is converted into another style. This effectively improves the flexibility of picture style conversion, enhances user experience, and has strong usability and practicality.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a schematic flowchart of the image processing method provided in Embodiment 1 of the present application;
Fig. 2(a) is a schematic diagram of an image whose scene is a landscape, provided in Embodiment 1 of the present application;
Fig. 2(b) is a schematic diagram of an image whose scene is a beach, provided in Embodiment 1 of the present application;
Fig. 2(c) is a schematic diagram of an image whose scene is a blue sky, provided in Embodiment 1 of the present application;
Fig. 3 is a schematic flowchart of the image processing method provided in Embodiment 2 of the present application;
Fig. 4 is a schematic flowchart of the image processing method provided in Embodiment 3 of the present application;
Fig. 5 is a schematic diagram of the picture processing unit provided in Embodiment 4 of the present application;
Fig. 6 is a schematic diagram of the terminal device provided in Embodiment 5 of the present application.
Specific embodiment
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary details.
It should be understood that when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the terms used in this specification are only for the purpose of describing specific embodiments and are not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In specific implementations, the terminal device described in the embodiments of the present application includes, but is not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad). It should also be understood that, in certain embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
In the following discussion, a terminal device including a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user-interface devices such as a physical keyboard, a mouse, and/or a joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website creation application, a disc burning application, a spreadsheet application, a game application, a telephone application, a video-conference application, an e-mail application, an instant-messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web-browsing application, a digital music player application, and/or a video player application.
The various applications that can be executed on the terminal device may use at least one common physical user-interface device such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", and the like are only used to distinguish the descriptions and should not be understood as indicating or implying relative importance.
In order to illustrate the technical solutions described herein, specific embodiments are described below.
Embodiment 1:
Referring to Fig. 1, which is a schematic flowchart of the image processing method provided in Embodiment 1 of the present application, the method may include:
Step S101: detect foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target exists in the picture to be processed and, when a foreground target exists, indicates the category of each foreground target and the position of each foreground target in the picture to be processed.
In this embodiment, the picture to be processed may be a currently captured picture, a pre-stored picture, a picture obtained from a network, a picture extracted from a video, or the like. For example, it may be a picture taken by the camera of the terminal device; or a pre-stored picture sent by a WeChat friend; or a picture downloaded from a designated website; or a frame extracted from a currently playing video. Preferably, it may also be a frame of the preview screen after the terminal device starts the camera.
In this embodiment, the detection result includes, but is not limited to: information indicating whether a foreground target exists in the picture to be processed, and, when a foreground target is included, information indicating the category and position of each foreground target contained in the picture to be processed. The foreground target may refer to a target with dynamic characteristics in the picture to be processed, such as a person or an animal; the foreground target may also refer to scenery that is relatively close to the viewer and has static characteristics, such as flowers or food. Further, in order to recognize the positions of foreground targets more accurately and to distinguish the recognized foreground targets, this embodiment may also frame the foreground targets with different selection frames after detection, for example, framing animals with a box and framing faces with a circle.
Preferably, this embodiment may use a trained scene detection model to detect the foreground targets in the picture to be processed. Illustratively, the scene detection model may be a model with a foreground target detection function that includes MobileNet and Single Shot Multibox Detection (SSD), where MobileNet is a lightweight deep neural network proposed for embedded devices such as mobile phones. Of course, other scene detection methods may also be used. For example, a target (e.g., face) recognition algorithm may be used to detect whether a preset target (e.g., a face) exists in the picture to be processed, and after the preset target is detected, the position of the preset target in the picture to be processed is determined by a target localization algorithm or a target tracking algorithm.
It should be noted that other schemes for detecting foreground targets that can be readily conceived by those skilled in the art within the technical scope disclosed by the present invention shall also fall within the protection scope of the present invention, and will not be repeated here.
Taking the use of a trained scene detection model to detect the foreground targets in the picture to be processed as an example, the specific training process of the scene detection model is as follows:
sample pictures and the detection results corresponding to the sample pictures are obtained in advance, where the detection result corresponding to a sample picture includes the category and position of each foreground target in the sample picture;
the foreground targets in the sample pictures are detected using an initial scene detection model, and the detection accuracy of the initial scene detection model is calculated according to the detection results corresponding to the sample pictures obtained in advance;
if the detection accuracy is less than a preset first detection threshold, the parameters of the initial scene detection model are adjusted, and the sample pictures are then detected by the scene detection model with the adjusted parameters, until the detection accuracy of the adjusted scene detection model is greater than or equal to the first detection threshold, at which point the scene detection model is taken as the trained scene detection model. The methods for adjusting the parameters include, but are not limited to, the stochastic gradient descent algorithm, the momentum update algorithm, and the like.
Step S102: perform scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the background of the picture to be processed is identified and, when the background is identified, indicates the background category of the picture to be processed.
In this embodiment, performing scene classification on the picture to be processed means identifying the kind of scene to which the current background of the picture belongs, such as a landscape scene, a beach scene, a blue-sky scene, a forest scene, a snowfield scene, a grassland scene, or a desert scene. Fig. 2(a), Fig. 2(b), and Fig. 2(c) respectively show schematic diagrams, provided in the embodiments of the present application, of images whose scenes are a landscape, a beach, and a blue sky.
Preferably, a trained scene classification model may be used to perform scene classification on the background of the picture to be processed. Illustratively, the scene classification model may be a model with a background detection function, such as MobileNet. Of course, other scene classification methods may also be used. For example, after the foreground targets in the picture to be processed are detected by a foreground detection model, the remaining part of the picture to be processed is taken as the background, and the category of the remaining part is identified by an image recognition algorithm.
It should be noted that other schemes for detecting the background that can be readily conceived by those skilled in the art within the technical scope disclosed by the present invention shall also fall within the protection scope of the present invention, and will not be repeated here.
Taking the use of a trained scene classification model to classify the background in the picture to be processed as an example, the specific training process of the scene classification model is as follows:
each sample picture and the classification result corresponding to each sample picture are obtained in advance; for example, sample picture 1 is a grassland scene, sample picture 2 is a snowfield scene, and sample picture 3 is a beach scene;
scene classification is performed on each sample picture using an initial scene classification model, and the classification accuracy of the initial scene classification model is calculated according to the classification results of the sample pictures obtained in advance, that is, whether sample picture 1 is identified as a grassland scene, sample picture 2 as a snowfield scene, sample picture 3 as a beach scene, and sample picture 4 as a desert scene;
if the classification accuracy is less than a preset classification threshold (for example, 75%, i.e., fewer than 3 of the 4 sample pictures are correctly identified), the parameters of the initial scene classification model are adjusted, and the sample pictures are then classified by the scene classification model with the adjusted parameters, until the classification accuracy of the adjusted scene classification model is greater than or equal to the classification threshold, at which point the scene classification model is taken as the trained scene classification model. The methods for adjusting the parameters include, but are not limited to, the stochastic gradient descent algorithm, the momentum update algorithm, and the like.
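The 75% stopping rule in the example above can be written out directly. This is an illustrative sketch only; the function name and sample categories are invented, not taken from the patent.

```python
# Stopping rule from the example: with a 75% threshold and four sample
# pictures, fewer than 3 correct classifications means the classifier's
# parameters must be adjusted again.
def needs_retraining(predictions, ground_truth, threshold=0.75):
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth) < threshold

# Only 2 of 4 sample pictures classified correctly (50% < 75%)
flag = needs_retraining(["grassland", "snowfield", "desert", "beach"],
                        ["grassland", "snowfield", "beach", "desert"])
```

With exactly 3 of 4 correct (75%), the accuracy equals the threshold and training stops, matching the "greater than or equal to" condition in the text.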
Step S103: process the picture to be processed according to the first detection result, the classification result, and a preset generative adversarial network.
The preset generative adversarial network includes a generation network and a discrimination network. The generation network G is a network that generates pictures: it receives a random noise z and generates a picture from this noise, denoted G(z). The discrimination network D is used to judge whether a picture is "real". Suppose its input parameter is x, where x represents a picture; then the output D(x) represents the probability that x is a real picture: if the output is 1, the probability that x is a real picture is 100%, and if the output is 0, x cannot be a real picture. During training, the goal of the generation network G is to generate pictures as realistic as possible so as to deceive the discrimination network D, while the goal of D is to distinguish the pictures generated by G from real pictures as well as possible. In this way, G and D constitute a dynamic "game process". In the optimal state, G can generate pictures G(z) realistic enough to "pass for genuine"; for D, it is then difficult to judge whether a picture generated by G is real, so D(G(z)) = 0.5.
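The adversarial setup described above can be illustrated numerically. This is not a trainable network: the toy D and G below are assumptions chosen only to show that an untrained generator is rejected (D near 0) while at the optimum D(G(z)) equals 0.5.

```python
import math

# Toy discriminator: sigmoid of a "realness" score assigned to the input,
# so the output is a probability in (0, 1) that the input is a real picture.
def discriminator(x, realness):
    return 1 / (1 + math.exp(-realness(x)))

# Toy generator: identity mapping on the noise z
def generator(z):
    return z

# Early in training: generated pictures look fake, so D(G(z)) is near 0
early = discriminator(generator(0.0), lambda x: -5.0)
# At equilibrium: generated pictures are indistinguishable, D(G(z)) = 0.5
optimal = discriminator(generator(0.0), lambda x: 0.0)
```

A realness score of 0 (the discriminator has no evidence either way) maps through the sigmoid to exactly 0.5, which is the equilibrium value stated in the text.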
In this embodiment, one or more generative adversarial networks with designated picture processing modes are trained in advance, so that the corresponding processed picture is obtained from the output of a generative adversarial network. For example, suppose a pre-trained generative adversarial network has a picture processing mode that converts a picture into an oil-painting style; then, after the picture to be processed is input into this generative adversarial network, the network converts it and outputs a picture processed into the oil-painting style. Of course, in this embodiment, the picture to be processed is processed according to the first detection result, the classification result, and the preset generative adversarial network, rather than directly converting the style of the whole picture to be processed into the oil-painting style.
Specifically, the processing may include the following cases. If the first detection result indicates that no foreground target exists in the picture to be processed and the classification result indicates that the background of the picture to be processed is identified, then:
according to the background category of the picture to be processed, a first generative adversarial network is selected from the preset generative adversarial networks, and the picture to be processed is processed according to the first generative adversarial network to obtain a processed picture.
Alternatively, if the first detection result indicates that a foreground target exists in the picture to be processed and the classification result indicates that the background of the picture to be processed is not identified, then:
according to the category of each foreground target indicated by the first detection result, a second generative adversarial network is selected from the preset generative adversarial networks, and the picture to be processed is processed according to the second generative adversarial network and the position of each foreground target, to obtain a processed picture. Optionally, when there are multiple foreground targets, different second generative adversarial networks may be selected for different foreground targets; of course, the same second generative adversarial network may also be selected for different foreground targets.
Alternatively, if the first detection result indicates that a foreground target exists in the picture to be processed and the classification result indicates that the background of the picture to be processed is identified, then:
according to the category of each foreground target indicated by the first detection result, a second generative adversarial network is selected from the preset generative adversarial networks, and the picture to be processed is processed according to the second generative adversarial network, the position of each foreground target, and the background category of the picture to be processed indicated by the classification result, to obtain a processed picture.
The processing of the picture to be processed includes, but is not limited to, style conversion of the foreground targets and/or the background, and adjustment of image parameters such as saturation, brightness, and/or contrast.
Through the embodiments of the present application, each foreground target and/or the background image in the picture to be processed can be processed flexibly and in a targeted manner, which enriches the processing modes and thus enriches the overall processing effect of the picture.
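The three-way case analysis above amounts to selecting networks by category. The following sketch is hypothetical: the dictionary shapes and GAN identifiers are invented for illustration, with background categories mapping to a "first" GAN and foreground categories to "second" GANs.

```python
# Select GANs according to the detection and classification results:
# a first GAN for an identified background, and a second GAN per
# foreground-target category (different categories may map to different
# second GANs, or to the same one).
def select_gans(detection, classification, first_gans, second_gans):
    chosen = []
    if classification.get("background") is not None:
        chosen.append(first_gans[classification["background"]])
    if detection.get("has_foreground"):
        for target in detection["targets"]:
            chosen.append(second_gans[target["category"]])
    return chosen

# Third case: a foreground target exists AND the background is identified
gans = select_gans(
    {"has_foreground": True, "targets": [{"category": "face"}]},
    {"background": "blue_sky"},
    {"blue_sky": "sky_style_gan"},
    {"face": "portrait_style_gan"},
)
```

The first two cases fall out of the same function: with no foreground targets only the first GAN is chosen, and with no identified background only second GANs are chosen.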
Embodiment 2:
Referring to Fig. 3, which is a schematic flowchart of the image processing method provided in Embodiment 2 of the present application, the method may include:
Step S301: detect foreground targets in a picture to be processed to obtain a first detection result, where the first detection result indicates whether a foreground target exists in the picture to be processed and, when a foreground target exists, indicates the category of each foreground target and the position of each foreground target in the picture to be processed.
Step S302: perform scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the background of the picture to be processed is identified and, when the background is identified, indicates the background category of the picture to be processed.
The specific implementation of steps S301 and S302 may refer to steps S101 and S102 above, and will not be repeated here.
Step S303: receive a picture processing instruction input by the user, and select the corresponding generative adversarial network from the preset generative adversarial networks according to the picture processing instruction.
Optionally, picture processing mode options are displayed in the display interface of the picture to be processed, and a picture processing instruction is issued when the user clicks the corresponding picture processing mode option. For example, suppose a picture processing mode option is "oil-painting style"; after the user clicks this "oil-painting style" option, the terminal device receives the picture processing instruction input by the user and, according to the picture processing instruction, can select from the preset generative adversarial networks the generative adversarial network that converts the picture style into the oil-painting style.
Step S304: process the picture to be processed according to the first detection result, the classification result, and the selected generative adversarial network, to obtain a processed picture.
Specifically, if the first detection result indicates that a foreground target exists in the picture to be processed and the classification result indicates that the background of the picture to be processed is identified, then:
the picture region of each foreground target in the picture to be processed is determined according to the position of each foreground target indicated by the first detection result;
the picture region of each foreground target in the picture to be processed is processed according to the selected generative adversarial network, to obtain the processed picture regions; and
the picture region where each foreground target is located in the picture to be processed is replaced with the corresponding processed picture region, to obtain the processed picture.
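The crop-process-paste steps above can be sketched as follows. The picture is modeled as a 2D grid and the selected GAN is stubbed as a simple `transform`; both are illustrative assumptions, not the patent's implementation.

```python
# Apply the selected GAN (stubbed as `transform`) only to the picture
# regions where foreground targets were located, then paste the processed
# regions back into a copy of the picture.
def process_regions(picture, boxes, transform):
    out = [row[:] for row in picture]       # copy, leaving the input intact
    for x, y, w, h in boxes:
        for j in range(y, y + h):           # rows of the target region
            for i in range(x, x + w):       # columns of the target region
                out[j][i] = transform(out[j][i])
    return out

pic = [[0] * 4 for _ in range((4))]         # 4x4 "picture" of zeros
styled = process_regions(pic, [(1, 1, 2, 2)], lambda v: v + 9)
```

Only the framed region changes; the rest of the picture is carried over unmodified, which is exactly the region-replacement behavior described in step S304.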
Specifically, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that the background of the to-be-processed picture has been identified, and the second detection result indicates that a background target exists in the background, then:
process the to-be-processed picture according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category of the to-be-processed picture indicated by the classification result, the position of each background target in the to-be-processed picture, and the selected generative adversarial network. Preferably: select a first generative adversarial network from the preset generative adversarial networks according to the background category of the to-be-processed picture, and determine the picture region of each background target in the to-be-processed picture according to the position of each background target indicated by the second detection result; process the picture region of each background target in the to-be-processed picture with the first generative adversarial network, obtaining processed first picture regions; select a second generative adversarial network from the preset generative adversarial networks according to the category of each foreground target indicated by the first detection result, and determine the picture region of each foreground target in the to-be-processed picture according to the position of each foreground target indicated by the first detection result; process the picture region of each foreground target in the to-be-processed picture with the second generative adversarial network, obtaining processed second picture regions; replace the picture region where each foreground target is located in the to-be-processed picture with the corresponding processed second picture region, and replace the picture region where each background target is located in the to-be-processed picture with the corresponding processed first picture region, obtaining the processed picture.
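The combined case above pairs each region with the network chosen for it: the first GAN is keyed by the background category, the second by each foreground target's category. A minimal planning sketch, with purely illustrative lookup tables and class names:

```python
# Hypothetical selection tables; real deployments would map categories
# to actual pre-trained generative adversarial networks.
FIRST_GANS = {"sky": "sky-style-gan"}       # keyed by background category
SECOND_GANS = {"person": "portrait-gan"}    # keyed by foreground category

def plan_processing(background_class, background_boxes, foreground):
    """Return the ordered (gan, box) replacement operations.

    `background_boxes` holds the background-target regions from the second
    detection result; `foreground` is a list of (category, box) pairs from
    the first detection result. Background regions are planned first, then
    foreground regions, matching the replacement order in the text.
    """
    ops = [(FIRST_GANS[background_class], box) for box in background_boxes]
    ops += [(SECOND_GANS[cls], box) for cls, box in foreground]
    return ops
```

Each planned operation would then be executed with the crop-process-paste cycle of the earlier steps.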
The first generative adversarial network has one or more designated picture processing modes: modes that apply style conversion and/or adjust picture parameters such as saturation, brightness and/or contrast to the background and/or the background targets.
The second generative adversarial network has one or more designated picture processing modes: modes that apply style conversion and/or adjust picture parameters such as saturation, brightness and/or contrast to the foreground targets.
Embodiment three:
Referring to Fig. 4, which is a schematic flowchart of the picture processing method provided by embodiment three of the present application, the method may include:
Step S401: detect foreground targets in the to-be-processed picture, obtaining a first detection result. The first detection result indicates whether a foreground target exists in the to-be-processed picture and, when one exists, indicates the category of each foreground target and the position of each foreground target in the to-be-processed picture.
Step S402: perform scene classification on the to-be-processed picture, obtaining a classification result. The classification result indicates whether the background of the to-be-processed picture has been identified and, when it has, indicates the background category of the to-be-processed picture.
For the specific implementation of steps S401 and S402, refer to steps S101 and S102 above; details are not repeated here.
Step S403: if the classification result indicates that the background of the to-be-processed picture has been identified, detect background targets in that background, obtaining a second detection result. The second detection result indicates whether a background target exists in the background and, when one exists, indicates the position of each background target in the to-be-processed picture.
As a preferred embodiment of the present application, to further improve the processing effect, after identifying the background of the to-be-processed picture, this embodiment also detects the background targets in that background so that they can be processed subsequently. The detection result includes, but is not limited to: information indicating whether a background target exists in the background and, when one does, the category and position of each background target. A background target is any of the targets that make up the background, such as the blue sky or the white clouds in a sky background, or the grass and flowers in a meadow. Further, to locate background targets more accurately and keep the detected targets visually distinct, after detection this embodiment may also frame each background target with a different selection frame, for example a dashed box around a white cloud, an ellipse around a flower, and so on.
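Choosing a distinct selection frame per target category can be reduced to a small lookup with a fallback, sketched below. The category-to-frame table is an assumption for illustration; the patent only gives the dashed-box and ellipse examples.

```python
# Hypothetical mapping from background-target category to the frame style
# used to mark it in the picture.
FRAME_STYLES = {
    "cloud": "dashed-box",
    "flower": "ellipse",
}

def frame_for(target_class, default="solid-box"):
    """Pick a selection-frame style for a detected background target."""
    return FRAME_STYLES.get(target_class, default)
```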
Preferably, to improve detection efficiency, this embodiment may use a trained shallow convolutional neural network model to detect the background targets, where a shallow convolutional neural network model is a neural network model whose number of convolutional layers is less than a predetermined number (for example, 8). Illustratively, the shallow model may be AlexNet or another model with background-target detection capability; VGGNet, GoogLeNet, ResNet and similar models may of course also be used.
It should be noted that other detection schemes that those skilled in the art can readily conceive within the technical scope disclosed by the present invention also fall within the protection scope of the present invention and are not described here.
Taking the trained shallow convolutional neural network model used to detect background targets as an example, its training process is as follows:
obtain in advance sample pictures and the detection results of the backgrounds in those sample pictures, where each detection result includes the category and position of each background target;
detect the background targets in the above backgrounds with the initial shallow convolutional neural network model, and compute the detection accuracy of the initial model against the pre-obtained detection results;
if the detection accuracy is below a preset second detection threshold, adjust the parameters of the initial shallow convolutional neural network model and detect the backgrounds of the sample pictures again with the adjusted model, repeating until the detection accuracy of the adjusted model is greater than or equal to the second detection threshold; take that model as the trained shallow convolutional neural network model.
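The evaluate-adjust loop above can be sketched with a deliberately tiny stand-in model. This is a hedged illustration only: the one-parameter "detector" and its increment-by-one adjustment replace a shallow convolutional network and backpropagation, while the threshold test mirrors the second detection threshold in the text.

```python
def accuracy(model, samples):
    """Fraction of (input, label) samples the model predicts correctly."""
    hits = sum(1 for x, y in samples if model(x) == y)
    return hits / len(samples)

def train(samples, threshold=0.9, max_rounds=100):
    """Adjust a placeholder parameter until accuracy meets the threshold."""
    bias = 0
    model = None
    for _ in range(max_rounds):
        # Rebuild the "detector" with the current parameter.
        model = (lambda b: (lambda x: x + b))(bias)
        if accuracy(model, samples) >= threshold:
            break  # accuracy >= second detection threshold: training done
        bias += 1  # stand-in for a gradient-style parameter update
    return model
```

The control flow, not the arithmetic, is the point: evaluate, compare against the threshold, adjust, and repeat.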
Step S404: process the to-be-processed picture according to the first detection result, the classification result, the second detection result and the preset generative adversarial networks.
Specifically, if the first detection result indicates that no foreground target exists in the to-be-processed picture, the classification result indicates that the background of the to-be-processed picture has been identified, and the second detection result indicates that no background target exists in the background, then:
select a first generative adversarial network from the preset generative adversarial networks according to the background category of the to-be-processed picture, and process the to-be-processed picture with the first generative adversarial network, obtaining the processed picture.
Alternatively, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that the background of the to-be-processed picture has been identified, and the second detection result indicates that no background target exists in the background, then:
select a second generative adversarial network from the preset generative adversarial networks according to the category of each foreground target indicated by the first detection result, and determine the picture region of each foreground target in the to-be-processed picture according to the position of each foreground target indicated by the first detection result;
process the picture region of each foreground target in the to-be-processed picture with the second generative adversarial network, obtaining processed picture regions;
replace the picture region where each foreground target is located in the to-be-processed picture with the corresponding processed picture region, obtaining the processed picture.
Alternatively, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that the background of the to-be-processed picture has been identified, and the second detection result indicates that a background target exists in the background, then:
process the to-be-processed picture according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category of the to-be-processed picture indicated by the classification result, the position of each background target in the to-be-processed picture, and the preset generative adversarial networks.
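The three branches of step S404 amount to a dispatch on the three detection/classification outcomes. A compact sketch, where the returned branch labels are illustrative names rather than terms from the patent:

```python
def dispatch(has_foreground, background_identified, has_background_targets):
    """Choose the step-S404 branch from the three detection outcomes."""
    if background_identified and not has_foreground and not has_background_targets:
        # Branch 1: apply the first GAN to the whole picture.
        return "first-gan-on-whole-picture"
    if background_identified and has_foreground and not has_background_targets:
        # Branch 2: apply the second GAN to each foreground region.
        return "second-gan-on-foreground-regions"
    if background_identified and has_foreground and has_background_targets:
        # Branch 3: first GAN on background-target regions, second GAN
        # on foreground regions, then replace both sets of regions.
        return "first-gan-then-second-gan"
    return "no-processing"
```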
Optionally, the processing of the to-be-processed picture according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category indicated by the classification result, the position of each background target in the to-be-processed picture, and the preset generative adversarial networks comprises:
selecting a first generative adversarial network from the preset generative adversarial networks according to the background category of the to-be-processed picture, and determining the picture region of each background target in the to-be-processed picture according to the position of each background target indicated by the second detection result;
processing the picture region of each background target in the to-be-processed picture with the first generative adversarial network, obtaining processed first picture regions;
selecting a second generative adversarial network from the preset generative adversarial networks according to the category of each foreground target indicated by the first detection result, and determining the picture region of each foreground target in the to-be-processed picture according to the position of each foreground target indicated by the first detection result;
processing the picture region of each foreground target in the to-be-processed picture with the second generative adversarial network, obtaining processed second picture regions;
replacing the picture region where each foreground target is located in the to-be-processed picture with the corresponding processed second picture region, and replacing the picture region where each background target is located with the corresponding processed first picture region, obtaining the processed picture.
The first generative adversarial network has one or more designated picture processing modes: modes that apply style conversion and/or adjust picture parameters such as saturation, brightness and/or contrast to the background and/or the background targets.
The second generative adversarial network has one or more designated picture processing modes: modes that apply style conversion and/or adjust picture parameters such as saturation, brightness and/or contrast to the foreground targets.
Through the embodiments of the present application, not only the foreground targets and the background image in the to-be-processed picture, but also the background targets within the background image, can be processed, so that the picture is processed at a finer granularity, the overall processing effect is improved more effectively, and user experience is enhanced.
Embodiment four:
Fig. 5 is a schematic diagram of the picture processing apparatus provided by the fourth embodiment of the present application; for ease of description, only the parts relevant to the embodiments of the present application are shown.
The picture processing apparatus 5 may be a software unit, a hardware unit, or a combined software-and-hardware unit built into a terminal device such as a mobile phone, tablet computer or notebook, or may be integrated into such a terminal device as an independent component.
The picture processing apparatus 5 comprises:
a first detection module 51, configured to detect foreground targets in a to-be-processed picture, obtaining a first detection result that indicates whether a foreground target exists in the to-be-processed picture and, when one exists, indicates the category of each foreground target and the position of each foreground target in the to-be-processed picture;
a classification module 52, configured to perform scene classification on the to-be-processed picture, obtaining a classification result that indicates whether the background of the to-be-processed picture has been identified and, when it has, indicates the background category of the to-be-processed picture;
a processing module 53, configured to process the to-be-processed picture according to the first detection result, the classification result and the preset generative adversarial networks.
Optionally, the picture processing apparatus 5 further comprises:
a picture processing instruction receiving module, configured to receive a picture processing instruction input by a user and, according to the picture processing instruction, select a corresponding generative adversarial network from the preset generative adversarial networks.
Correspondingly, the processing module 53 is specifically configured to process the to-be-processed picture according to the first detection result, the classification result and the selected generative adversarial network, obtaining a processed picture.
Optionally, the picture processing apparatus 5 further comprises:
a second detection module, configured to detect, when the classification result indicates that the background of the to-be-processed picture has been identified, the background targets in the background, obtaining a second detection result that indicates whether a background target exists in the background and, when one exists, indicates the category of each background target and the position of each background target in the to-be-processed picture.
Correspondingly, the processing module 53 is specifically configured to process the to-be-processed picture according to the first detection result, the classification result, the second detection result and the preset generative adversarial networks.
Optionally, the processing module 53 is specifically configured so that, if the first detection result indicates that no foreground target exists in the to-be-processed picture, the classification result indicates that the background of the to-be-processed picture has been identified, and the second detection result indicates that no background target exists in the background, then:
a first generative adversarial network is selected from the preset generative adversarial networks according to the background category of the to-be-processed picture, and the to-be-processed picture is processed with the first generative adversarial network.
Optionally, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that the background of the to-be-processed picture has been identified, and the second detection result indicates that no background target exists in the background, the processing module 53 comprises:
a second generative adversarial network selection unit, configured to select a second generative adversarial network from the preset generative adversarial networks according to the category of each foreground target indicated by the first detection result, and to determine the picture region of each foreground target in the to-be-processed picture according to the position of each foreground target indicated by the first detection result;
a processed picture region obtaining unit, configured to process the picture region of each foreground target in the to-be-processed picture with the second generative adversarial network, obtaining processed picture regions;
a first processed picture obtaining unit, configured to replace the picture region where each foreground target is located in the to-be-processed picture with the corresponding processed picture region, obtaining the processed picture.
Optionally, the processing module 53 is specifically configured so that, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that the background of the to-be-processed picture has been identified, and the second detection result indicates that a background target exists in the background, then:
the to-be-processed picture is processed according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category of the to-be-processed picture indicated by the classification result, the position of each background target in the to-be-processed picture, and the preset generative adversarial networks.
Optionally, the processing module 53 comprises:
a background target region determination unit, configured to select a first generative adversarial network from the preset generative adversarial networks according to the background category of the to-be-processed picture, and to determine the picture region of each background target in the to-be-processed picture according to the position of each background target indicated by the second detection result;
a first picture region obtaining unit, configured to process the picture region of each background target in the to-be-processed picture with the first generative adversarial network, obtaining processed first picture regions;
a foreground target region determination unit, configured to select a second generative adversarial network from the preset generative adversarial networks according to the category of each foreground target indicated by the first detection result, and to determine the picture region of each foreground target in the to-be-processed picture according to the position of each foreground target indicated by the first detection result;
a second picture region obtaining unit, configured to process the picture region of each foreground target in the to-be-processed picture with the second generative adversarial network, obtaining processed second picture regions;
a blended picture processing unit, configured to replace the picture region where each foreground target is located in the to-be-processed picture with the corresponding processed second picture region, and to replace the picture region where each background target is located with the corresponding processed first picture region, obtaining the processed picture.
It will be clear to those skilled in the art that, for convenience and brevity of description, the division into the functional units and modules above is only an example; in practical applications, the functions may be assigned to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from each other and are not intended to limit the protection scope of the present application. The information exchange and execution processes between the above apparatus/units are based on the same conception as the method embodiments of the present application; for their specific functions and the technical effects they bring, refer to the method embodiment section, which is not repeated here.
Fig. 6 is a schematic diagram of the terminal device provided by the fifth embodiment of the present application. As shown in Fig. 6, the terminal device 6 of this embodiment comprises: a processor 60, a memory 61, and a computer program 62, such as a picture processing program, stored in the memory 61 and executable on the processor 60. When the processor 60 executes the computer program 62, the steps in each of the above picture processing method embodiments are implemented, for example steps S101 to S103 shown in Fig. 1; alternatively, the functions of each module/unit in each of the above apparatus embodiments are implemented, for example the functions of modules 51 to 53 shown in Fig. 5.
The terminal device 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that Fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components, and may, for example, also include input/output devices, network access devices, a bus, and so on.
The processor 60 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as its hard disk or internal memory. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the terminal device 6. Further, the memory 61 may include both the internal storage unit of the terminal device 6 and an external storage device. The memory 61 is used to store the computer program and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative; the division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in each embodiment of the present application may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be completed by instructing relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor it can implement the steps of each of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, etc. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
Specifically, the embodiments of the present application further provide a computer-readable storage medium, which may be the computer-readable storage medium contained in the memory of the above embodiments, or may be a stand-alone computer-readable storage medium not assembled into a terminal device. The computer-readable storage medium stores one or more computer programs which, when executed by one or more processors, implement the following steps of the picture processing method:
detecting foreground targets in a to-be-processed picture, obtaining a first detection result that indicates whether a foreground target exists in the to-be-processed picture and, when one exists, indicates the category of each foreground target and the position of each foreground target in the to-be-processed picture;
performing scene classification on the to-be-processed picture, obtaining a classification result that indicates whether the background of the to-be-processed picture has been identified and, when it has, indicates the background category of the to-be-processed picture;
processing the to-be-processed picture according to the first detection result, the classification result and the preset generative adversarial networks.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.
Claims (10)
1. A picture processing method, characterized by comprising:
detecting foreground targets in a to-be-processed picture, obtaining a first detection result that indicates whether a foreground target exists in the to-be-processed picture and, when one exists, indicates the category of each foreground target and the position of each foreground target in the to-be-processed picture;
performing scene classification on the to-be-processed picture, obtaining a classification result that indicates whether the background of the to-be-processed picture has been identified and, when it has, indicates the background category of the to-be-processed picture;
processing the to-be-processed picture according to the first detection result, the classification result and preset generative adversarial networks.
2. The picture processing method according to claim 1, characterized in that, before the processing of the to-be-processed picture according to the first detection result, the classification result and the preset generative adversarial networks, the method comprises:
receiving a picture processing instruction input by a user, and selecting a corresponding generative adversarial network from the preset generative adversarial networks according to the picture processing instruction;
correspondingly, the processing of the to-be-processed picture according to the first detection result, the classification result and the preset generative adversarial networks is specifically:
processing the to-be-processed picture according to the first detection result, the classification result and the selected generative adversarial network, obtaining a processed picture.
3. The picture processing method according to claim 1, characterized in that the picture processing method further comprises:
if the classification result indicates that the background of the to-be-processed picture has been identified, detecting background targets in the background, obtaining a second detection result that indicates whether a background target exists in the background and, when one exists, indicates the position of each background target in the to-be-processed picture;
correspondingly, the processing of the to-be-processed picture according to the first detection result, the classification result and the preset generative adversarial networks comprises:
processing the to-be-processed picture according to the first detection result, the classification result, the second detection result and the preset generative adversarial networks.
4. The picture processing method according to claim 3, wherein the processing the picture to be processed according to the first detection result, the classification result, the second detection result and the preset generative adversarial network comprises:
if the first detection result indicates that no foreground target exists in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target exists in the background, then:
selecting a first generative adversarial network from the preset generative adversarial networks according to the background category of the picture to be processed, and processing the picture to be processed according to the first generative adversarial network.
5. The picture processing method according to claim 3, wherein the processing the picture to be processed according to the first detection result, the classification result, the second detection result and the preset generative adversarial network comprises:
if the first detection result indicates that foreground targets exist in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target exists in the background, then:
selecting a second generative adversarial network from the preset generative adversarial networks according to the category of each foreground target indicated by the first detection result, and determining the picture region of each foreground target in the picture to be processed according to the position of each foreground target indicated by the first detection result;
processing the picture region of each foreground target in the picture to be processed according to the second generative adversarial network to obtain processed picture regions;
replacing the picture region of each foreground target in the picture to be processed with the corresponding processed picture region to obtain a processed picture.
6. The picture processing method according to claim 3, wherein the processing the picture to be processed according to the first detection result, the classification result, the second detection result and the preset generative adversarial network comprises:
if the first detection result indicates that foreground targets exist in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that background targets exist in the background, then:
processing the picture to be processed according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category of the picture to be processed indicated by the classification result, the position of each background target in the picture to be processed, and the preset generative adversarial network.
7. The picture processing method according to claim 6, wherein the processing the picture to be processed according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category of the picture to be processed indicated by the classification result, the position of each background target in the picture to be processed, and the preset generative adversarial network comprises:
selecting a first generative adversarial network from the preset generative adversarial networks according to the background category of the picture to be processed, and determining the picture region of each background target in the picture to be processed according to the position of each background target indicated by the second detection result;
processing the picture region of each background target in the picture to be processed according to the first generative adversarial network to obtain processed first picture regions;
selecting a second generative adversarial network from the preset generative adversarial networks according to the category of each foreground target indicated by the first detection result, and determining the picture region of each foreground target in the picture to be processed according to the position of each foreground target indicated by the first detection result;
processing the picture region of each foreground target in the picture to be processed according to the second generative adversarial network to obtain processed second picture regions;
replacing the picture region of each foreground target in the picture to be processed with the corresponding processed second picture region, and replacing the picture region of each background target in the picture to be processed with the corresponding processed first picture region, to obtain a processed picture.
8. A picture processing apparatus, comprising:
a first detection module, configured to detect foreground targets in a picture to be processed to obtain a first detection result, wherein the first detection result indicates whether a foreground target exists in the picture to be processed and, when foreground targets exist, the category of each foreground target and the position of each foreground target in the picture to be processed;
a classification module, configured to perform scene classification on the picture to be processed to obtain a classification result, wherein the classification result indicates whether the background of the picture to be processed is recognized and, when the background is recognized, the background category of the picture to be processed;
a processing module, configured to process the picture to be processed according to the first detection result, the classification result and a preset generative adversarial network.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the picture processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the picture processing method according to any one of claims 1 to 7.
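The flow of claims 1-2 (detect foreground targets, classify the scene, then select and apply a preset GAN keyed by what was recognized) can be sketched as follows. This is an illustrative sketch only: the detector, the classifier, and the selection key are hypothetical stand-ins, since the claims prescribe no concrete models or APIs.

```python
# Illustrative sketch of the claim-1 pipeline. The detector, classifier,
# and GAN selection below are stand-ins; the patent names no concrete models.

def detect_foreground(picture):
    """First detection result: (foreground present?, [(category, bbox), ...])."""
    return True, [("person", (10, 10, 50, 80))]  # pretend detector output

def classify_scene(picture):
    """Classification result: (background recognized?, background category)."""
    return True, "beach"  # pretend classifier output

def select_gan(foreground_categories, background_category):
    """Pick a preset GAN keyed by what was recognized (claims 1-2)."""
    return ("gan_for", tuple(foreground_categories), background_category)

def process_picture(picture):
    has_fg, targets = detect_foreground(picture)
    recognized, bg_category = classify_scene(picture)
    gan = select_gan([c for c, _ in targets] if has_fg else [],
                     bg_category if recognized else None)
    return gan  # a real implementation would now run the selected GAN

print(process_picture(None))
```

Claim 2 adds only that the user's picture processing instruction narrows which preset GAN `select_gan` may return.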
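Claims 5 and 7 replace each detected target's picture region with a GAN-processed version of that region. Below is a minimal pure-Python sketch of that region-replacement step, with toy lambdas standing in for the first GAN (selected by background category) and the second GANs (selected by foreground class); none of these names or values come from the patent.

```python
# Toy stand-ins for the "first" and "second" GANs of claims 4-7; a list of
# lists stands in for an image array so the sketch needs no dependencies.
FIRST_GANS = {"beach": lambda v: v + 10.0}    # selected by background category
SECOND_GANS = {"person": lambda v: v * 2.0}   # selected by foreground class

def process_regions(picture, bg_category, bg_targets, fg_targets):
    """bg_targets / fg_targets: lists of (class, (x0, y0, x1, y1)) boxes."""
    out = [row[:] for row in picture]          # keep the input picture intact

    def apply(gan, box):
        x0, y0, x1, y1 = box
        for y in range(y0, y1):
            for x in range(x0, x1):
                out[y][x] = gan(out[y][x])     # process only this region

    for _cls, box in bg_targets:               # from the second detection result
        apply(FIRST_GANS[bg_category], box)
    for cls, box in fg_targets:                # from the first detection result
        apply(SECOND_GANS[cls], box)
    return out                                 # regions replaced, rest unchanged

pic = [[50.0] * 6 for _ in range(6)]
res = process_regions(pic, "beach",
                      [("palm", (0, 0, 3, 3))],
                      [("person", (3, 3, 6, 6))])
```

Pixels inside a background-target box pass through the first GAN, pixels inside a foreground-target box through the second, and everything else is copied through untouched, which is exactly the replacement order claim 7 recites.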
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810631027.9A CN108961157B (en) | 2018-06-19 | 2018-06-19 | Picture processing method, picture processing device and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108961157A true CN108961157A (en) | 2018-12-07 |
CN108961157B CN108961157B (en) | 2021-06-01 |
Family
ID=64491405
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810631027.9A Active CN108961157B (en) | 2018-06-19 | 2018-06-19 | Picture processing method, picture processing device and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108961157B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100169784A1 (en) * | 2008-12-30 | 2010-07-01 | Apple Inc. | Slide Show Effects Style |
CN104156915A (en) * | 2014-07-23 | 2014-11-19 | 小米科技有限责任公司 | Skin color adjusting method and device |
CN105138693A (en) * | 2015-09-18 | 2015-12-09 | 联动优势科技有限公司 | Method and device for accessing databases |
CN105788142A (en) * | 2016-05-11 | 2016-07-20 | 中国计量大学 | Video image processing-based fire detection system and detection method |
CN106101547A (en) * | 2016-07-06 | 2016-11-09 | 北京奇虎科技有限公司 | Image data processing method, device and mobile terminal |
CN106296629A (en) * | 2015-05-18 | 2017-01-04 | 富士通株式会社 | Image processing apparatus and method |
US20180012211A1 (en) * | 2016-07-05 | 2018-01-11 | Rahul Singhal | Device for communicating preferences to a computer system |
CN107592517A (en) * | 2017-09-21 | 2018-01-16 | 青岛海信电器股份有限公司 | Skin color processing method and device |
CN107679465A (en) * | 2017-09-20 | 2018-02-09 | 上海交通大学 | Pedestrian re-identification data generation and augmentation method based on generative networks |
CN107845072A (en) * | 2017-10-13 | 2018-03-27 | 深圳市迅雷网络技术有限公司 | Image generation method, device, storage medium and terminal device |
CN107862658A (en) * | 2017-10-31 | 2018-03-30 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable storage medium and electronic equipment |
CN107944499A (en) * | 2017-12-10 | 2018-04-20 | 上海童慧科技股份有限公司 | Background detection method based on simultaneous foreground-background modeling |
CN108038452A (en) * | 2017-12-15 | 2018-05-15 | 厦门瑞为信息技术有限公司 | Fast detection and recognition method for household-appliance gestures based on local image enhancement |
Non-Patent Citations (3)
Title |
---|
ADITEE SHROTRE et al.: "Background recovery from multiple images", 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE) *
LIU Yujie et al.: "Sketch image retrieval based on conditional generative adversarial networks", Journal of Computer-Aided Design & Computer Graphics *
WU Liankun: "Real-time image stylization algorithm based on distributed TensorFlow and foreground-background separation", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109739414A (en) * | 2018-12-29 | 2019-05-10 | 努比亚技术有限公司 | Image processing method, mobile terminal, and computer-readable storage medium |
CN109815997A (en) * | 2019-01-04 | 2019-05-28 | 平安科技(深圳)有限公司 | Vehicle damage identification method based on deep learning and related apparatus |
WO2020140371A1 (en) * | 2019-01-04 | 2020-07-09 | 平安科技(深圳)有限公司 | Deep learning-based vehicle damage identification method and related device |
CN111752506A (en) * | 2019-03-27 | 2020-10-09 | 京东方科技集团股份有限公司 | Digital work display method, display device and computer readable medium |
CN111752506B (en) * | 2019-03-27 | 2024-02-13 | 京东方艺云(杭州)科技有限公司 | Digital work display method, display device and computer readable medium |
CN110544287A (en) * | 2019-08-30 | 2019-12-06 | 维沃移动通信有限公司 | Picture matching processing method and electronic equipment |
CN110544287B (en) * | 2019-08-30 | 2023-11-10 | 维沃移动通信有限公司 | Picture allocation processing method and electronic equipment |
CN110766638A (en) * | 2019-10-31 | 2020-02-07 | 北京影谱科技股份有限公司 | Method and device for converting object background style in image |
CN111062861A (en) * | 2019-12-13 | 2020-04-24 | 广州市玄武无线科技股份有限公司 | Method and device for generating display image samples |
CN111145430A (en) * | 2019-12-27 | 2020-05-12 | 北京每日优鲜电子商务有限公司 | Method and device for detecting commodity placing state and computer storage medium |
CN111340124A (en) * | 2020-03-03 | 2020-06-26 | Oppo广东移动通信有限公司 | Method and device for identifying entity category in image |
CN112954138A (en) * | 2021-02-20 | 2021-06-11 | 东营市阔海水产科技有限公司 | Aquatic economic animal image acquisition method, terminal equipment and movable material platform |
Also Published As
Publication number | Publication date |
---|---|
CN108961157B (en) | 2021-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108961157A (en) | Image processing method, picture processing unit and terminal device | |
CN108898082A (en) | Image processing method, picture processing unit and terminal device | |
CN108961267A (en) | Image processing method, picture processing unit and terminal device | |
CN108898587A (en) | Image processing method, picture processing unit and terminal device | |
CN108550107A (en) | Image processing method, picture processing device and mobile terminal | |
CN110210571A (en) | Image recognition method, device, computer equipment and computer-readable storage medium | |
CN104200249B (en) | Automatic clothing matching method, apparatus and system | |
CN109086742A (en) | Scene recognition method, scene recognition device and mobile terminal | |
CN108230010A (en) | Method and server for estimating ad conversion rates | |
CN110741387B (en) | Face recognition method and device, storage medium and electronic equipment | |
CN109309878A (en) | Bullet-screen comment generation method and device | |
CN107280693A (en) | Psychological analysis system and method based on VR interactive electronic sand tables | |
CN108876751A (en) | Image processing method, device, storage medium and terminal | |
CN109522858A (en) | Plant disease detection method, device and terminal device | |
CN109523525A (en) | Malignant lung nodule recognition method, device, equipment and storage medium based on image fusion | |
CN110414550A (en) | Training method, device, system and computer-readable medium for face recognition models | |
CN109118447A (en) | Image processing method, picture processing device and terminal device | |
CN108932703A (en) | Image processing method, picture processing unit and terminal device | |
CN110287767A (en) | Anti-attack liveness detection method, device, computer equipment and storage medium | |
CN112206541A (en) | Game plug-in identification method and device, storage medium and computer equipment | |
CN108805095A (en) | Image processing method, device, mobile terminal and computer-readable storage medium | |
CN108898169A (en) | Image processing method, picture processing unit and terminal device | |
CN108932704A (en) | Image processing method, picture processing unit and terminal device | |
CN106446969A (en) | User identification method and device | |
CN109241930A (en) | Method and apparatus for processing eyebrow images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||