CN108898587A - Image processing method, picture processing unit and terminal device - Google Patents
- Publication number: CN108898587A
- Application number: CN201810630426.3A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0002—Inspection of images, e.g. flaw detection (G06T7/00, Image analysis; G06T, Image data processing or generation, in general; G06, Computing; G, Physics)
- G06T3/04
- G06T2207/10004—Still image; Photographic image (G06T2207/10, Image acquisition modality; G06T2207/00, Indexing scheme for image analysis or image enhancement)
- G06T2207/20081—Training; Learning (G06T2207/20, Special algorithmic details)
Abstract
The present application relates to image processing technology and provides an image processing method. The method includes: obtaining a picture to be processed; detecting target objects in the picture to be processed to obtain a first detection result, where the first detection result indicates whether any target object is present in the picture to be processed and, when target objects are present, indicates the category of each target object and the position of each target object in the picture; and, if at least one target object is present in the picture to be processed, processing each target object according to its category and its position in the picture. After a target object is detected, its category can be further determined and a corresponding processing mode chosen according to that category, which can effectively improve the precision of picture processing.
Description
Technical field
The present application belongs to the field of image processing technology, and in particular relates to an image processing method, a picture processing unit, a terminal device and a computer-readable storage medium.
Background technique
Existing picture processing generally applies the same processing mode to target objects of different categories. For example, whitening is applied uniformly to different types of people (including people with yellow skin tones, people with white skin tones, etc.).
Although existing picture processing modes can, to a certain extent, meet users' needs for processing target objects in pictures, their processing precision is low, which affects the overall effect of the picture.
Summary of the invention
In view of this, the embodiments of the present application provide an image processing method, a picture processing unit, a terminal device and a computer-readable storage medium, which can effectively improve the precision of picture processing and the overall processing effect of a picture.
A first aspect of the embodiments of the present application provides an image processing method, including:
obtaining a picture to be processed;
detecting target objects in the picture to be processed to obtain a first detection result, where the first detection result indicates whether any target object is present in the picture to be processed and, when target objects are present, indicates the category of each target object and the position of each target object in the picture to be processed; and
if at least one target object is present in the picture to be processed, processing each target object according to the category of each target object and the position of each target object in the picture to be processed.
A second aspect of the embodiments of the present application provides a picture processing unit, including:
a picture obtaining module for obtaining a picture to be processed;
a first detection module for detecting target objects in the picture to be processed to obtain a first detection result, where the first detection result indicates whether any target object is present in the picture to be processed and, when target objects are present, indicates the category of each target object and the position of each target object in the picture to be processed; and
a processing module for, when at least one target object is present in the picture to be processed, processing each target object according to the category of each target object and the position of each target object in the picture to be processed.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps of the image processing method.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the steps of the image processing method.
A fifth aspect of the embodiments of the present application provides a computer program product including a computer program which, when executed by one or more processors, implements the steps of the image processing method.
Compared with the prior art, the embodiments of the present application have the following beneficial effect: after a target object is detected in the picture to be processed, its category can be further determined, and a corresponding processing mode is chosen according to that category. For example, when the target object is a person, the person's category is further determined: for a person with a yellow skin tone, a processing mode that increases pixel values is used; for a person with a white skin tone, a processing mode that raises saturation is used. The embodiments of the present application can thus effectively improve the precision of picture processing and the overall processing effect of the picture, and have strong usability and practicality.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed for the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without any creative labor.
Fig. 1 is a schematic flowchart of the image processing method provided by embodiment one of the present application;
Fig. 2 is a schematic flowchart of the image processing method provided by embodiment two of the present application;
Fig. 3 is a schematic diagram of the picture processing unit provided by embodiment three of the present application;
Fig. 4 is a schematic diagram of the terminal device provided by embodiment four of the present application.
Specific embodiment
In the following description, specific details such as particular system structures and techniques are set forth for illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so as not to obscure the description of the present application with unnecessary details.
It should be appreciated that, as used in this specification and the appended claims, the term "includes" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should further be appreciated that the term "and/or" used in this specification and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" and "if [the described condition or event] is detected" may be interpreted, depending on context, as "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminal device described in the embodiments of the present application includes, but is not limited to, portable devices such as mobile phones, laptop computers or tablet computers with a touch-sensitive surface (for example, a touch-screen display and/or a touch pad). It should further be understood that in certain embodiments the device is not a portable communication device but a desktop computer with a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
In the following discussion, a terminal device including a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user-interface devices, such as a physical keyboard, a mouse and/or a joystick.
The terminal device supports various application programs, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conference application, an e-mail application, an instant-messaging application, an exercise-support application, a photo-management application, a digital-camera application, a digital-video-camera application, a web-browsing application, a digital-music-player application and/or a video-player application.
The various application programs that can be executed on the terminal device may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, and the corresponding information displayed on the terminal, can be adjusted and/or changed between applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second" and the like are used only to distinguish descriptions, and should not be understood as indicating or implying relative importance.
To illustrate the technical solutions described herein, specific embodiments are described below.
Referring to Fig. 1, which is a schematic flowchart of the image processing method provided by embodiment one of the present application, the method may include the following steps.
Step S101: obtain a picture to be processed.
In the present embodiment, the picture to be processed may be a currently captured picture, a pre-stored picture, a picture obtained from a network, a picture extracted from a video, or the like. For example: a picture taken by the camera of the terminal device; a pre-stored picture sent by a WeChat friend; a picture downloaded from a designated website; or a frame extracted from a currently playing video. Preferably, it may also be a frame of the preview screen after the terminal device starts its camera.
Step S102: detect target objects in the picture to be processed to obtain a first detection result, where the first detection result indicates whether any target object is present in the picture to be processed and, when target objects are present, indicates the category of each target object and the position of each target object in the picture to be processed.
In the present embodiment, the first detection result includes, but is not limited to: indication information on whether the picture to be processed contains any target object, and, when it does, information indicating the category and the position of each target object contained in the picture to be processed. The target object may be one or more preset targets, such as a person, an animal or a flower.
It should be noted that the category of a target object in the present embodiment refers to a fine-grained classification of the target object. For example, if the target object is a person, the category may be a person with a yellow skin tone, a person with a white skin tone, etc., or an adult, a child, etc.; if the target object is an animal, the category may be a dog, a bird, a fish, etc.
Preferably, in order to recognize the position of each target object more accurately and to distinguish the recognized target objects for convenient subsequent processing, the present embodiment may also, after detecting the target objects, use different selection frames to frame target objects of different categories according to the category of each target object and its position in the picture to be processed; for example, a box frame for adults and a round frame for children.
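A minimal sketch of this per-category frame selection, with the category names and frame styles as illustrative assumptions (the embodiment names only "box frame for adults, round frame for children"):

```python
# Map fine-grained categories to selection-frame styles.
FRAME_STYLES = {
    "adult": "rectangle",   # box frame for adults
    "child": "circle",      # round frame for children
}

def frames_for_detections(detections, default="rectangle"):
    """Map each (category, box) detection to a selection-frame style."""
    return [(box, FRAME_STYLES.get(category, default))
            for category, box in detections]

detections = [("adult", (10, 10, 120, 200)), ("child", (150, 40, 220, 160))]
for box, style in frames_for_detections(detections):
    print(box, style)
```

A rendering layer would then draw each style at the given box; unknown categories fall back to a default frame.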
Preferably, the present embodiment may use a trained target detection model to detect the target objects in the picture to be processed. Illustratively, the target detection model may be a model with a target-object detection function such as the Single Shot MultiBox Detector (SSD). Of course, other detection approaches may also be used; for example, detecting whether a preset target is present in the picture to be processed by a target (e.g. face) recognition algorithm and, after the preset target is detected, determining its position in the picture to be processed by a target localization algorithm or a target tracking algorithm.
It should be noted that other schemes for detecting target objects that those skilled in the art can readily conceive within the technical scope disclosed by the present application shall also fall within its protection scope, and are not repeated here.
Taking the use of a trained target detection model for detecting the target objects in the picture to be processed as an example, the specific training process of the target detection model is as follows:
obtain in advance sample pictures and the detection result corresponding to each sample picture, where the detection result corresponding to a sample picture includes the category and the position of each target object in that sample picture;
detect the target objects in the sample pictures using the initial target detection model, and calculate the detection accuracy of the initial target detection model according to the detection results obtained in advance;
if the detection accuracy is less than a preset detection threshold, adjust the parameters of the initial target detection model, detect the sample pictures again with the parameter-adjusted target detection model, recalculate the detection accuracy of the adjusted model, and iterate this step until the detection accuracy of the adjusted target detection model is greater than or equal to the detection threshold; that model is then used as the trained target detection model. Methods for adjusting the parameters include, but are not limited to, stochastic gradient descent, weight-update algorithms, and the like.
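The iterative loop above (detect, score against the known results, adjust until the threshold is reached) can be sketched as follows. The accuracy function and the parameter update are stubs standing in for a real detector and a real gradient step; every name here is an illustrative assumption.

```python
def detect_accuracy(param, samples):
    # Stub: pretend each parameter adjustment improves accuracy by 0.05,
    # standing in for comparing detections with ground-truth results.
    return min(1.0, 0.5 + 0.05 * param)

def adjust(param):
    # Stub standing in for e.g. a stochastic-gradient-descent update.
    return param + 1

def train(samples, threshold=0.9, max_rounds=1000):
    param = 0
    while detect_accuracy(param, samples) < threshold:
        param = adjust(param)
        if param >= max_rounds:   # guard against non-convergence
            break
    return param

trained = train(samples=[], threshold=0.9)
print(trained)   # 8
```

The loop's exit condition mirrors the text: training stops only once the detection accuracy reaches the preset detection threshold.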
Step S103: if at least one target object is present in the picture to be processed, process each target object according to the category of each target object and the position of each target object in the picture to be processed.
Illustratively, processing the target objects according to the category of each target object and the position of each target object in the picture to be processed includes:
obtaining the picture processing mode of each target object according to its category, and determining the picture region where each target object is located according to its position in the picture to be processed;
processing the picture region where each target object is located according to that object's picture processing mode, to obtain the corresponding processed picture region; and
replacing the picture region where each target object is located in the picture to be processed with the corresponding processed picture region, to complete the processing of each target object.
The processing of the picture to be processed includes, but is not limited to, adjusting image parameters of the target object such as saturation, brightness and/or contrast. For example, for a person with a yellow skin tone, a processing mode that increases pixel values is used; for a person with a white skin tone, a processing mode that raises saturation is used.
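The region-wise flow above (look up a mode by category, cut out the object's region, process it, paste the result back) can be sketched with a nested list standing in for real pixel data. The mode table and category names are assumptions for illustration only.

```python
# Per-category picture processing modes (stand-ins for real adjustments).
MODES = {
    "yellow": lambda px: min(255, px + 20),   # raise pixel values
    "white":  lambda px: px,                  # stand-in for a saturation boost
}

def process_regions(picture, detections):
    """Process each detected object's region in place and return the picture."""
    for category, (x1, y1, x2, y2) in detections:
        mode = MODES.get(category, lambda px: px)
        for y in range(y1, y2):               # replace only the object's region
            for x in range(x1, x2):
                picture[y][x] = mode(picture[y][x])
    return picture

pic = [[100] * 4 for _ in range(4)]
out = process_regions(pic, [("yellow", (0, 0, 2, 2))])
print(out[0][0], out[3][3])   # 120 100
```

Only the framed region changes; pixels outside every detection box are left untouched, matching the replace-the-region step.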
Optionally, if at least one target object is present in the picture to be processed, processing the target objects according to the category of each target object and the position of each target object in the picture to be processed further includes: if a child is present in the picture to be processed, performing capture processing on the child according to the child's position in the picture to be processed.
Specifically, multiple preview frames may be obtained, and the preview frame that best meets preset screening conditions is selected for capturing the target object. The screening conditions include clarity, expression, and the like.
The embodiments of the present application can choose a processing mode corresponding to the category of a target object, thereby effectively improving the precision of picture processing and the overall processing effect of the picture.
Referring to Fig. 2, which is a schematic flowchart of the image processing method provided by embodiment two of the present application, the method may include the following steps.
Step S201: obtain a picture to be processed.
In the present embodiment, the picture to be processed may be a currently captured picture, a pre-stored picture, a picture obtained from a network, a picture extracted from a video, or the like. For example: a picture taken by the camera of the terminal device; a pre-stored picture sent by a WeChat friend; a picture downloaded from a designated website; or a frame extracted from a currently playing video. Preferably, it may also be a frame of the preview screen after the terminal device starts its camera.
Step S202: perform scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the scene of the picture to be processed is recognized and, when the scene is recognized, indicates the scene category of the picture to be processed.
In the present embodiment, performing scene classification on the picture to be processed means identifying which kind of scene the current background in the picture belongs to, such as a beach scene, a forest scene, a snowfield scene, a grassland scene, a desert scene, a blue-sky scene or a portrait scene.
Preferably, a trained scene classification model may be used to perform scene classification on the picture to be processed. Illustratively, the scene classification model may be a model with a scene detection function such as MobileNet. Of course, other scene classification approaches may also be used; for example, detecting the foreground in the picture to be processed by a foreground detection model, detecting the background by a background detection model, and then determining the scene category of the picture to be processed from the detected foreground and background.
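The classification result described above is either "scene not recognized" or a scene category. A minimal sketch, where the scene list, the per-scene scores, and the confidence threshold are all illustrative assumptions (a real classifier such as MobileNet would produce the scores):

```python
SCENES = ["beach", "forest", "snow", "grassland", "portrait"]

def classify_scene(scores, threshold=0.5):
    """scores: per-scene confidences from some classifier.

    Returns the recognized scene category, or None if no scene is
    recognized (best confidence below the threshold)."""
    best = max(range(len(scores)), key=scores.__getitem__)
    if scores[best] < threshold:
        return None
    return SCENES[best]

print(classify_scene([0.1, 0.7, 0.05, 0.1, 0.05]))  # forest
print(classify_scene([0.2, 0.2, 0.2, 0.2, 0.2]))    # None
```

Returning `None` models the "scene not recognized" branch of the classification result.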
It should be noted that other scene detection schemes that those skilled in the art can readily conceive within the technical scope disclosed by the present application shall also fall within its protection scope, and are not repeated here.
Taking the use of a trained scene classification model for classifying the scene in the picture to be processed as an example, the specific training process of the scene classification model is as follows:
obtain in advance sample pictures and the classification result corresponding to each sample picture;
perform scene classification on each sample picture using the initial scene classification model, and calculate the classification accuracy of the initial scene classification model according to the classification results obtained in advance;
if the classification accuracy is less than a preset classification threshold, adjust the parameters of the initial scene classification model, perform scene classification on the sample pictures with the parameter-adjusted scene classification model, recalculate the classification accuracy of the adjusted model according to the classification results obtained in advance, and iterate this step until the classification accuracy of the adjusted scene classification model is greater than or equal to the classification threshold; the scene classification model whose classification accuracy is greater than or equal to the classification threshold is then used as the trained scene classification model. Methods for adjusting the parameters include, but are not limited to, stochastic gradient descent, weight-update algorithms, and the like.
Step S203: when the classification result indicates that the scene of the picture to be processed is recognized, judge whether the scene category includes a predetermined scene category.
In the present embodiment, to facilitate subsequent fast and efficient processing of target objects, some scenes related to the target object may be preset. For example, when the target object is a person, the corresponding scenes are portrait scenes, party scenes, activity scenes, etc.; when the target object is an animal, the corresponding scenes may be grassland scenes (target objects such as cattle and sheep), sea scenes (target objects such as fish), etc. After the scene of the picture to be processed is recognized, it is judged whether the scene category of the picture to be processed includes a predetermined scene category; for example, when the target object is a person, whether the scene category includes a portrait scene, a party scene, an activity scene, etc.
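The preset association between targets and scenes is essentially a lookup table gating the detection step. A minimal sketch, using the example pairings from the text (the exact names are illustrative):

```python
# Preset scenes related to each target object, per the examples above.
TARGET_SCENES = {
    "person": {"portrait", "party", "activity"},
    "animal": {"grassland", "sea"},
}

def should_detect(target, scene_category):
    """Run target detection only if the recognized scene is related."""
    return scene_category in TARGET_SCENES.get(target, set())

print(should_detect("person", "party"))   # True
print(should_detect("animal", "snow"))    # False
```

Step S204 then runs the detector only when this gate passes, which is what saves work on unrelated scenes.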
Step S204: if a predetermined scene category is included, detect the target objects in the picture to be processed to obtain a first detection result, where the first detection result indicates whether any target object is present in the picture to be processed and, when target objects are present, indicates the category of each target object and the position of each target object in the picture to be processed.
Step S205: if at least one target object is present in the picture to be processed, process each target object according to the category of each target object and the position of each target object in the picture to be processed.
For the specific implementation of steps S204 and S205, reference may be made to steps S102 and S103 above, which are not repeated here.
In the embodiments of the present application, to improve the processing efficiency for target objects, it is first detected whether a scene related to the target object is present in the picture to be processed. Only when a related scene is present are the target objects detected; the category of each detected target object is then further recognized, and each target object is processed according to its category and its position in the picture to be processed. The embodiments of the present application can thus improve not only the precision but also the efficiency of picture processing.
It should be understood that in the above embodiments, the size of a step's serial number does not imply an execution order; the execution order of the steps should be determined by their functions and internal logic, and does not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 3 is a schematic diagram of the picture processing unit provided by the third embodiment of the present application. For ease of description, only the parts relevant to the embodiments of the present application are shown.
The picture processing unit 3 may be a software unit, a hardware unit, or a combined software-hardware unit built into a terminal device such as a mobile phone, tablet computer or notebook computer, or may be integrated into such a terminal device as an independent component.
The picture processing unit 3 includes:
a picture obtaining module 31 for obtaining a picture to be processed;
a first detection module 32 for detecting the target objects in the picture to be processed to obtain a first detection result, where the first detection result indicates whether any target object is present in the picture to be processed and, when target objects are present, indicates the category of each target object and the position of each target object in the picture to be processed; and
a processing module 33 for, when at least one target object is present in the picture to be processed, processing each target object according to the category of each target object and the position of each target object in the picture to be processed.
Optionally, the first detection module 32 includes:
a classification unit for performing scene classification on the picture to be processed to obtain a classification result, where the classification result indicates whether the scene of the picture to be processed is recognized and, when the scene is recognized, indicates the scene category of the picture to be processed;
a judging unit for judging, when the classification result indicates that the scene of the picture to be processed is recognized, whether the scene category includes a predetermined scene category; and
a detection unit for detecting the target objects in the picture to be processed when a predetermined scene category is included.
Optionally, the classification unit is specifically used for: performing scene classification on the picture to be processed using a trained scene classification model to obtain the classification result.
Optionally, the picture processing unit 3 further includes a training module, and the training module includes:
an acquiring unit, configured to obtain sample pictures and the classification result corresponding to each sample picture in advance;
a computing unit, configured to perform scene classification on each sample picture using the initial scene classification model, and to calculate the classification accuracy of the initial scene classification model according to the pre-obtained classification results of the sample pictures;
a processing unit, configured to, if the classification accuracy is lower than a preset classification threshold, adjust the parameters of the initial scene classification model, perform scene classification on the sample pictures with the parameter-adjusted scene classification model, calculate the classification accuracy of the parameter-adjusted scene classification model according to the pre-obtained classification results of the sample pictures, and iterate these steps until the classification accuracy of the adjusted scene classification model is greater than or equal to the classification threshold, taking the scene classification model whose classification accuracy is greater than or equal to the classification threshold as the trained scene classification model.
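The loop these units describe — evaluate accuracy on pre-labeled samples, adjust parameters while accuracy is below the threshold — can be sketched with a toy one-parameter model. The `classify` and `adjust` helpers are illustrative placeholders, not the patent's actual network:

```python
def classify(x, thresh):
    # Toy stand-in for the scene classification model: one threshold parameter.
    return 1 if x >= thresh else 0

def adjust(thresh):
    # Toy parameter adjustment: nudge the threshold upward each iteration.
    return thresh + 1

def train_scene_classifier(samples, labels, accuracy_threshold=0.9, max_iters=100):
    params = 0
    accuracy = 0.0
    for _ in range(max_iters):
        predictions = [classify(s, params) for s in samples]
        accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(samples)
        if accuracy >= accuracy_threshold:
            break                      # accuracy meets the threshold: stop
        params = adjust(params)        # otherwise adjust and re-evaluate
    return params, accuracy

model, acc = train_scene_classifier([2, 4, 6, 8], [0, 0, 1, 1])
```

A real trainer would replace `adjust` with gradient-based updates, but the stopping criterion is exactly the one the processing unit describes: iterate until accuracy reaches the preset classification threshold.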
Optionally, the picture processing unit 3 further includes:
a framing module, configured to, after the first detection result indicates that target objects exist in the picture to be processed, frame target objects of different classifications with different selection frames according to the classification of each target object and the position of each target object in the picture to be processed.
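A minimal sketch of such per-classification framing: each detected classification is mapped to its own selection-frame style. The style attributes and category names here are invented for illustration; the patent only requires that different classifications get different frames.

```python
# Hypothetical mapping from object classification to selection-frame style.
FRAME_STYLES = {
    "face": {"color": "green", "shape": "rounded"},
    "food": {"color": "yellow", "shape": "rectangle"},
}
DEFAULT_STYLE = {"color": "white", "shape": "rectangle"}

def frame_detections(detections):
    # detections: list of (classification, bounding_box) pairs.
    framed = []
    for classification, box in detections:
        style = FRAME_STYLES.get(classification, DEFAULT_STYLE)
        framed.append({"classification": classification, "box": box, "style": style})
    return framed
```

In a real implementation the style would drive the drawing routine (e.g. rectangle color and corner shape) when rendering the selection frames over the picture.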
Optionally, the processing module 33 includes:
a first processing unit, configured to obtain the picture processing mode of each target object according to the classification of that target object, and to determine the picture region where each target object is located according to the position of that target object in the picture to be processed;
a second processing unit, configured to process the picture region where each target object is located according to the picture processing mode of that target object, to obtain the corresponding processed picture region;
a third processing unit, configured to replace the picture region where each target object is located in the picture to be processed with the corresponding processed picture region, thereby completing the processing of each target object.
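The three processing units amount to a crop/process/replace sequence, which can be sketched over a toy picture represented as a 2D grid of pixel values. The per-classification operations here are illustrative placeholders; the patent leaves the concrete processing modes open.

```python
def brighten(region):
    # Example processing mode: raise every pixel value, capped at 255.
    return [[min(255, p + 50) for p in row] for row in region]

def blur_stub(region):
    return region  # placeholder for an actual smoothing filter

# Hypothetical mapping from classification to picture processing mode.
PROCESSORS = {"face": brighten, "background": blur_stub}

def process_objects(picture, detections):
    # detections: list of (classification, (x, y, w, h)) pairs.
    for classification, (x, y, w, h) in detections:
        op = PROCESSORS.get(classification)
        if op is None:
            continue
        region = [row[x:x + w] for row in picture[y:y + h]]  # crop the region
        processed = op(region)                               # process it
        for dy, row in enumerate(processed):                 # paste it back
            picture[y + dy][x:x + w] = row
    return picture
```

Real code would operate on image buffers (e.g. per-channel arrays) rather than nested lists, but the control flow — look up the mode by classification, process the region, substitute it into the original picture — is the one the three units describe.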
It is clear to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is merely illustrative. In practical applications, the above functions may be assigned to different functional units or modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the information exchange and execution processes between the above units, which are based on the same concept as the method embodiments of the present application, reference may be made to the method embodiment part for their specific functions and technical effects; details are not repeated here.
Fig. 4 is a schematic diagram of the terminal device provided by the fourth embodiment of the present application. As shown in Fig. 4, the terminal device 4 of this embodiment includes a processor 40, a memory 41, and a computer program 42, such as a picture processing program, stored in the memory 41 and executable on the processor 40. When executing the computer program 42, the processor 40 implements the steps of the above image processing method embodiments, such as steps 101 to 103 shown in Fig. 1; alternatively, when executing the computer program 42, the processor 40 implements the functions of the modules/units in the above apparatus embodiments, such as the functions of modules 31 to 33 shown in Fig. 3.
The terminal device 4 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that Fig. 4 is only an example of the terminal device 4 and does not constitute a limitation on it; the terminal device may include more or fewer components than illustrated, combine certain components, or use different components. For example, the terminal device may also include input/output devices, network access devices, buses, and the like.
The processor 40 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 41 may be an internal storage unit of the terminal device 4, such as a hard disk or internal memory of the terminal device 4. The memory 41 may also be an external storage device of the terminal device 4, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal device 4. Further, the memory 41 may include both the internal storage unit and an external storage device of the terminal device 4. The memory 41 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is about to be output.
In the above embodiments, each embodiment is described with its own emphasis. For parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative. The division of the modules or units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be completed by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, or the like. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be added to or subtracted from as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.
Specifically, the embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium may be the computer-readable storage medium contained in the memory in the above embodiments, or it may be a computer-readable storage medium that exists separately and is not assembled into a terminal device. The computer-readable storage medium stores one or more computer programs, and when the one or more computer programs are executed by one or more processors, the following steps of the image processing method are implemented:
obtaining a picture to be processed;
detecting the target objects in the picture to be processed to obtain a first detection result, where the first detection result is used to indicate whether a target object exists in the picture to be processed and, when a target object exists, to indicate the classification of each target object and the position of each target object in the picture to be processed;
if at least one target object exists in the picture to be processed:
processing the target objects according to the classification of each target object and the position of each target object in the picture to be processed.
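Taken together, the stored method steps amount to a detect-then-process loop. A minimal sketch, with hypothetical helpers standing in for the detection and processing stages described above:

```python
def detect_objects(picture):
    # Hypothetical first-detection step: returns (classification, position)
    # pairs, or an empty list when no target object is present.
    return [("face", (10, 10, 50, 50))] if picture.get("has_face") else []

def process_target(picture, classification, position):
    # Hypothetical processing step; here it just records what was processed.
    picture.setdefault("processed", []).append((classification, position))
    return picture

def handle_picture(picture):
    detections = detect_objects(picture)          # first detection result
    if not detections:
        return picture                            # no target object: leave as-is
    for classification, position in detections:   # process each target object
        picture = process_target(picture, classification, position)
    return picture
```

The picture is modeled as a plain dictionary only so the control flow is visible; the branch on an empty detection list corresponds to the first detection result indicating that no target object exists.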
Assuming the above is the first possible implementation, in a second possible implementation provided on the basis of the first, detecting the target objects in the picture to be processed includes:
performing scene classification on the picture to be processed to obtain a classification result, where the classification result is used to indicate whether the scene of the picture to be processed is recognized and, when the scene of the picture to be processed is recognized, to indicate the scene category of the picture to be processed;
when the classification result indicates that the scene of the picture to be processed is recognized, judging whether the scene category includes a predetermined scene category;
if a predetermined scene category is included, detecting the target objects in the picture to be processed.
Assuming the above is the second possible implementation, in a third possible implementation provided on the basis of the second, performing scene classification on the picture to be processed to obtain a classification result includes:
performing scene classification on the picture to be processed using the trained scene classification model to obtain the classification result.
In a fourth possible implementation provided on the basis of the third, the training process of the scene classification model includes:
obtaining sample pictures and the classification result corresponding to each sample picture in advance;
performing scene classification on each sample picture using an initial scene classification model, and calculating the classification accuracy of the initial scene classification model according to the pre-obtained classification results of the sample pictures;
if the classification accuracy is lower than a preset classification threshold, adjusting the parameters of the initial scene classification model, performing scene classification on the sample pictures with the parameter-adjusted scene classification model, calculating the classification accuracy of the parameter-adjusted scene classification model according to the pre-obtained classification results of the sample pictures, and iterating these steps until the classification accuracy of the adjusted scene classification model is greater than or equal to the classification threshold; the scene classification model whose classification accuracy is greater than or equal to the classification threshold is taken as the trained scene classification model.
In a fifth possible implementation provided on the basis of the first, before processing the target objects according to the classification of each target object and the position of each target object in the picture to be processed, the method further includes:
framing target objects of different classifications with different selection frames according to the classification of each target object and the position of each target object in the picture to be processed.
In a sixth possible implementation provided on the basis of any one of the first to fifth possible implementations, processing the target objects according to the classification of each target object and the position of each target object in the picture to be processed includes:
obtaining the picture processing mode of each target object according to the classification of that target object, and determining the picture region where each target object is located according to the position of that target object in the picture to be processed;
processing the picture region where each target object is located according to the picture processing mode of that target object, to obtain the corresponding processed picture region;
replacing the picture region where each target object is located in the picture to be processed with the corresponding processed picture region, to complete the processing of each target object.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.
Claims (10)
1. An image processing method, characterized by comprising:
obtaining a picture to be processed;
detecting target objects in the picture to be processed to obtain a first detection result, wherein the first detection result is used to indicate whether a target object exists in the picture to be processed and, when a target object exists, to indicate the classification of each target object and the position of each target object in the picture to be processed;
if at least one target object exists in the picture to be processed:
processing the target objects according to the classification of each target object and the position of each target object in the picture to be processed.
2. The image processing method according to claim 1, characterized in that detecting the target objects in the picture to be processed comprises:
performing scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used to indicate whether the scene of the picture to be processed is recognized and, when the scene of the picture to be processed is recognized, to indicate the scene category of the picture to be processed;
when the classification result indicates that the scene of the picture to be processed is recognized, judging whether the scene category includes a predetermined scene category;
if a predetermined scene category is included, detecting the target objects in the picture to be processed.
3. The image processing method according to claim 2, characterized in that performing scene classification on the picture to be processed to obtain a classification result comprises:
performing scene classification on the picture to be processed using the trained scene classification model to obtain the classification result.
4. The image processing method according to claim 3, characterized in that the training process of the scene classification model comprises:
obtaining sample pictures and the classification result corresponding to each sample picture in advance;
performing scene classification on each sample picture using an initial scene classification model, and calculating the classification accuracy of the initial scene classification model according to the pre-obtained classification results of the sample pictures;
if the classification accuracy is lower than a preset classification threshold, adjusting the parameters of the initial scene classification model, performing scene classification on the sample pictures with the parameter-adjusted scene classification model, calculating the classification accuracy of the parameter-adjusted scene classification model according to the pre-obtained classification results of the sample pictures, and iterating these steps until the classification accuracy of the adjusted scene classification model is greater than or equal to the classification threshold; and taking the scene classification model whose classification accuracy is greater than or equal to the classification threshold as the trained scene classification model.
5. The image processing method according to claim 1, characterized in that, before processing the target objects according to the classification of each target object and the position of each target object in the picture to be processed, the method further comprises:
framing target objects of different classifications with different selection frames according to the classification of each target object and the position of each target object in the picture to be processed.
6. The image processing method according to any one of claims 1 to 5, characterized in that processing the target objects according to the classification of each target object and the position of each target object in the picture to be processed comprises:
obtaining the picture processing mode of each target object according to the classification of that target object, and determining the picture region where each target object is located according to the position of that target object in the picture to be processed;
processing the picture region where each target object is located according to the picture processing mode of that target object, to obtain the corresponding processed picture region;
replacing the picture region where each target object is located in the picture to be processed with the corresponding processed picture region, to complete the processing of each target object.
7. A picture processing unit, characterized by comprising:
a picture obtaining module, configured to obtain a picture to be processed;
a first detection module, configured to detect target objects in the picture to be processed to obtain a first detection result, wherein the first detection result is used to indicate whether a target object exists in the picture to be processed and, when a target object exists, to indicate the classification of each target object and the position of each target object in the picture to be processed;
a processing module, configured to, when at least one target object exists in the picture to be processed, process the target objects according to the classification of each target object and the position of each target object in the picture to be processed.
8. The picture processing unit according to claim 7, characterized in that the first detection module comprises:
a classification unit, configured to perform scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used to indicate whether the scene of the picture to be processed is recognized and, when the scene of the picture to be processed is recognized, to indicate the scene category of the picture to be processed;
a judging unit, configured to judge, when the classification result indicates that the scene of the picture to be processed is recognized, whether the scene category includes a predetermined scene category;
a detection unit, configured to detect the target objects in the picture to be processed when a predetermined scene category is included.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the image processing method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810630426.3A CN108898587A (en) | 2018-06-19 | 2018-06-19 | Image processing method, picture processing unit and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108898587A true CN108898587A (en) | 2018-11-27 |
Family
ID=64345525
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810630426.3A Pending CN108898587A (en) | 2018-06-19 | 2018-06-19 | Image processing method, picture processing unit and terminal device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108898587A (en) |
Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103337076A (en) * | 2013-06-26 | 2013-10-02 | 深圳市智美达科技有限公司 | Method and device for determining appearing range of video monitoring targets |
CN103390046A (en) * | 2013-07-20 | 2013-11-13 | 西安电子科技大学 | Multi-scale dictionary natural scene image classification method based on latent Dirichlet model |
CN103440501A (en) * | 2013-09-01 | 2013-12-11 | 西安电子科技大学 | Scene classification method based on nonparametric space judgment hidden Dirichlet model |
CN104156915A (en) * | 2014-07-23 | 2014-11-19 | 小米科技有限责任公司 | Skin color adjusting method and device |
CN105138693A (en) * | 2015-09-18 | 2015-12-09 | 联动优势科技有限公司 | Method and device for having access to databases |
CN105635567A (en) * | 2015-12-24 | 2016-06-01 | 小米科技有限责任公司 | Shooting method and device |
CN105825486A (en) * | 2016-04-05 | 2016-08-03 | 北京小米移动软件有限公司 | Beautifying processing method and apparatus |
CN106101547A (en) * | 2016-07-06 | 2016-11-09 | 北京奇虎科技有限公司 | The processing method of a kind of view data, device and mobile terminal |
CN106934401A (en) * | 2017-03-07 | 2017-07-07 | 上海师范大学 | A kind of image classification method based on improvement bag of words |
CN107025629A (en) * | 2017-04-27 | 2017-08-08 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN107592839A (en) * | 2015-01-19 | 2018-01-16 | 电子湾有限公司 | Fine grit classification |
CN107592517A (en) * | 2017-09-21 | 2018-01-16 | 青岛海信电器股份有限公司 | A kind of method and device of colour of skin processing |
CN107730446A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, computer equipment and computer-readable recording medium |
CN107798653A (en) * | 2017-09-20 | 2018-03-13 | 北京三快在线科技有限公司 | A kind of method of image procossing and a kind of device |
CN107845072A (en) * | 2017-10-13 | 2018-03-27 | 深圳市迅雷网络技术有限公司 | Image generating method, device, storage medium and terminal device |
CN107862663A (en) * | 2017-11-09 | 2018-03-30 | 广东欧珀移动通信有限公司 | Image processing method, device, readable storage medium storing program for executing and computer equipment |
CN107886484A (en) * | 2017-11-30 | 2018-04-06 | 广东欧珀移动通信有限公司 | Facial beautification method and apparatus, computer-readable recording medium and electronic equipment |
CN107924579A (en) * | 2015-08-14 | 2018-04-17 | 麦特尔有限公司 | The method for generating personalization 3D head models or 3D body models |
CN107945107A (en) * | 2017-11-30 | 2018-04-20 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
US20180122114A1 (en) * | 2016-08-19 | 2018-05-03 | Beijing Sensetime Technology Development Co., Ltd. | Method and apparatus for processing video image and electronic device |
CN108055501A (en) * | 2017-11-22 | 2018-05-18 | 天津市亚安科技有限公司 | A kind of target detection and the video monitoring system and method for tracking |
CN108121957A (en) * | 2017-12-19 | 2018-06-05 | 北京麒麟合盛网络技术有限公司 | Pushing method and device for facial beautification material |
CN108140110A (en) * | 2015-09-22 | 2018-06-08 | 韩国科学技术研究院 | Age conversion method based on face's each position age and environmental factor, for performing the storage medium of this method and device |
CN108171250A (en) * | 2016-12-07 | 2018-06-15 | 北京三星通信技术研究有限公司 | Object detection method and device |
Non-Patent Citations (1)
Title |
---|
ZHOU YUNCHENG et al.: "Recognition of key organs of tomato based on dual-convolutional-chain Fast R-CNN" (基于双卷积链Fast R-CNN的番茄关键器官识别方法), Journal of Shenyang Agricultural University (《沈阳农业大学学报》) *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109597912A (en) * | 2018-12-05 | 2019-04-09 | 上海碳蓝网络科技有限公司 | Method for handling picture |
CN109658501A (en) * | 2018-12-21 | 2019-04-19 | Oppo广东移动通信有限公司 | A kind of image processing method, image processing apparatus and terminal device |
CN113439253A (en) * | 2019-04-12 | 2021-09-24 | 深圳市欢太科技有限公司 | Application cleaning method and device, storage medium and electronic equipment |
CN113439253B (en) * | 2019-04-12 | 2023-08-22 | 深圳市欢太科技有限公司 | Application cleaning method and device, storage medium and electronic equipment |
CN112115285A (en) * | 2019-06-21 | 2020-12-22 | 杭州海康威视数字技术股份有限公司 | Picture cleaning method and device |
CN110532113A (en) * | 2019-08-30 | 2019-12-03 | 北京地平线机器人技术研发有限公司 | Information processing method, device, computer readable storage medium and electronic equipment |
CN111179218A (en) * | 2019-12-06 | 2020-05-19 | 深圳市派科斯科技有限公司 | Conveyor belt material detection method and device, storage medium and terminal equipment |
CN111179218B (en) * | 2019-12-06 | 2023-07-04 | 深圳市燕麦科技股份有限公司 | Conveyor belt material detection method and device, storage medium and terminal equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108898587A (en) | | Image processing method, picture processing unit and terminal device |
CN108961157A (en) | | Image processing method, picture processing unit and terminal device |
CN108898082A (en) | | Image processing method, picture processing unit and terminal device |
CN108961267A (en) | | Image processing method, picture processing unit and terminal device |
CN107633204A (en) | | Face occlusion detection method, apparatus and storage medium |
CN109086742A (en) | | Scene recognition method, scene recognition device and mobile terminal |
CN104200249B (en) | | Method, apparatus and system for automatic clothing matching |
CN109117879A (en) | | Image classification method, apparatus and system |
CN109858384A (en) | | Method for capturing facial images, computer-readable storage medium and terminal device |
CN108174096A (en) | | Method, apparatus, terminal and storage medium for setting acquisition parameters |
CN110741387B (en) | | Face recognition method and device, storage medium and electronic equipment |
CN109345553A (en) | | Palm and palm keypoint detection method, apparatus and terminal device |
CN107280693A (en) | | Psychological analysis system and method based on VR interactive electronic sand tables |
CN107003727A (en) | | Electronic equipment running multiple applications and method for controlling the electronic equipment |
CN110222728A (en) | | Training method and system for an article discrimination model, and article discrimination method and equipment |
CN109151337A (en) | | Face recognition light compensation method, light compensating apparatus and mobile terminal |
CN110300959A (en) | | Dynamic runtime task management |
CN107463114A (en) | | Book management method and system based on a bookshelf |
CN108764139A (en) | | Face detection method, mobile terminal and computer-readable storage medium |
CN109657543A (en) | | People flow monitoring method, device and terminal device |
CN112206541A (en) | | Game plug-in identification method and device, storage medium and computer equipment |
CN108932703A (en) | | Image processing method, picture processing unit and terminal device |
CN109993234A (en) | | Training data classification method and device for unmanned driving, and electronic equipment |
CN108932704A (en) | | Image processing method, picture processing unit and terminal device |
CN106446969A (en) | | User identification method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181127 |
|
RJ01 | Rejection of invention patent application after publication |