CN109144361A - Image processing method and terminal device - Google Patents
Image processing method and terminal device
- Publication number
- CN109144361A (application CN201810746899.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- subregion
- input
- image
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G06T5/77—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Abstract
Embodiments of the present invention provide an image processing method and a terminal device, relating to the field of communication technology, to solve the problem that existing filter-adding methods process only the whole image and thus compromise the processing effect of local image regions. The image processing method comprises: receiving a first input from a user on a target subregion among N subregions of a target image; and, in response to the first input, performing image processing of a target type selected by the first input on the target subregion, where N is a positive integer. The image processing method in the embodiments of the present invention is applied to a terminal device.
Description
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to an image processing method and a terminal device.
Background technique
At present, taking photos has become one of the most widely used functions of terminal devices. As users' requirements keep rising, the retouching functions that accompany the photo function have also become increasingly complete.
In general, the most common way to retouch a photo is to add a filter, which can make the photo's colors look better. The process of adding a filter may be as follows: the terminal provides filter options, and within the filter function the user selects filters one by one from a filter list, previews the overall effect, and saves the photo after confirming a choice.
However, when adding a filter, it often happens that one filter makes the scenery look better while another filter makes the person look better. For example, in a typical portrait photo the color saturation of the background scenery is relatively low, so a filter with high color saturation works well for the background, whereas a filter with low color saturation suits the person better, to avoid excessive saturation making the skin tone too dark or too red. It can be seen that the above filter-adding method cannot bring a photo to its best retouching effect.
Summary of the invention
Embodiments of the present invention provide an image processing method, to solve the problem that existing filter-adding methods process only the whole image and thus compromise the processing effect of local image regions.
To solve the above technical problem, the present invention is implemented as follows. An image processing method comprises: receiving a first input from a user on a target subregion among N subregions of a target image; and, in response to the first input, performing image processing of a target type selected by the first input on the target subregion, where N is a positive integer.
In a first aspect, an embodiment of the present invention further provides a terminal device, comprising: a first input receiving module, configured to receive a first input from a user on a target subregion among N subregions of a target image; and a first input response module, configured to, in response to the first input, perform image processing of a target type selected by the first input on the target subregion, where N is a positive integer.
In a second aspect, an embodiment of the present invention further provides a terminal device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the above image processing method.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the above image processing method.
In embodiments of the present invention, the user can perform a first input on a target subregion of a target image, so that the terminal device receives and responds to the first input and performs, on the target subregion, image processing of the target type selected by the first input, thereby completing independent image processing of that target subregion. Based on the above method, the image processing of each of the N subregions of the target image can then be completed separately. Because the image processing of the N subregions is not carried out simultaneously, the target type selected by the first input can differ from subregion to subregion, so that the user can perform different image processing according to the characteristics of different subregions. It can be seen that embodiments of the present invention can achieve the best processing effect for local image regions, bringing the target image to its best retouching effect.
Brief description of the drawings
Fig. 1 is a first flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a first schematic diagram of a display interface of a terminal device according to an embodiment of the present invention;
Fig. 3 is a second schematic diagram of the display interface of the terminal device according to an embodiment of the present invention;
Fig. 4 is a second flowchart of the image processing method according to an embodiment of the present invention;
Fig. 5 is a third flowchart of the image processing method according to an embodiment of the present invention;
Fig. 6 is a fourth flowchart of the image processing method according to an embodiment of the present invention;
Fig. 7 is a fifth flowchart of the image processing method according to an embodiment of the present invention;
Fig. 8 is a sixth flowchart of the image processing method according to an embodiment of the present invention;
Fig. 9 is a first block diagram of a terminal device according to an embodiment of the present invention;
Fig. 10 is a second block diagram of the terminal device according to an embodiment of the present invention;
Fig. 11 is a third block diagram of the terminal device according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, which shows a flowchart of an image processing method according to an embodiment of the present invention, the method comprises:
Step 110: receiving a first input from a user on a target subregion among N subregions of a target image.
For reference, an image to be processed is selected as the target image. For example, the picture selected in Fig. 2 on the terminal device is the target image; the user taps the editing function to enter the editing interface (see Fig. 3). In the editing interface, any one of the N subregions contained in the target image can serve as the target subregion, so that the user can perform the first input on it.
Step 120: in response to the first input, performing image processing of the target type selected by the first input on the target subregion.
The first input includes the selection of the target type of image processing, and the target type of image processing covers a variety of processing methods such as beautification processing and filter processing.
Here, N is a positive integer.
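Steps 110 and 120 amount to applying a chosen processing only inside the selected subregion while leaving the rest of the target image untouched. A minimal NumPy sketch, in which the toy image, the boolean mask and the "brighten" target type are all illustrative assumptions rather than anything specified by the patent:

```python
import numpy as np

def apply_to_subregion(image, mask, process):
    """Apply `process` only to pixels where `mask` is True; the other
    subregions of the image are left untouched."""
    out = image.copy()
    out[mask] = process(image[mask])
    return out

# Toy 4x4 grayscale "target image" with two 2x4 subregions.
image = np.arange(16, dtype=np.float64).reshape(4, 4)
top_mask = np.zeros((4, 4), dtype=bool)
top_mask[:2, :] = True

# A first input on the top subregion selects "brighten" as the target type.
brightened = apply_to_subregion(image, top_mask, lambda px: px + 50)

print(brightened[0, 0])  # 50.0  (processed)
print(brightened[3, 3])  # 15.0  (unchanged)
```

A second call with a different mask and a different processing function would handle another subregion independently, which is exactly the per-subregion behaviour the method describes.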
In embodiments of the present invention, the user can perform a first input on a target subregion of a target image, so that the terminal device receives and responds to the first input and performs, on the target subregion, image processing of the target type selected by the first input, thereby completing independent image processing of that target subregion. Based on the above method, the image processing of each of the N subregions of the target image can then be completed separately. Because the image processing of the N subregions is not carried out simultaneously, the target type selected by the first input can differ from subregion to subregion, so that the user can perform different image processing according to the characteristics of different subregions. It can be seen that embodiments of the present invention can achieve the best processing effect for local image regions, bringing the target image to its best retouching effect.
On the basis of the embodiment shown in Fig. 1, Fig. 4 shows a flowchart of an image processing method according to another embodiment of the present invention. Before step 110, the method comprises:
Step 130: receiving a second input from the user.
For reference, as shown in Fig. 3, the editing interface is provided with an image segmentation function. The user taps the image segmentation function to activate it, and can then perform the second input on the target image.
The displayed content of the target image may include people, scenery and so on, and through the second input the user can separate the people, scenery, etc. into different subregions, thereby dividing the target image into regions. Preferably, the second input can be an action of the user tracing out regions on the target image.
Step 140: in response to the second input, dividing the target image into N subregions.
The target image is divided into N subregions according to the second input completed manually by the user.
This embodiment provides a method of dividing the target image into subregions in which the user can divide the image manually as needed. Illustratively, the user can trace the person in the target image into one region, the scenery into another region, and a physical object into a third region; the user can also define custom regions as needed.
As an illustrative example, if the target image is a portrait taken at the seaside, the user can, through the second input, trace out the person, the sea and the sky separately; alternatively, if the target image depicts a physical object, the user can trace out the person, the object and the background separately.
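The second input of step 130 — the user tracing out a region on the target image — can be turned into a per-pixel subregion mask by rasterising the traced outline. A minimal sketch under the assumption that the trace is closed into a polygon; the ray-casting point-in-polygon test is a standard technique and all names here are hypothetical:

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def trace_to_mask(width, height, poly):
    """Rasterise a closed user-traced outline into a per-pixel mask,
    sampling each pixel at its centre."""
    return [[point_in_polygon(px + 0.5, py + 0.5, poly)
             for px in range(width)] for py in range(height)]

# The user traces a rough rectangle around the "person" region.
mask = trace_to_mask(8, 8, [(1, 1), (6, 1), (6, 6), (1, 6)])
print(mask[3][3], mask[0][0])  # True False
```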
On the basis of the embodiment shown in Fig. 1, Fig. 5 shows a flowchart of an image processing method according to another embodiment of the present invention. Before step 110, the method comprises:
Step 150: performing object recognition on the target image to obtain N target objects.
Referring to Fig. 3, the editing interface is provided with an image segmentation function. The user taps the image segmentation function, so that the terminal device uses image recognition technology to perform object recognition on the target image and identify the N target objects in it. Here, a target object refers to an independent individual; for example, one person can serve as one target object.
Here N is a positive integer, and the number of target objects does not necessarily equal the number of subregions.
Step 160: dividing the target image into N subregions according to the object type to which each target object belongs.
Preferably, target objects of the same object type can be grouped into one subregion; for example, all person objects are grouped into a subregion of the person type, so that target objects of the same type can undergo unified image processing.
Each subregion corresponds to at least one object type. That is, one subregion may contain one or more object types.
It should be noted that a target object belongs to an object type. For example, if a target object is a cup, the corresponding object type may be physical object. The object types in this embodiment can be broadly divided into person, physical object and scenery, but are not limited to these types.
This embodiment provides a method of dividing the target image into subregions in which the terminal device divides the image automatically according to its content, reducing the user's manual operations.
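Steps 150 and 160 can be sketched as grouping recognised target objects by object type, so that all objects of one type fall into one subregion. The detection format, labels and pixel sets below are illustrative assumptions; a real terminal device would obtain them from an image recognition model:

```python
from collections import defaultdict

# Hypothetical detections from an object recogniser: each target object
# carries an object type label and the set of pixels it occupies.
detections = [
    {"label": "person", "pixels": {(0, 0), (0, 1)}},
    {"label": "person", "pixels": {(5, 5)}},
    {"label": "scenery", "pixels": {(9, 8), (9, 9)}},
]

def divide_into_subregions(detections):
    """Group target objects of the same object type into one subregion
    (step 160): e.g. all person objects end up in the person subregion."""
    subregions = defaultdict(set)
    for det in detections:
        subregions[det["label"]] |= det["pixels"]
    return dict(subregions)

subregions = divide_into_subregions(detections)
print(sorted(subregions))         # ['person', 'scenery']
print(len(subregions["person"]))  # 3
```

Each resulting subregion can then receive a unified, type-appropriate image processing, as the embodiment describes.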
On the basis of the embodiment shown in Fig. 1, Fig. 6 shows a flowchart of an image processing method according to another embodiment of the present invention, in which step 120 comprises:
Step 1201: performing filter processing on the target subregion at least once.
Among image processing methods, filter processing is a common one: it is easy for the user to operate and its retouching effect is obvious. This embodiment therefore takes filter processing as an example to explain the image processing method herein.
For example, for a seaside portrait, a filter can first be added to the person, choosing one that whitens the skin to highlight it; then a filter is added to the sea, choosing one with high color saturation; finally a filter is added to the sky, choosing one with soft colors.
Depending on the user's needs, multiple filter processings can be superimposed on the same target subregion.
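Superimposing several filter passes on the same target subregion can be sketched as function composition over a pixel. The two toy filters below (a crude saturation boost and a crude colour softener) are illustrative assumptions, not the patent's filters:

```python
def saturation_boost(r, g, b, k=1.5):
    """Scale each channel's distance from the grey axis by k
    (a crude high-saturation filter)."""
    grey = (r + g + b) / 3.0
    return tuple(min(255.0, max(0.0, grey + k * (c - grey)))
                 for c in (r, g, b))

def soften(r, g, b, amount=0.2):
    """Blend the colour toward mid-grey (a crude 'soft colour' filter)."""
    return tuple(c + amount * (128.0 - c) for c in (r, g, b))

def stack_filters(pixel, filters):
    """Superimpose several filter passes on one subregion's pixel:
    each filter is applied to the output of the previous one."""
    for f in filters:
        pixel = f(*pixel)
    return pixel

out = stack_filters((200.0, 100.0, 50.0), [saturation_boost, soften])
print(tuple(round(c, 1) for c in out))
```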
On the basis of the embodiment shown in Fig. 1, Fig. 7 shows a flowchart of an image processing method according to another embodiment of the present invention, in which step 110 comprises:
Step 1101: receiving a first input in which the user selects the target subregion and at least one target filter option.
Referring to Fig. 3, in the editing interface the user can tap the filter function to activate it, and a filter option list is displayed on the interface. The filter option list includes several filter options; "1", "2" and "3" in Fig. 3 represent three filter options. In the first input, the user can select the target subregion and the target filter option(s).
Step 120 comprises:
Step 1202: performing filter processing on the target subregion according to the at least one target filter option.
When the user selects one target filter option in the first input, in this step that target filter option is applied to the target subregion.
When the user selects multiple target filter options in the first input, several cases are possible:
For example, one of the target filter options is applied to the target subregion; when the user performs a screen-sliding operation in the target subregion, the target filter option on the target subregion automatically switches to another target filter option. The user can thus quickly compare multiple target filter options and save the finally chosen one.
For another example, one of the target filter options is applied to the target subregion, the user saves the current target filter option, and then another target filter option is applied to the target subregion, achieving the effect of superimposing multiple target filter options.
Operations such as the user's screen-sliding operation in the target subregion and the save operation can all be part of the first input.
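The swipe-to-switch and save behaviour described for multiple target filter options can be sketched as a small selector state machine; the class and method names are hypothetical:

```python
class FilterSelector:
    """Cycle through the selected target filter options with each
    screen-sliding operation, and remember the one saved by the user."""

    def __init__(self, options):
        self.options = list(options)
        self.index = 0
        self.saved = None

    @property
    def current(self):
        return self.options[self.index]

    def swipe(self):
        # A sliding operation switches to the next target filter option.
        self.index = (self.index + 1) % len(self.options)
        return self.current

    def save(self):
        self.saved = self.current
        return self.saved

# The user compares the options "1", "2", "3" from Fig. 3 by swiping.
sel = FilterSelector(["1", "2", "3"])
sel.swipe()        # now on "2"
sel.swipe()        # now on "3"
print(sel.save())  # prints 3
```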
On the basis of the embodiment shown in Fig. 1, Fig. 8 shows a flowchart of an image processing method according to another embodiment of the present invention, in which step 110 comprises:
Step 1102: receiving a sliding input from the user, wherein the sliding start position of the sliding input is located on at least one target filter option and the sliding end position is located in the target subregion.
The first input includes the sliding input and the selection of at least one target filter option, so that the user slides the selected target filter option(s) into the target subregion; correspondingly, the target filter option follows the finger into the target subregion, completing the action of the first input. The terminal device can then respond to the first input by applying the at least one target filter option to the target subregion.
The at least one target filter option can be used to switch between multiple target filter options, to superimpose multiple target filter options, and so on.
The sliding input may also be a dragging input, i.e. the user's finger drags at least one target filter option into the target subregion.
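Resolving the sliding end position of step 1102 to a target subregion is a hit test: find which subregion contains the point where the dragged filter option was released. A sketch with assumed rectangular subregion bounds (a real implementation would test against arbitrary region masks):

```python
# Hypothetical subregion bounds as (left, top, right, bottom) rectangles,
# loosely matching the seaside-portrait example.
SUBREGIONS = {
    "person": (0, 0, 50, 100),
    "sea": (50, 60, 200, 100),
    "sky": (50, 0, 200, 60),
}

def drop_target(x, y, subregions=SUBREGIONS):
    """Return the subregion containing the sliding end position, i.e.
    the subregion the dragged filter option should be applied to."""
    for name, (left, top, right, bottom) in subregions.items():
        if left <= x < right and top <= y < bottom:
            return name
    return None  # the slide ended outside every subregion

print(drop_target(120, 30))  # sky
print(drop_target(120, 80))  # sea
print(drop_target(10, 50))   # person
```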
Referring to Fig. 3, in this embodiment, after the terminal device completes the image processing of any subregion in response to the first input, the user can tap the save function to save the image processing effect of that first input.
For the case of multiple target filter options, the image processing effect of each target filter option can also be saved separately.
After the image processing of one subregion is completed, the steps in the above embodiments are repeated to complete the image processing of the other subregions.
Preferably, when the user taps the editing function and the target image enters the editing interface, the terminal device can identify, among the N subregions, the subregions that have not undergone any image processing, or identify which subregions have completed image processing of a certain type and which have not, so as to prompt the user with the subregions still to be selected.
Further, the terminal device may also prompt the user with the subregions still to be selected after the current target subregion completes its image processing.
For reference, the target image in the above embodiments can be an image in a picture or an image in a video. For an image in a video, the image processing can be performed on each frame. Preferably, the picture can be a photo in an album, and the video can be a short video.
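The frame-by-frame video processing mentioned above can be sketched as applying the same subregion processing to every frame in turn; representing a frame as a mapping from pixel position to value, and the doubling filter, are illustrative simplifications:

```python
def process_frame(frame, region, filt):
    """Apply `filt` to the pixels of one frame listed in `region`."""
    out = dict(frame)
    for pos in region:
        out[pos] = filt(out[pos])
    return out

def process_video(frames, region, filt):
    # A video is processed frame by frame, like a sequence of pictures.
    return [process_frame(f, region, filt) for f in frames]

# Two tiny frames; only the pixel at (0, 0) belongs to the subregion.
frames = [{(0, 0): 10, (0, 1): 20}, {(0, 0): 30, (0, 1): 40}]
result = process_video(frames, region={(0, 0)}, filt=lambda v: v * 2)

print(result[0][(0, 0)], result[0][(0, 1)])  # 20 20
print(result[1][(0, 0)])                     # 60
```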
In conclusion the image processing method in the embodiment of the present invention is by carrying out figure to the subregion of target image respectively
As processing, the local processing method of image is realized, improve image repairs plot quality, promotes the usage experience of user.
Referring to Fig. 9, which shows a block diagram of a terminal device according to another embodiment of the present invention, the terminal device comprises:
a first input receiving module 10, configured to receive a first input from a user on a target subregion among N subregions of a target image; and
a first input response module 20, configured to, in response to the first input, perform image processing of the target type selected by the first input on the target subregion;
where N is a positive integer.
In embodiments of the present invention, the user can perform a first input on a target subregion of a target image, so that the terminal device receives and responds to the first input and performs, on the target subregion, image processing of the target type selected by the first input, thereby completing independent image processing of that target subregion. Based on the above method, the image processing of each of the N subregions of the target image can then be completed separately. Because the image processing of the N subregions is not carried out simultaneously, the target type selected by the first input can differ from subregion to subregion, so that the user can perform different image processing according to the characteristics of different subregions. It can be seen that embodiments of the present invention can achieve the best processing effect for local image regions, bringing the target image to its best retouching effect.
The terminal device provided by this embodiment of the present invention can implement each process implemented by the terminal device in the method embodiment of Fig. 1; to avoid repetition, details are not described here again.
On the basis of the embodiment shown in Fig. 9, Fig. 10 shows a block diagram of a terminal device according to another embodiment of the present invention, further comprising:
a second input receiving module 30, configured to receive a second input from the user; and
a second input response module 40, configured to, in response to the second input, divide the target image into N subregions.
Preferably, the terminal device further comprises:
an object recognition module 50, configured to perform object recognition on the target image to obtain N target objects; and
a region division module 60, configured to divide the target image into N subregions according to the object type to which each target object belongs;
wherein each subregion corresponds to at least one object type.
Preferably, the first input response module 20 comprises:
a filter processing unit 21, configured to perform filter processing on the target subregion at least once.
Preferably, the first input receiving module 10 comprises:
a target input unit 11, configured to receive a first input in which the user selects the target subregion and at least one target filter option;
and the first input response module 20 comprises:
a target processing unit 22, configured to perform filter processing on the target subregion according to the at least one target filter option.
Preferably, the first input receiving module 10 comprises:
a sliding input unit 12, configured to receive a sliding input from the user, the sliding start position of which is located on at least one target filter option and the sliding end position of which is located in the target subregion.
The terminal device provided by this embodiment of the present invention can implement each process implemented by the terminal device in the method embodiments of Fig. 1 to Fig. 8; to avoid repetition, details are not described here again.
Fig. 11 is a schematic diagram of the hardware structure of a terminal device implementing the embodiments of the present invention. The terminal device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 1010, a power supply 111 and other components. Those skilled in the art will understand that the terminal device structure shown in Fig. 11 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown, or combine certain components, or use a different component arrangement. In embodiments of the present invention, terminal devices include, but are not limited to, mobile phones, tablet computers, laptop computers, palmtop computers, vehicle-mounted terminals, wearable devices, pedometers and so on.
The processor 1010 is configured to receive a first input from a user on a target subregion among N subregions of a target image, and, in response to the first input, to perform image processing of the target type selected by the first input on the target subregion.
In embodiments of the present invention, the user can perform a first input on a target subregion of a target image, so that the terminal device receives and responds to the first input and performs, on the target subregion, image processing of the target type selected by the first input, thereby completing independent image processing of that target subregion. Based on the above method, the image processing of each of the N subregions of the target image can then be completed separately. Because the image processing of the N subregions is not carried out simultaneously, the target type selected by the first input can differ from subregion to subregion, so that the user can perform different image processing according to the characteristics of different subregions. It can be seen that embodiments of the present invention can achieve the best processing effect for local image regions, bringing the target image to its best retouching effect.
It should be understood that, in embodiments of the present invention, the radio frequency unit 101 can be used for sending and receiving signals in the course of messaging or a call; specifically, after receiving downlink data from a base station, it passes the data to the processor 1010 for processing, and it also sends uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, etc. In addition, the radio frequency unit 101 can also communicate with a network and other devices via a wireless communication system.
The terminal device provides the user with wireless broadband Internet access through the network module 102, for example helping the user to send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the terminal device 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 includes a loudspeaker, a buzzer, a receiver, etc.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or sent via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 and then output.
The terminal device 100 further includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when stationary; it can be used to identify the posture of the terminal device (for example, landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection). The sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 1071 may include a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1010, and receives and executes commands sent by the processor 1010. Furthermore, the touch panel 1071 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may cover the display panel 1061. After detecting a touch operation on or near it, the touch panel 1071 transmits the operation to the processor 1010 to determine the type of the touch event, and the processor 1010 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Figure 11 the touch panel 1071 and the display panel 1061 implement the input and output functions of the terminal device as two independent components, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, which is not specifically limited herein.
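As a rough illustration of the signal path just described (touch detection apparatus → touch controller → processor), the sketch below models the touch controller converting raw detection readings into contact coordinates and a processor stand-in choosing a visual response from the event. All class, function, and parameter names here (`TouchController`, `to_event`, `dispatch`, `raw_max`) are illustrative assumptions; the patent does not specify this interface.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: int      # contact coordinates produced by the touch controller
    y: int
    kind: str   # e.g. "down", "move", "up"

class TouchController:
    """Converts raw detection signals into contact coordinates."""
    def __init__(self, width, height):
        self.width, self.height = width, height

    def to_event(self, raw_x, raw_y, kind, raw_max=4095):
        # Scale raw readings (assumed 12-bit here) to panel pixel coordinates.
        x = raw_x * (self.width - 1) // raw_max
        y = raw_y * (self.height - 1) // raw_max
        return TouchEvent(x, y, kind)

def dispatch(event, panel_width):
    """Stand-in for the processor deciding, from the touch event, which
    visual output to produce (here: which half of the panel to redraw)."""
    return "left-half" if event.x < panel_width // 2 else "right-half"

ctrl = TouchController(1080, 2160)
ev = ctrl.to_event(raw_x=1000, raw_y=3000, kind="down")
region = dispatch(ev, 1080)
```

The point of the split is the one the paragraph above makes: the controller only produces coordinates, while interpreting the event type and producing the visual output is left to the processor.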
The interface unit 108 is an interface for connecting an external apparatus to the terminal device 100. For example, the external apparatus may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting an apparatus having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (for example, data information or electric power) from an external apparatus and transmit the received input to one or more elements within the terminal device 100, or may be used to transmit data between the terminal device 100 and an external apparatus.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and an application program required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The processor 1010 is the control center of the terminal device. It connects the various parts of the entire terminal device through various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and invoking the data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
The terminal device 100 may further include a power supply 111 (such as a battery) that supplies power to each component. Preferably, the power supply 111 may be logically connected to the processor 1010 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, which are not described in detail herein.
Preferably, the embodiment of the present invention also provides a terminal device, including a processor 1010, a memory 109, and a computer program stored on the memory 109 and executable on the processor 1010. When executed by the processor 1010, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described herein again.
The embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described herein again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions for enabling a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in each embodiment of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments. The above specific embodiments are only illustrative rather than restrictive. Inspired by the present invention, those skilled in the art can also devise many other forms without departing from the purpose of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.
Claims (14)
1. An image processing method, characterized by comprising:
receiving a first input of a user on a target subregion among N subregions of a target image;
in response to the first input, performing image processing of a target type selected by the first input on the target subregion;
wherein N is a positive integer.
2. The method according to claim 1, wherein before the receiving of the first input of the user on the target subregion among the N subregions of the target image, the method further comprises:
receiving a second input of the user;
in response to the second input, dividing the target image into the N subregions.
3. The method according to claim 1, wherein before the receiving of the first input of the user on the target subregion among the N subregions of the target image, the method further comprises:
performing object recognition on the target image to obtain N target objects;
dividing the target image into the N subregions according to the object type to which each target object belongs;
wherein each subregion corresponds to at least one object type.
4. The method according to claim 1, wherein the performing of the image processing of the target type selected by the first input on the target subregion comprises:
performing filter processing on the target subregion at least once.
5. The method according to claim 1, wherein the receiving of the first input of the user on the target subregion among the N subregions of the target image comprises:
receiving a first input by which the user selects the target subregion and at least one target filter option;
and the performing of the image processing of the target type selected by the first input on the target subregion comprises:
performing filter processing on the target subregion according to the at least one target filter option.
6. The method according to claim 1, wherein the receiving of the first input of the user on the target subregion among the N subregions of the target image comprises:
receiving a slide input of the user, wherein a sliding start position of the slide input is located on at least one target filter option, and a sliding end position of the slide input is located in the target subregion.
7. A terminal device, characterized by comprising:
a first input receiving module, configured to receive a first input of a user on a target subregion among N subregions of a target image;
a first input response module, configured to perform, in response to the first input, image processing of a target type selected by the first input on the target subregion;
wherein N is a positive integer.
8. The terminal device according to claim 7, further comprising:
a second input receiving module, configured to receive a second input of the user;
a second input response module, configured to divide, in response to the second input, the target image into the N subregions.
9. The terminal device according to claim 7, further comprising:
an object recognition module, configured to perform object recognition on the target image to obtain N target objects;
a region division module, configured to divide the target image into the N subregions according to the object type to which each target object belongs;
wherein each subregion corresponds to at least one object type.
10. The terminal device according to claim 7, wherein the first input response module comprises:
a filter processing unit, configured to perform filter processing on the target subregion at least once.
11. The terminal device according to claim 7, wherein the first input receiving module comprises:
a target input unit, configured to receive a first input by which the user selects the target subregion and at least one target filter option;
and the first input response module comprises:
a target processing unit, configured to perform filter processing on the target subregion according to the at least one target filter option.
12. The terminal device according to claim 7, wherein the first input receiving module comprises:
a slide input unit, configured to receive a slide input of the user, wherein a sliding start position of the slide input is located on the at least one target filter option, and a sliding end position of the slide input is located in the target subregion.
13. A terminal device, characterized by comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein when the computer program is executed by the processor, the steps of the image processing method according to any one of claims 1 to 6 are implemented.
14. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image processing method according to any one of claims 1 to 6 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810746899.XA CN109144361A (en) | 2018-07-09 | 2018-07-09 | A kind of image processing method and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109144361A true CN109144361A (en) | 2019-01-04 |
Family
ID=64800091
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810746899.XA Pending CN109144361A (en) | 2018-07-09 | 2018-07-09 | A kind of image processing method and terminal device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109144361A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110070497A (en) * | 2019-03-08 | 2019-07-30 | 维沃移动通信(深圳)有限公司 | A kind of image processing method and terminal device |
CN110232174A (en) * | 2019-04-22 | 2019-09-13 | 维沃移动通信有限公司 | A kind of content chooses method and terminal device |
CN110598027A (en) * | 2019-09-10 | 2019-12-20 | Oppo广东移动通信有限公司 | Image processing effect display method and device, electronic equipment and storage medium |
CN111402271A (en) * | 2020-03-18 | 2020-07-10 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
WO2020192298A1 (en) * | 2019-03-25 | 2020-10-01 | 维沃移动通信有限公司 | Image processing method and terminal device |
CN112887537A (en) * | 2021-01-18 | 2021-06-01 | 维沃移动通信有限公司 | Image processing method and electronic device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103593828A (en) * | 2013-11-13 | 2014-02-19 | 厦门美图网科技有限公司 | Image processing method capable of carrying out partial filter adding |
CN104322050A (en) * | 2012-05-22 | 2015-01-28 | 株式会社尼康 | Electronic camera, image display device, and image display program |
CN106201242A (en) * | 2016-06-27 | 2016-12-07 | 北京金山安全软件有限公司 | Image processing method and device and electronic equipment |
CN106651761A (en) * | 2016-12-27 | 2017-05-10 | 维沃移动通信有限公司 | Method for adding filters to pictures, and mobile terminal |
CN106971165A (en) * | 2017-03-29 | 2017-07-21 | 武汉斗鱼网络科技有限公司 | The implementation method and device of a kind of filter |
US20170329501A1 (en) * | 2016-05-10 | 2017-11-16 | Konica Minolta, Inc. | Image analysis system |
CN108010037A (en) * | 2017-11-29 | 2018-05-08 | 腾讯科技(深圳)有限公司 | Image processing method, device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109144361A (en) | A kind of image processing method and terminal device | |
CN104135609B (en) | Auxiliary photo-taking method, apparatus and terminal | |
CN108540724A (en) | A kind of image pickup method and mobile terminal | |
CN107566728A (en) | A kind of image pickup method, mobile terminal and computer-readable recording medium | |
CN109639970A (en) | A kind of image pickup method and terminal device | |
CN108495029A (en) | A kind of photographic method and mobile terminal | |
CN107592459A (en) | A kind of photographic method and mobile terminal | |
CN110248254A (en) | Display control method and Related product | |
CN107948498B (en) | A kind of elimination camera Morie fringe method and mobile terminal | |
CN108600647A (en) | Shooting preview method, mobile terminal and storage medium | |
CN108989672A (en) | A kind of image pickup method and mobile terminal | |
CN108989678A (en) | A kind of image processing method, mobile terminal | |
CN107864336B (en) | A kind of image processing method, mobile terminal | |
CN107682639B (en) | A kind of image processing method, device and mobile terminal | |
CN107845363B (en) | A kind of display control method and mobile terminal | |
CN107959795A (en) | A kind of information collecting method, equipment and computer-readable recording medium | |
CN108320263A (en) | A kind of method, device and mobile terminal of image procossing | |
CN109218626A (en) | A kind of photographic method and terminal | |
CN110072012A (en) | A kind of based reminding method and mobile terminal for screen state switching | |
CN108833709A (en) | A kind of the starting method and mobile terminal of camera | |
CN109474787A (en) | A kind of photographic method, terminal device and storage medium | |
CN108038825A (en) | A kind of image processing method and mobile terminal | |
CN109544486A (en) | A kind of image processing method and terminal device | |
CN109639996A (en) | High dynamic scene imaging method, mobile terminal and computer readable storage medium | |
CN108965710A (en) | Method, photo taking, device and computer readable storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190104 |