CN107832397A - A kind of image processing method, device and computer-readable recording medium - Google Patents
- Publication number
- CN107832397A CN107832397A CN201711052438.4A CN201711052438A CN107832397A CN 107832397 A CN107832397 A CN 107832397A CN 201711052438 A CN201711052438 A CN 201711052438A CN 107832397 A CN107832397 A CN 107832397A
- Authority
- CN
- China
- Prior art keywords
- image
- content element
- component identification
- type
- content
- Prior art date
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
- G06V10/225—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
Abstract
The invention discloses an image processing method, comprising: performing image recognition on an image to be processed to identify the content elements of the image, and marking the content elements with element identifiers; determining that the element identifier corresponding to a received selection operation is the target element identifier, determining the target content element corresponding to the target element identifier, and performing image processing on the image region corresponding to the target content element. Embodiments of the present invention also provide a device and a computer-readable storage medium implementing the above method. The present invention improves image processing efficiency while meeting the user's actual needs, improving the user's image processing experience.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method, device and computer-readable storage medium.
Background technology
At present, when an image is processed, it is either optimized with one tap by image processing software, or an operating region is processed through the user's manual operations. In one-tap optimization, the user directly clicks a one-tap optimization control, and the system automatically recognizes and optimizes the image according to the one-tap processing mode. Such automatic processing is efficient, but the processing is generally fairly simple, such as beautifying faces, adjusting color saturation, or changing the image style, and it is difficult to meet the user's image processing requirements. When the image is processed manually according to the user's actual requirements, only what is touched is processed: the region to be processed must be covered by the scope of the user's operation, so the user must manually cover each region to be processed. For example, to apply a mosaic to a stranger in an image, the user must touch all the regions corresponding to the stranger before the mosaic can be applied. Manual processing can follow the actual demand of the user, but its efficiency is very low.
Therefore, it is desirable to provide an image processing scheme that improves image processing efficiency while satisfying the user's actual needs, improving the user's image processing experience.
Summary of the invention
In view of this, embodiments of the present invention provide an image processing method, device and computer-readable storage medium that can improve image processing efficiency while meeting the user's actual needs, improving the user's image processing experience.
The technical scheme of the embodiments of the present invention is realized as follows:
In one aspect, an embodiment of the present invention provides an image processing method: performing image recognition on an image to be processed to identify the content elements of the image, and marking the content elements with element identifiers; determining that the element identifier corresponding to a received selection operation is the target element identifier; determining the target content element corresponding to the target element identifier; and performing image processing on the image region corresponding to the target content element.
In another aspect, an embodiment of the present invention provides an image processing device implementing the above image processing method, including: a memory, a processor, and a computer program stored on the memory and executable on the processor. The processor executes the computer program to: perform image recognition on an image to be processed to identify the content elements of the image, and mark the content elements with element identifiers; determine that the element identifier corresponding to a received selection operation is the target element identifier; determine the target content element corresponding to the target element identifier; and perform image processing on the image region corresponding to the target content element.
In another aspect, a computer-readable storage medium implementing the above image processing method is provided.
With the image processing method, device and computer-readable storage medium provided by the embodiments of the present invention, each content element of the image to be processed is marked with an element identifier, the target content element selected by the user is determined according to the user's selection of an element identifier, and the target content element is processed directly. Here, the user only needs to click the element identifier of a target content element to determine the target region to be processed according to the user's actual demand, without the user's touch having to cover all the regions to be processed. This improves image processing efficiency while meeting the user's actual needs, improving the user's image processing experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention;
Fig. 2 is a schematic diagram of the wireless communication system of the mobile terminal shown in Fig. 1;
Fig. 3 is a flow chart of the image processing method in embodiment one of the present invention;
Fig. 4 is a first schematic diagram of the interface effect of element identifier marking in embodiment one;
Fig. 5 is a second schematic diagram of the interface effect of element identifier marking in embodiment one;
Fig. 6 is a flow chart of the image processing method in embodiment two;
Fig. 7 is a schematic diagram of the interface effect of element identifier marking in embodiment two;
Fig. 8 is a schematic diagram of the interface of the image to be processed in embodiment two;
Fig. 9 is a flow chart of the image processing method in embodiment three;
Fig. 10 is a schematic diagram of the interface effect of image processing in embodiment three;
Fig. 11 is a schematic diagram of the structure of the image processing device in embodiment six.
Embodiment
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "part" or "unit" used to denote elements are only for facilitating the description of the present invention and have no specific meaning in themselves; "module", "part" and "unit" may therefore be used interchangeably.
Terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example; those skilled in the art will appreciate that, apart from elements used particularly for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111 and other parts. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not constitute a limitation on the mobile terminal; a mobile terminal may include more or fewer parts than illustrated, combine some parts, or arrange the parts differently.
The parts of the mobile terminal are introduced in detail below with reference to Fig. 1:
The radio frequency unit 101 can be used for receiving and sending signals during messaging or a call; specifically, after receiving downlink information from a base station, it delivers the information to the processor 110 for processing, and it sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices through wireless communication. The above wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming video and so on, providing the user with wireless broadband internet access. Although Fig. 1 shows the WiFi module 102, it can be understood that it is not an essential part of the mobile terminal and can be omitted as needed without changing the essence of the invention.
The audio output unit 103 can, when the mobile terminal 100 is in a call signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode or similar mode, convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes the image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operational modes such as a telephone call mode, a recording mode or a speech recognition mode, and can process such sound into audio data. In the case of the telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and sending audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as an optical sensor, a motion sensor, a color temperature sensor and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can close the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when static; it can be used in applications that recognize the posture of the mobile phone (such as horizontal/vertical screen switching, related games and magnetometer posture calibration) and in vibration-recognition related functions (such as a pedometer or tapping). The color temperature sensor is used to detect the color temperature of the ambient light. A fingerprint sensor, pressure sensor, iris sensor, molecule sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor and other sensors can also be configured for the mobile phone; they will not be repeated here.
The display unit 106 is used to display information input by the user or supplied to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, and so on.
The user input unit 107 can be used to receive input numeral or character information and to produce key signal input related to the user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations on or near the touch panel 1071 performed by the user using a finger, a stylus or any other suitable object or accessory), and drives the corresponding connecting device according to a preset formula. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 can be implemented using resistive, capacitive, infrared, surface acoustic wave and other types. Besides the touch panel 1071, the user input unit 107 can also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons and switch keys), a trackball, a mouse and a joystick, which are not specifically limited here.
Further, the touch panel 1071 can cover the display panel 1061. After detecting a touch operation on or near it, the touch panel 1071 transmits it to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two independent parts realizing the input and output functions of the mobile terminal, in certain embodiments the touch panel 1071 and the display panel 1061 can be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 108 serves as the interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The interface unit 108 can be used to receive input from an external device (for example, data information or electric power) and transfer the received input to one or more elements in the mobile terminal 100, or can be used to transmit data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 can mainly include a program storage area and a data storage area, wherein the program storage area can store the operating system, the applications needed by at least one function (such as a sound playing function or an image playing function) and so on, and the data storage area can store data created according to the use of the mobile phone (such as audio data or a phone book) and so on. In addition, the memory 109 can include high-speed random access memory, and can also include non-volatile memory, for example at least one disk memory, a flash memory device or another solid-state storage part.
The processor 110 is the control center of the mobile terminal. It uses various interfaces and lines to connect each part of the whole mobile terminal, and performs the various functions and data processing of the mobile terminal by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, so as to monitor the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 can integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, the user interface, applications and so on, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 110.
The mobile terminal 100 can also include a power supply 111 (such as a battery) supplying power to each part. Preferably, the power supply 111 can be logically connected with the processor 110 through a power management system, so as to realize functions such as charge management, discharge management and power consumption management through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 can also include a Bluetooth module and the like, which will not be repeated here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system provided by an embodiment of the present invention. The communication network system is an LTE system of the universal mobile communication technology; the LTE system includes, communicatively connected in sequence, a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and the operator's IP services 204.
Specifically, the UE 201 can be the above-described terminal 100, which will not be repeated here.
The E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022 and so on. The eNodeB 2021 can be connected with the other eNodeBs 2022 through backhaul (such as the X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 can include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036 and so on. The MME 2031 is the control node that processes the signaling between the UE 201 and the EPC 203, providing bearer and connection management. The HSS 2032 is used to provide registers to manage functions such as the home location register (not shown), and saves some user-specific information about service features, data rates and so on. All user data can be transmitted through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides the available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP services 204 can include the internet, an intranet, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is not only applicable to the LTE system but is also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communication network system, the embodiments of the method of the present invention are proposed.
Embodiment one
Based on the foregoing embodiments, an embodiment of the present invention provides an image processing method. The method is applied to a terminal, and the functions realized by the method can be realized by the processor in the terminal calling program code; of course, the program code can be saved in a computer storage medium. It can be seen that the terminal comprises at least a processor and a storage medium.
Fig. 3 is a flow chart of the image processing method in embodiment one of the present invention. As shown in Fig. 3, the method includes:
S301: perform image recognition on the image to be processed, identify the content elements of the image to be processed, and mark the content elements with element identifiers.
The image to be processed can be an image collected by the camera, an image sent from the network side and received through communication means such as a chat application, or an image such as a screenshot; the source of the image to be processed is not limited. Before image processing is performed on the image, an edit mode can be entered, and the image to be processed is processed in the edit mode; the edit mode can be entered through operations such as an edit key, a long-press operation or a right-click.
Image recognition is performed on the image to be processed to identify its content elements. Here, a content element is a feature object in the image to be processed; content elements may include various types of objects such as text, people, animals and pictures. For example, in the image shown in Fig. 4, the pictorial elements include a desk, a book, a pen and a USB flash drive.
In practical applications, after the objects in the image to be processed have been recognized, the image may also include some background areas beyond the image regions corresponding to the content elements; these background areas can be treated as a content element.
Here, edge detection can be performed on the image to be processed, and the different content elements can be determined according to the contrast between the colors of the content elements. Different content elements can also be identified through image recognition; the specific method of image recognition is not limited.
After the content elements of the image to be processed are determined, the determined content elements are marked by element identifiers. Specifically: the display parameters of the element identifier are obtained, and the element identifier is displayed in the image region corresponding to the content element according to the display parameters. That is, after the content elements are determined, the image region corresponding to each content element in the image to be processed is determined, and the element identifier corresponding to each content element is displayed in its image region using the obtained display parameters. The display parameters corresponding to an element identifier include parameters such as pattern, color and size. Here, the same element identifier can be used to mark different content elements, or different element identifiers can be used to mark different content elements, wherein the display parameters such as the color, size and shape of the different element identifiers can be different.
When different content elements are marked with the same element identifier, as shown in Fig. 4, the element identifier can be a dot icon: a dot icon is displayed in the image region corresponding to each of the content elements desk, book, pen and USB flash drive, and the dot icons mark the content elements desk, book, pen and USB flash drive respectively.
When different content elements are marked with different element identifiers, and the content elements are marked by identifiers of different sizes, the size of the element identifier corresponding to each content element can be adjusted based on the outline of the content element; the size in the display parameters is determined according to the size of the outline of the content element. As shown in Fig. 5, the element identifier is a rectangular frame whose size is determined by the content element, shown by the dash-dotted lines in Fig. 5: a rectangular frame whose size corresponds to the size of each content element is displayed in the image region corresponding to each of the content elements desk, book, pen and USB flash drive, marking them respectively, and the size of each rectangular frame is determined according to the size of the outline of the corresponding content element.
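The Fig. 5 style of marker, a rectangular frame fitted to the element's outline, can be sketched as the bounding box of the element's pixels. The dictionary fields standing in for the patent's "display parameters" are illustrative names, not terms from the patent.

```python
def rectangle_identifier(region):
    """Compute a rectangular-frame element identifier whose size is
    determined by the outline of the content element (Fig. 5 style).

    `region` is a set of (row, col) pixel coordinates belonging to
    one content element.
    """
    rows = [r for r, _ in region]
    cols = [c for _, c in region]
    top, left = min(rows), min(cols)
    bottom, right = max(rows), max(cols)
    return {
        "shape": "rectangle",        # display parameter: pattern
        "line_style": "dash-dot",    # drawn as a dash-dotted frame, as in Fig. 5
        "origin": (top, left),
        "size": (bottom - top + 1, right - left + 1),  # follows the outline
    }

book = {(2, 3), (2, 4), (3, 3), (3, 4), (4, 4)}
print(rectangle_identifier(book)["size"])  # → (3, 2)
```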
When the content elements are marked, only some of them may be marked; the content elements to be marked can be chosen according to a selection rule, and the selection rule can be configured according to demand. For example, the rule may be determined by the current processing mode: when the current processing mode is mosaic, eraser, magic pen or the like, content elements whose surface area is less than a set threshold can be marked; when the current processing is face beautification, people are marked. As another example, elements can be marked according to the type of content element, for instance marking text but not images, or marking images but not text.
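The selection rules just described can be sketched as a configurable filter over recognized elements. The mode names, element fields and the area threshold are assumptions chosen to mirror the patent's examples, not values the patent specifies.

```python
AREA_THRESHOLD = 50  # illustrative threshold; the patent only says "a set threshold"

def elements_to_mark(elements, mode):
    """Choose which content elements receive identifiers, following the
    patent's example rules: small elements for mosaic-style modes,
    people for face beautification, and type-based marking for text."""
    if mode in ("mosaic", "eraser", "magic pen"):
        return [e for e in elements if e["area"] < AREA_THRESHOLD]
    if mode == "face beautification":
        return [e for e in elements if e["type"] == "person"]
    if mode == "text only":
        return [e for e in elements if e["type"] == "text"]
    return list(elements)  # default: mark every element

elements = [
    {"name": "desk", "type": "object", "area": 400},
    {"name": "pen", "type": "object", "area": 12},
    {"name": "caption", "type": "text", "area": 30},
]
print([e["name"] for e in elements_to_mark(elements, "mosaic")])
# → ['pen', 'caption']
```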
Here, after the content elements are marked by element identifiers, a correspondence between each content element and its element identifier can be generated, forming a binding relationship between a content element and the element identifier that marks it.
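The binding relationship above amounts to a lookup table from element identifier to content element, so that a selected identifier can later be resolved back to its element. The class and method names below are illustrative.

```python
from itertools import count

class IdentifierRegistry:
    """Bind each content element to the element identifier that marks it."""

    def __init__(self):
        self._ids = count(1)          # generates id-1, id-2, ...
        self._by_identifier = {}

    def mark(self, element):
        """Assign a fresh element identifier and record the binding."""
        identifier = f"id-{next(self._ids)}"
        self._by_identifier[identifier] = element
        return identifier

    def element_for(self, identifier):
        """Resolve a (target) element identifier to its content element."""
        return self._by_identifier[identifier]

registry = IdentifierRegistry()
tag = registry.mark("book")
print(registry.element_for(tag))  # → book
```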
S302: determine that the element identifier corresponding to the received selection operation is the target element identifier, determine the target content element corresponding to the target element identifier, and perform image processing on the image region corresponding to the target content element.
After the content elements are marked by element identifiers, the user can intuitively determine the content elements in the image to be processed and select a target content element from the marked content elements according to his or her actual demand. The specific selection process is to select the target element identifier through a selection operation such as a touch operation, voice input or mouse input; after the target element identifier is determined, the content element corresponding to the target element identifier is the target content element.
When the user's selection operation is a touch operation, the received touch operation is parsed to obtain the position information of the touch operation on the display screen, the touch area is determined according to the obtained position information, the element identifier corresponding to the touch area is determined as the target element identifier, and the content element corresponding to the target element identifier is determined to be the target content element.
Here, the target element identifier can also be selected by means of voice input, wherein a name can be set for each element identifier and displayed together with it, or the name of the content element corresponding to the element identifier can be used as the identifier's name. When the user inputs voice information, the received voice information is parsed, the content of the voice information is determined, and the element identifier corresponding to the content of the voice information is selected as the target element identifier. For example, in the image shown in Fig. 5, when the user inputs the voice "book", the element identifier of the content element book is taken as the target element identifier, and the target content element is then the content element book.
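Once the voice input has been parsed into text, selecting the target element identifier reduces to matching that text against the names shown with the identifiers. Simple case-insensitive exact matching is assumed here; the patent does not specify the matching policy, and speech recognition itself is outside this sketch.

```python
def identifier_for_voice(recognized_text, identifier_names):
    """Map parsed voice-input content to a target element identifier by
    matching the element name displayed with each identifier.

    `identifier_names` maps identifier -> displayed name.
    Returns None when no name matches the spoken content.
    """
    spoken = recognized_text.strip().lower()
    for identifier, name in identifier_names.items():
        if name.lower() == spoken:
            return identifier
    return None

names = {"id-1": "desk", "id-2": "book", "id-3": "pen"}
print(identifier_for_voice("Book", names))  # → id-2
```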
Before the user selects the target element identifier, the element identifier the user is examining can be given a prompting display, marking it so that it is distinguished from the element identifiers the user is not examining. Here, the prompting display may include displaying the element identifier with high brightness, in a different color, and so on. For example, when the user performs the selection operation by mouse and determines the target element identifier with a right-click, the mouse position moves back and forth among the element identifiers before the target element identifier is determined; here the element identifiers are black, and when the user moves the mouse to the position of an element identifier, that element identifier is shown in red, to highlight the element identifier where the mouse currently is.
It should be noted that when the selection operation is a touch operation, the specific manner of selecting the target content element may be a click, a double click, a long press, or the like.
In the embodiment of the present invention, when the user selects an image region outside the image regions corresponding to the marked content elements, the user is considered to have selected an unmarked image region. For example, if the image contains content element A and content element B, the region outside content elements A and B is defined as the image region corresponding to content element C. When the position corresponding to the user's selection operation lies outside the element identifiers of content element A and content element B, the image region the user has selected for image processing is taken to be the image region corresponding to content element C.
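The complement rule above, where every pixel not covered by a marked element belongs to the implicit content element C, can be sketched as a set difference. The bounding-box representation and pixel-set form are assumptions for illustration.

```python
# Illustrative sketch of the unmarked-region rule: the region of the
# implicit "content element C" is every pixel outside all marked boxes.

def unmarked_region(width, height, marked_boxes):
    """Return the set of (x, y) pixels outside every marked bounding box."""
    covered = set()
    for left, top, right, bottom in marked_boxes:
        covered.update((x, y) for y in range(top, bottom)
                       for x in range(left, right))
    every = {(x, y) for y in range(height) for x in range(width)}
    return every - covered

# A 4x4 image with one 2x2 marked box leaves 12 unmarked pixels.
print(len(unmarked_region(4, 4, [(0, 0, 2, 2)])))  # 12
```

In practice this would be done with a binary mask rather than explicit pixel sets; the set form simply keeps the sketch dependency-free.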
After the target content element is determined from the received selection operation, the image processing mode to be applied to the target content element is determined. Image processing modes include enhancement processing such as fill light, sharpening, saturation, and color temperature; special-effect processing in various styles such as natural, film, crayon, old photo, and sketch; and processing modes such as mosaic, blur, magic pen, skin whitening, and face reshaping. The embodiment of the present invention places no limit on the specific processing mode of the image processing.
Here, the image processing mode to be applied to the target content element may be selected either before or after the target content element is selected. The image processing mode may be selected manually by the user or selected automatically according to the image to be processed. For manual selection, mode options may be provided for the user to choose from. For automatic selection according to the image to be processed, the mode may be chosen according to the image content of the image to be processed or the content of the target content element.
When the image processing mode is selected according to the content of the target content element, performing image processing on the image region corresponding to the target content element may include: judging the element type of the target content element and determining the image processing mode corresponding to that element type; and performing image processing on the image region corresponding to the target content element according to that image processing mode. Here, the image processing mode can be determined from the content of the selected target content element. For example, when the selected target content element is a person, the image processing mode may be determined to be beautification; when the target content element the user selects is a stranger at the edge of the image, the mode may be determined to be mosaic; when the target content element the user selects is a large tree, the mode may be determined to be fill light; and so on. The image processing mode corresponding to each element type may be derived statistically from the user's image processing habits, or the correspondence between element types and image processing modes may be set according to actual requirements, with the mode for the target content element determined from the set correspondence. When the image processing mode is determined automatically, the current mode may be displayed to the user so that the user can confirm whether the automatically selected mode is the one wanted; if it is not, the user may select the image processing mode to be applied to the target content element.
After the target content element and the image processing mode are determined, image processing is performed on the image region corresponding to the target content element according to the image processing mode.
With the image processing method provided in the embodiment of the present invention, when an image is processed, the image is automatically recognized, the content elements it contains are identified, and each content element is marked with an element identifier. The target content element on which image processing is to be performed is determined from the target element identifier selected by the user. The user can thus intuitively view the content elements in the image currently being processed and, according to actual requirements, directly select the target content element via the element identifier corresponding to each content element, without having to select the region to be processed through repeated touches.
Embodiment two
Based on the foregoing embodiment, the embodiment of the present invention provides an image processing method applied to a terminal. The functions realized by the method can be implemented by a processor in the terminal calling program code, and the program code can of course be stored in a computer storage medium; the terminal therefore comprises at least a processor and a storage medium.
Fig. 6 is a schematic flowchart of the image processing method in embodiment two of the present invention. As shown in Fig. 6, the method includes:
S601: performing image recognition on an image to be processed and identifying the content elements of the image to be processed;
S602: judging the element type of each content element and determining the identifier type corresponding to the element type;
After the content elements in the image to be processed are identified, the element type of each content element is judged. As shown in Fig. 7, the content elements of the image to be processed include content element 1, content element 2, content element 3, and content element 4. The element type of each content element is determined: the element type of content element 1 is book, that of content element 2 is pen, that of content element 3 is USB flash drive, and that of content element 4 is desk. The book, pen, and USB flash drive are then marked according to a selection rule, and the identifier type corresponding to each is determined: the identifier type for the book is a cloud-shaped marker box, that for the pen is a triangular marker box, and that for the USB flash drive is a circular marker box.
It should be noted that in the example shown in Fig. 7 the element type is determined from the content element object itself. In practice, element types may also be categories such as person, plant, fruit, landscape, or animal; categories such as moving object or static object; categories such as private content, important content, or unimportant content; or types judged according to the priority of the content element's content, such as first-level, second-level, and third-level priority. The embodiment of the present invention places no specific limit on the way element types are classified.
S603: marking the content elements with element identifiers;
After the element type of each content element to be marked is determined, the content element is marked according to its element type; specifically, the content element is marked with the element identifier corresponding to its identifier type. Continuing the example above, as shown in Fig. 7, a cloud-shaped marker box is displayed in the image region corresponding to the content element book, marking the content element book; a triangular marker box is displayed in the image region corresponding to the content element pen, marking the content element pen; and a circular marker box is displayed in the image region corresponding to the content element USB flash drive, marking the content element USB flash drive.
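The type-to-marker mapping in the Fig. 7 example can be sketched as a simple lookup table. The table mirrors the example above; the rectangular fallback shape for unmapped types is an assumption.

```python
# Illustrative sketch of the Fig. 7 mapping: each element type is marked
# with a marker box of a corresponding shape.

TYPE_MARKERS = {
    "book": "cloud-shaped box",
    "pen": "triangular box",
    "USB flash drive": "circular box",
}

def marker_for(element_type, fallback="rectangular box"):
    """Return the marker shape used to mark the given element type."""
    return TYPE_MARKERS.get(element_type, fallback)

print(marker_for("pen"))   # triangular box
print(marker_for("desk"))  # rectangular box
```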
When the screenshot of the chat record between user A and user B shown in Fig. 8 is the image to be processed, recognizing the image yields the following content elements: time information, user A's name identifier, multiple avatars of user A, multiple chat messages sent by user A to user B, an avatar of user B, and a chat message sent by user B to user A. According to the privacy level of each content element, the element type of the time information is determined to be first-level privacy; user A's name identifier is second-level privacy; user A's multiple avatars are second-level privacy; the multiple chat messages sent by user A to user B are third-level privacy; user B's avatar is second-level privacy; and the chat message sent by user B to user A is third-level privacy. Therefore, when the content elements in the screenshot shown in Fig. 8 are marked, the different privacy levels can be marked with element identifiers of different colors.
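The color-coded privacy marking in the Fig. 8 example can be sketched as a level-to-color lookup applied per element. The specific colors and element names below are assumptions for illustration.

```python
# Illustrative sketch: assign an identifier color to each content element
# according to its privacy level, as in the Fig. 8 chat-screenshot example.

LEVEL_COLORS = {1: "green", 2: "yellow", 3: "red"}

def mark_by_privacy(elements):
    """elements maps a content element name to its privacy level;
    return a name -> identifier-color mapping."""
    return {name: LEVEL_COLORS.get(level, "gray")
            for name, level in elements.items()}

screenshot = {"time": 1, "name_A": 2, "avatar_A": 2, "chat_A_to_B": 3}
print(mark_by_privacy(screenshot)["chat_A_to_B"])  # red
```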
S604: determining that the element identifier corresponding to a received selection operation is the target element identifier, determining the target content element corresponding to the target element identifier, and performing image processing on the image region corresponding to the target content element.
In the image processing method provided in the embodiment of the present invention, when the content elements are marked with element identifiers, they can be marked according to their element types, with different element types marked by element identifiers of different identifier types, so that the user can intuitively distinguish the different content elements.
Embodiment three
Based on the foregoing embodiment, the embodiment of the present invention provides an image processing method applied to a terminal. The functions realized by the method can be implemented by a processor in the terminal calling program code, and the program code can of course be stored in a computer storage medium; the terminal therefore comprises at least a processor and a storage medium.
Fig. 9 is a schematic flowchart of the image processing method in embodiment three of the present invention. As shown in Fig. 9, the method includes:
S901: performing image recognition on an image to be processed, identifying the content elements of the image to be processed, and marking the content elements with element identifiers;
S902: determining that the element identifier corresponding to a received selection operation is the target element identifier, determining the target content element corresponding to the target element identifier, and performing image processing on the image region corresponding to the target content element;
S903: judging the element type of the target content element, and determining the associated content elements among the content elements whose element type is the same as that of the target content element;
When image processing is performed on the image region corresponding to the target content element, it can be determined from the element type of the target content element whether the image to be processed also contains content elements of the same element type as the target content element, i.e. associated elements.
Here, element types may also be categories such as person, plant, fruit, landscape, or animal; categories such as moving object or static object; categories such as private content, important content, or unimportant content; or types judged according to the priority of the content element's content, such as first-level, second-level, and third-level priority. The embodiment of the present invention places no specific limit on the way element types are classified.
In Fig. 10, user A's avatar and user A's user name are content elements of the same element type: both indicate user A's identity information.
S904: performing image processing on the image region corresponding to the associated content elements according to the image processing mode corresponding to the target content element.
As shown in Fig. 10, the image to be processed is a screenshot of the chat record between user A and user B. Recognizing the image yields the following content elements: time information, user A's name identifier, multiple avatars of user A, multiple chat messages sent by user A to user B, an avatar of user B, and a chat message sent by user B to user A. When the avatar of user A under the user's finger is selected by touch as the target content element, the element type of the target content element is used to determine that the other avatar of user A in the image is an associated content element, and that user A's user name is also an associated content element of the target content element. As shown in Fig. 10, when mosaic processing is applied to the target content element, mosaic processing is simultaneously applied to the associated content elements, including the other avatar of user A and user A's user name.
It should be noted that the order in which S902 and S903 are executed is not limited. The associated content elements may be determined from the target content element after the target content element is determined, and image processing then applied to the target content element and the associated content elements at the same time; alternatively, after image processing is applied to the target content element, the associated content elements may be determined and then processed. When the user selects one target content element, the same image processing is applied simultaneously to identical or similar content elements.
In the embodiment of the present invention, when the user processes the target content element, selecting a single content element applies image processing simultaneously to all identical or associated content elements, improving the efficiency of image processing.
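The associated-element mechanism of S903/S904 can be sketched as grouping by element type: the selected element's type determines which other elements receive the same processing mode. The element names, type labels, and mode string below are assumptions for illustration.

```python
# Illustrative sketch of associated-element processing: every element
# sharing the target's element type is processed with the same mode.

def apply_with_associations(elements, target, mode):
    """elements maps an element name to its element type; return a mapping
    of the names to be processed with `mode` (the target and its associates)."""
    target_type = elements[target]
    return {name: mode for name, etype in elements.items()
            if etype == target_type}

chat = {"avatar_A_1": "user_A_identity", "avatar_A_2": "user_A_identity",
        "name_A": "user_A_identity", "avatar_B": "user_B_identity"}
print(apply_with_associations(chat, "avatar_A_1", "mosaic"))
# all three user-A identity elements get mosaic; avatar_B is untouched
```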
Embodiment four
Based on the foregoing method embodiments, the embodiment of the present invention provides an image processing apparatus. As shown in Fig. 11, the apparatus includes a memory 1101, a processor 1102, and a computer program stored on the memory 1101 and executable on the processor 1102. The processor 1102 executes the computer program to realize:
performing image recognition on an image to be processed, identifying the content elements of the image to be processed, and marking the content elements with element identifiers; determining that the element identifier corresponding to a received selection operation is the target element identifier, determining the target content element corresponding to the target element identifier, and performing image processing on the image region corresponding to the target content element.
When the processor 1102 executes the computer program, marking the content elements with element identifiers is realized as including: obtaining the display parameters of the element identifier; and displaying the element identifier in the image region corresponding to the content element according to the display parameters.
When the processor 1102 executes the computer program, the following is also realized: judging the element type of the content element and determining the identifier type corresponding to the element type. Correspondingly, marking the content element with an element identifier includes: marking the content element with the element identifier corresponding to the identifier type.
When the processor 1102 executes the computer program, performing image processing on the image region corresponding to the target content element is realized as including: judging the element type of the target content element and determining the image processing mode corresponding to the element type; and performing image processing on the image region corresponding to the target content element according to the image processing mode.
When the processor 1102 executes the computer program, the following is also realized: judging the element type of the target content element and determining the associated content elements among the content elements whose element type is the same as that of the target content element; and performing image processing on the image region corresponding to the associated content elements according to the image processing mode corresponding to the target content element.
It should be noted that the memory 1101 in the embodiment of the present invention may correspond to the memory 109 in Fig. 1, and the processor 1102 may correspond to the processor 110 in Fig. 1.
Embodiment five
To realize the above method, the embodiment of the present invention also provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program realizes: performing image recognition on an image to be processed, identifying the content elements of the image to be processed, and marking the content elements with element identifiers; determining that the element identifier corresponding to a received selection operation is the target element identifier, determining the target content element corresponding to the target element identifier, and performing image processing on the image region corresponding to the target content element.
When the computer program is executed by a processor, marking the content elements with element identifiers is realized as including: obtaining the display parameters of the element identifier; and displaying the element identifier in the image region corresponding to the content element according to the display parameters.
When the computer program is executed by a processor, the following is also realized: judging the element type of the content element and determining the identifier type corresponding to the element type. Correspondingly, marking the content element with an element identifier includes: marking the content element with the element identifier corresponding to the identifier type.
When the computer program is executed by a processor, performing image processing on the image region corresponding to the target content element is realized as including: judging the element type of the target content element and determining the image processing mode corresponding to the element type; and performing image processing on the image region corresponding to the target content element according to the image processing mode.
When the computer program is executed by a processor, the following is also realized: judging the element type of the target content element and determining the associated content elements among the content elements whose element type is the same as that of the target content element; and performing image processing on the image region corresponding to the associated content elements according to the image processing mode corresponding to the target content element.
It should be noted that, as used herein, the terms "comprises", "comprising", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes it.
The serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and includes instructions to cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the specific embodiments above. The above embodiments are merely illustrative rather than restrictive; inspired by the present invention, those of ordinary skill in the art can make many further forms without departing from the concept of the invention and the scope of protection of the claims, and all such forms fall within the protection of the present invention.
Claims (10)
1. An image processing method, characterized in that the method comprises:
performing image recognition on an image to be processed, identifying content elements of the image to be processed, and marking the content elements with element identifiers;
determining that the element identifier corresponding to a received selection operation is a target element identifier, determining the target content element corresponding to the target element identifier, and performing image processing on the image region corresponding to the target content element.
2. The method according to claim 1, characterized in that marking the content elements with element identifiers comprises:
obtaining display parameters of the element identifier;
displaying the element identifier in the image region corresponding to the content element according to the display parameters.
3. The method according to claim 1, characterized in that the method further comprises:
judging the element type of the content element and determining the identifier type corresponding to the element type;
wherein marking the content element with an element identifier comprises:
marking the content element with the element identifier corresponding to the identifier type.
4. The method according to claim 1, characterized in that performing image processing on the image region corresponding to the target content element comprises:
judging the element type of the target content element and determining the image processing mode corresponding to the element type;
performing image processing on the image region corresponding to the target content element according to the image processing mode.
5. The method according to claim 1, characterized in that the method further comprises:
judging the element type of the target content element, and determining the associated content elements among the content elements whose element type is the same as that of the target content element;
performing image processing on the image region corresponding to the associated content elements according to the image processing mode corresponding to the target content element.
6. An image processing apparatus, characterized in that the apparatus comprises: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to realize:
performing image recognition on an image to be processed, identifying content elements of the image to be processed, and marking the content elements with element identifiers;
determining that the element identifier corresponding to a received selection operation is a target element identifier, determining the target content element corresponding to the target element identifier, and performing image processing on the image region corresponding to the target content element.
7. The apparatus according to claim 6, characterized in that when the processor executes the computer program, the following is also realized: judging the element type of the content element and determining the identifier type corresponding to the element type;
wherein marking the content element with an element identifier comprises:
marking the content element with the element identifier corresponding to the identifier type.
8. The apparatus according to claim 6, characterized in that when the processor executes the computer program, performing image processing on the image region corresponding to the target content element is realized as comprising:
judging the element type of the target content element and determining the image processing mode corresponding to the element type;
performing image processing on the image region corresponding to the target content element according to the image processing mode.
9. The apparatus according to claim 6, characterized in that when the processor executes the computer program, the following is also realized:
judging the element type of the target content element, and determining the associated content elements among the content elements whose element type is the same as that of the target content element;
performing image processing on the image region corresponding to the associated content elements according to the image processing mode corresponding to the target content element.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the image processing method according to any one of claims 1 to 5 is realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711052438.4A CN107832397A (en) | 2017-10-30 | 2017-10-30 | A kind of image processing method, device and computer-readable recording medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711052438.4A CN107832397A (en) | 2017-10-30 | 2017-10-30 | A kind of image processing method, device and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107832397A true CN107832397A (en) | 2018-03-23 |
Family
ID=61651350
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711052438.4A Withdrawn CN107832397A (en) | 2017-10-30 | 2017-10-30 | A kind of image processing method, device and computer-readable recording medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107832397A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108550127A (en) * | 2018-04-19 | 2018-09-18 | 北京小米移动软件有限公司 | image processing method, device, terminal and storage medium |
CN109583514A (en) * | 2018-12-19 | 2019-04-05 | 成都西纬科技有限公司 | A kind of image processing method, device and computer storage medium |
CN109634703A (en) * | 2018-12-13 | 2019-04-16 | 北京旷视科技有限公司 | Image processing method, device, system and storage medium based on canvas label |
CN109816406A (en) * | 2019-02-26 | 2019-05-28 | 北京理工大学 | A kind of article marking method, apparatus, equipment and medium |
CN109815854A (en) * | 2019-01-07 | 2019-05-28 | 亮风台(上海)信息科技有限公司 | It is a kind of for the method and apparatus of the related information of icon to be presented on a user device |
CN111046215A (en) * | 2019-12-25 | 2020-04-21 | 惠州Tcl移动通信有限公司 | Image processing method and device, storage medium and mobile terminal |
CN111126388A (en) * | 2019-12-20 | 2020-05-08 | 维沃移动通信有限公司 | Image recognition method and electronic equipment |
CN115022268A (en) * | 2022-06-24 | 2022-09-06 | 深圳市六度人和科技有限公司 | Session identification method and device, readable storage medium and computer equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103577788A (en) * | 2012-07-19 | 2014-02-12 | 华为终端有限公司 | Augmented reality realizing method and augmented reality realizing device |
CN105513098A (en) * | 2014-09-26 | 2016-04-20 | 腾讯科技(北京)有限公司 | Image processing method and image processing device |
CN107071321A (en) * | 2017-04-14 | 2017-08-18 | 努比亚技术有限公司 | A kind of processing method of video file, device and terminal |
- 2017-10-30: CN application CN201711052438.4A filed, published as CN107832397A, status not active (withdrawn)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103577788A (en) * | 2012-07-19 | 2014-02-12 | 华为终端有限公司 | Augmented reality realizing method and augmented reality realizing device |
CN105513098A (en) * | 2014-09-26 | 2016-04-20 | 腾讯科技(北京)有限公司 | Image processing method and image processing device |
CN107071321A (en) * | 2017-04-14 | 2017-08-18 | 努比亚技术有限公司 | A kind of processing method of video file, device and terminal |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108550127A (en) * | 2018-04-19 | 2018-09-18 | 北京小米移动软件有限公司 | image processing method, device, terminal and storage medium |
CN109634703A (en) * | 2018-12-13 | 2019-04-16 | 北京旷视科技有限公司 | Image processing method, device, system and storage medium based on canvas label |
CN109583514A (en) * | 2018-12-19 | 2019-04-05 | 成都西纬科技有限公司 | A kind of image processing method, device and computer storage medium |
CN109815854A (en) * | 2019-01-07 | 2019-05-28 | 亮风台(上海)信息科技有限公司 | It is a kind of for the method and apparatus of the related information of icon to be presented on a user device |
CN109816406A (en) * | 2019-02-26 | 2019-05-28 | 北京理工大学 | A kind of article marking method, apparatus, equipment and medium |
CN109816406B (en) * | 2019-02-26 | 2021-01-22 | 北京理工大学 | Article marking method, device, equipment and medium |
CN111126388A (en) * | 2019-12-20 | 2020-05-08 | 维沃移动通信有限公司 | Image recognition method and electronic equipment |
CN111126388B (en) * | 2019-12-20 | 2024-03-29 | 维沃移动通信有限公司 | Image recognition method and electronic equipment |
CN111046215A (en) * | 2019-12-25 | 2020-04-21 | 惠州Tcl移动通信有限公司 | Image processing method and device, storage medium and mobile terminal |
CN115022268A (en) * | 2022-06-24 | 2022-09-06 | 深圳市六度人和科技有限公司 | Session identification method and device, readable storage medium and computer equipment |
CN115022268B (en) * | 2022-06-24 | 2023-05-12 | 深圳市六度人和科技有限公司 | Session identification method and device, readable storage medium and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107832397A (en) | Image processing method, device and computer-readable recording medium | |
CN107390972A (en) | Terminal screen recording method, apparatus and computer-readable recording medium | |
CN107864357A (en) | Video call special effect control method, terminal and computer-readable recording medium | |
CN107358227A (en) | Mark recognition method, mobile terminal and computer-readable recording medium | |
CN107517405A (en) | Video processing method, apparatus and computer-readable recording medium | |
CN108234295A (en) | Display control method for group function controls, terminal and computer-readable storage medium | |
CN107748645A (en) | Reading method, mobile terminal and computer-readable recording medium | |
CN107678654A (en) | Application interface display method, device and computer-readable recording medium | |
CN107682627A (en) | Capture parameter setting method, mobile terminal and computer-readable recording medium | |
CN107145385A (en) | Multitask interface display method, mobile terminal and computer-readable storage medium | |
CN107844231A (en) | Interface display method, mobile terminal and computer-readable recording medium | |
CN107678650A (en) | Image recognition method, mobile terminal and computer-readable recording medium | |
CN107347115A (en) | Information input method, device and computer-readable recording medium | |
CN107659729A (en) | File preview method, apparatus and computer-readable recording medium | |
CN107347011A (en) | Group message processing method, device and computer-readable recording medium | |
CN107818459A (en) | Red packet sending method, terminal and storage medium based on augmented reality | |
CN107463326A (en) | Mobile terminal touch gesture recognition method, mobile terminal and storage medium | |
CN107346200A (en) | Interval screenshot method and terminal | |
CN107566605A (en) | Interactive interface processing method, device and computer-readable recording medium | |
CN107272906A (en) | Multi-account display control method for applications and mobile terminal | |
CN107463324A (en) | Image display method, mobile terminal and computer-readable recording medium | |
CN107809534A (en) | Control method, terminal and computer-readable storage medium | |
CN107181865A (en) | Unread short message processing method, terminal and computer-readable recording medium | |
CN107526493A (en) | Widget display method, device and computer-readable recording medium | |
CN107168626A (en) | Information processing method, device and computer-readable recording medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20180323 |
|
WW01 | Invention patent application withdrawn after publication |