CN108229389A - Facial image processing method, apparatus and computer readable storage medium - Google Patents


Info

Publication number
CN108229389A
CN108229389A
Authority
CN
China
Prior art keywords
information
image
image processing
human body
facial image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711497876.1A
Other languages
Chinese (zh)
Inventor
孙怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201711497876.1A priority Critical patent/CN108229389A/en
Publication of CN108229389A publication Critical patent/CN108229389A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The invention discloses a facial image processing method, including: when a captured image is obtained, detecting first portrait information in the captured image; when a regional image processing instruction is received, determining second portrait information in the first portrait information that matches preset portrait information; and performing image processing on the second portrait information to obtain a processed image. The invention also discloses a facial image processing apparatus and a computer-readable storage medium. The invention performs image processing on the second portrait information that matches the preset portrait information, so that when processing a picture containing multiple portraits, portrait information matching the preset portrait information can be automatically identified and then processed automatically or manually as required, making the image processing effect better meet the user's actual needs.

Description

Facial image processing method, apparatus and computer readable storage medium
Technical field
The present invention relates to the field of image processing, and more particularly to a facial image processing method, a facial image processing apparatus, and a computer-readable storage medium.
Background technology
With the development of photography and video technology, people can shoot anytime and anywhere, and applying image processing to captured pictures after shooting has become widespread on terminals equipped with a shooting apparatus. At present, when a user applies image processing to a captured picture, the whole picture is usually processed indiscriminately; that is, when the picture is adjusted, the processing parameters of the entire picture change. In particular, when the captured picture contains multiple portraits, it is impossible to apply specific image processing to only one or more of those portraits.
The above is only intended to assist understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Summary of the invention
The main object of the present invention is to provide a facial image processing method, a facial image processing apparatus, and a computer-readable storage medium, aiming to solve the current technical problem that differentiated image processing cannot be performed on a picture containing multiple portraits.
To achieve the above object, the present invention provides a facial image processing method, which includes the following steps:
when a captured image is obtained, detecting first portrait information in the captured image;
when a regional image processing instruction is received, determining second portrait information in the first portrait information that matches preset portrait information;
performing image processing on the second portrait information to obtain a processed image.
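The three claimed steps can be sketched as follows. This is a hypothetical illustration only: the helper names, the `similarity_to_preset` field, and the 0.8 threshold are all assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the three claimed steps; the helper names, the
# "similarity_to_preset" field and the 0.8 threshold are illustrative only.

def detect_first_portraits(shot_image):
    """Step 1: detect all portrait information in the captured image."""
    # Stand-in detector: treat each entry in the image's metadata as
    # one detected (labeled) portrait.
    return shot_image.get("portraits", [])

def match_preset(first_portraits, threshold=0.8):
    """Step 2: keep only portraits whose matching degree with the preset
    portrait information exceeds the preset matching degree."""
    return [p for p in first_portraits if p["similarity_to_preset"] > threshold]

def process_matched(shot_image, matched):
    """Step 3: apply image processing only to the matched (second) portraits."""
    processed = dict(shot_image)
    processed["processed_labels"] = [p["label"] for p in matched]
    return processed

shot = {"portraits": [{"label": "A", "similarity_to_preset": 0.9},
                      {"label": "B", "similarity_to_preset": 0.3}]}
result = process_matched(shot, match_preset(detect_first_portraits(shot)))
print(result["processed_labels"])  # ['A'] — only portrait A matches the preset
```

The point of the structure is that step 3 never touches portrait B, which is the differentiated processing the summary describes.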
In one embodiment, the step of determining second portrait information in the first portrait information that matches the preset portrait information includes:
obtaining a portrait matching degree between the first portrait information and the preset portrait information;
when the portrait matching degree is greater than a preset matching degree, determining that second portrait information matching the preset portrait information exists in the first portrait information.
In one embodiment, the step of performing image processing on the second portrait information includes:
identifying human body information corresponding to the second portrait information;
performing image processing on the human body information.
In one embodiment, the step of identifying the human body information corresponding to the second portrait information includes:
obtaining human body characteristics corresponding to the second portrait information;
identifying the human body information corresponding to the second portrait information based on the human body characteristics.
In one embodiment, the step of performing image processing on the human body information includes:
detecting a skin region and a non-skin region in the human body information;
performing image processing on the skin region, and performing image protection on the non-skin region.
In one embodiment, the step of performing image processing on the skin region further includes:
performing image blurring on the skin region to obtain a blurred image;
generating the processed image according to the blurred image and the captured image.
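As a rough illustration of blurring the skin region while protecting the non-skin region and merging the result with the captured image, the following NumPy sketch uses a naive box blur and a boolean skin mask. The mask is assumed given, since the patent does not fix a particular skin-detection algorithm, and the box blur stands in for whatever blurring the implementation actually uses.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k-by-k box blur; border pixels are left unchanged."""
    out = img.astype(float).copy()
    r = k // 2
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            out[y, x] = img[y - r:y + r + 1, x - r:x + r + 1].mean()
    return out

def smooth_skin(image, skin_mask):
    """Blur only where skin_mask is True; non-skin pixels are 'protected'
    by copying them back from the original captured image."""
    blurred = box_blur(image)
    return np.where(skin_mask, blurred, image.astype(float))

# Toy 5x5 image with one bright pixel; pretend the centre pixel is skin.
img = np.zeros((5, 5))
img[2, 2] = 9.0
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
out = smooth_skin(img, mask)
print(out[2, 2], out[0, 0])  # 1.0 0.0 — skin softened, non-skin untouched
```

The `np.where` merge is the "generate the processed image from the blurred image and the captured image" step: blurred values are taken inside the mask, original values outside it.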
In one embodiment, the step of detecting the first portrait information in the captured image includes:
detecting portrait characteristic information in the captured image;
detecting the first portrait information in the captured image based on the portrait characteristic information.
In one embodiment, after the step of performing image processing on the second portrait information to obtain the processed image, the facial image processing method further includes:
when an adjustment instruction is received, obtaining an adjustment parameter and adjusting the processed image based on the adjustment parameter.
In addition, to achieve the above object, the present invention also provides a facial image processing apparatus, which includes: a memory, a processor, and a facial image processing program stored on the memory and executable on the processor, wherein the facial image processing program, when executed by the processor, implements the steps of the facial image processing method according to any one of the above embodiments.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium, on which a facial image processing program is stored, wherein the facial image processing program, when executed by a processor, implements the steps of the facial image processing method according to any one of the above embodiments.
The present invention proposes a facial image processing method: when a captured image is obtained, first portrait information in the captured image is detected; when a regional image processing instruction is received, second portrait information in the first portrait information that matches preset portrait information is determined; image processing is then performed on the second portrait information to obtain a processed image. Image processing is thus applied to the second portrait information that matches the preset portrait information, so that when a picture containing multiple portraits is processed, portrait information matching the preset portrait information can be automatically identified and then processed automatically or manually as required, making the image processing effect better meet the user's actual needs.
Description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention;
Fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a first embodiment of the facial image processing method of the present invention;
Fig. 4 is a schematic flowchart of a second embodiment of the facial image processing method of the present invention;
Fig. 5 is a schematic flowchart of a third embodiment of the facial image processing method of the present invention;
Fig. 6 is a schematic flowchart of a fourth embodiment of the facial image processing method of the present invention;
Fig. 7 is a schematic flowchart of a fifth embodiment of the facial image processing method of the present invention;
Fig. 8 is a schematic flowchart of a sixth embodiment of the facial image processing method of the present invention;
Fig. 9 is a schematic flowchart of a seventh embodiment of the facial image processing method of the present invention;
Fig. 10 is a schematic flowchart of an eighth embodiment of the facial image processing method of the present invention.
The realization of the objects, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "component" or "unit" used to represent elements are only used to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module", "component" and "unit" may be used interchangeably.
Terminals may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a tablet computer, a laptop, a palmtop computer, a personal digital assistant (PDA), a portable media player (PMP), a navigation device, a wearable device, a smart bracelet and a pedometer, as well as fixed terminals such as a digital TV and a desktop computer.
In the following description, a mobile terminal is taken as an example. Those skilled in the art will understand that, apart from elements specifically used for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111 and other components. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than illustrated, combine certain components, or arrange components differently.
The components of the mobile terminal are described in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals during messaging or a call; specifically, after downlink information from a base station is received, it is passed to the processor 110 for processing, and uplink data is sent to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier and a duplexer. In addition, the radio frequency unit 101 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages and access streaming media, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it can be understood that it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
When the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a speech recognition mode, a broadcast reception mode or the like, the audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive an audio or video signal. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode or a speech recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference generated while sending and receiving audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved close to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used for applications that identify the posture of the mobile phone (such as horizontal/vertical screen switching, related games and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). A fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor and other sensors may also be configured on the mobile phone, and will not be described in detail here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connection device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may specifically include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key or a switch key), a trackball, a mouse and a joystick; no specific limitation is made here.
Further, the touch panel 1071 may cover the display panel 1061. After detecting a touch operation on or near it, the touch panel 1071 transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 implement the input and output functions of the mobile terminal as two independent components, in certain embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal; no specific limitation is made here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The interface unit 108 may be used to receive input (for example, data information or power) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or may be used to transmit data between the mobile terminal 100 and an external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book) and the like. In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device or another non-volatile solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects all parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and invoking data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to each component. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging, discharging and power consumption management through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which will not be described in detail here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of the universal mobile communication technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203 and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE 201 may be the above-mentioned terminal 100, which will not be described again here.
The E-UTRAN 202 includes an eNodeB 2021, other eNodeBs 2022 and the like. The eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (for example, an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036 and the like. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203, and provides bearer and connection management. The HSS 2032 is used to provide registers to manage functions such as a home location register (not shown), and stores some user-specific information about service characteristics, data rates and the like. All user data can be sent through the SGW 2034; the PGW 2035 can provide IP address allocation and other functions for the UE 201; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for the policy and charging execution function unit (not shown).
The IP services 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem) or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should understand that the present invention is not only applicable to the LTE system, but is also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present invention are proposed.
The present invention further provides a facial image processing method. Referring to Fig. 3, Fig. 3 is a schematic flowchart of a first embodiment of the facial image processing method of the present invention.
In this embodiment, the facial image processing method includes:
Step S1000: when a captured image is obtained, detecting first portrait information in the captured image;
When a user shoots a target through a terminal with a shooting apparatus, the image obtained by shooting is the captured image. The captured image may contain multiple pieces of portrait information, may contain only one piece of portrait information, or may contain no portrait information at all; therefore, when the captured image is obtained, the portrait information in the captured image needs to be detected. When portrait information exists in the captured image, all the portrait information in the captured image constitutes the first portrait information.
In this embodiment, when the captured image is obtained, portrait characteristic information in the captured image is detected, and the first portrait information in the captured image is detected according to the portrait characteristic information. Specifically, the first portrait information in the captured image can be detected and identified through principal component analysis. Principal component analysis determines attributes such as the size, position and distance of facial contour features such as the iris, nose wings and mouth corners, computes their geometric characteristics, and combines these geometric features into a feature vector forming the eigenfaces used to identify faces. A principal component subspace is constructed from a set of face training images; since the principal components have the shape of a face, they are also called eigenfaces. During identification, a test image is projected onto the principal component subspace to obtain a set of projection coefficients, and these projection coefficients are compared against the captured image for identification. If facial characteristic information can be recognized in the captured image, it is determined that portrait information exists in the captured image; if facial characteristic information cannot be recognized, it is determined that no portrait information exists in the captured image. The portrait information so detected is the first portrait information in the captured image.
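The eigenface idea described above — build a principal component subspace from training faces, project a test image onto it, and decide from the projection — can be sketched with NumPy as follows. The toy data, the reconstruction-error test, and the tolerance value are arbitrary illustrations under stated assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy training set: 10 "face" images of 8x8 pixels, flattened to length 64.
train = rng.normal(size=(10, 64))
mean_face = train.mean(axis=0)
# Principal components ("eigenfaces") of the centred training set, via SVD.
_, _, eigenfaces = np.linalg.svd(train - mean_face, full_matrices=False)

def project(img):
    """Projection coefficients of an image on the eigenface subspace."""
    return eigenfaces @ (img - mean_face)

def contains_face(img, tol=3.0):
    """Decide 'face' when the image is well reconstructed by the subspace."""
    reconstruction = mean_face + eigenfaces.T @ project(img)
    return float(np.linalg.norm(img - reconstruction)) < tol

print(contains_face(train[0]))  # True: a training image lies in the subspace
print(contains_face(np.random.default_rng(1).normal(size=64)))  # False for generic noise
```

A training image reconstructs almost exactly from the subspace, while an unrelated image leaves a large residual; thresholding that residual is one simple way to turn the projection coefficients into a face/no-face decision.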
More than one piece of first portrait information may be obtained from a single captured image; therefore, when the first portrait information in the captured image is detected, each piece of first portrait information is labeled and stored. For example, when a captured image is obtained, the first portrait information in it is detected; if facial characteristic information can be recognized in the captured image, it is determined that portrait information exists in the captured image. Each piece of portrait information recognized in the captured image is obtained as first portrait information, and each piece of first portrait information is labeled, for example with symbols such as A, B and C; each piece of first portrait information is then stored in association with its label information.
Step S2000: when a regional image processing instruction is received, determining second portrait information in the first portrait information that matches preset portrait information;
Through the display interface of the terminal with the shooting apparatus, the user can perform image processing on the captured image. When the regional image processing instruction is received, it is determined whether portrait information matching the preset portrait information exists in the first portrait information; such portrait information is the second portrait information. When second portrait information matching the preset portrait information exists in the first portrait information, the second portrait information is processed. The regional image processing instruction is a control instruction for performing local processing on the captured image.
Specifically, when it is determined that first portrait information exists in the captured image, the first portrait information is extracted, and it is judged whether the first portrait information reaches a certain matching degree with the preset portrait information. The preset portrait information may be portrait information set personally by the user, or portrait information whose number of detections reaches a preset times threshold. When the portrait matching degree between the first portrait information in the captured image and the preset portrait information is greater than a preset matching degree, it is determined that second portrait information matching the preset portrait information exists in the first portrait information; when the portrait matching degree is less than the preset matching degree, it is determined that no second portrait information matching the preset portrait information exists in the first portrait information. The second portrait information is the portrait information that matches the preset portrait information.
For example, a user obtains a captured picture through a terminal with a shooting apparatus. When it is determined that first portrait information exists in the captured image, matching-degree detection is performed between the first portrait information and the preset portrait information. If the first portrait information comprises multiple pieces of first portrait information with different label information, matching-degree detection is performed in turn between each labeled piece of first portrait information and the preset portrait information, to obtain the portrait matching degree between each piece of first portrait information and the preset portrait information. If a detected portrait matching degree is greater than the preset matching degree, it is determined that second portrait information matching the preset portrait information exists in the first portrait information.
Before step S2000, user can be in advance used in the terminal of filming apparatus, and typing is wanted individually The automatic figure information for carrying out image procossing, i.e., default figure information.If user not want on the terminal in advance by typing The individually automatic figure information for carrying out image procossing, then time that the terminal can accordingly occur by detecting a certain figure information Number;When the terminal detects that the number that the figure information occurs is more than preset frequency threshold value (i.e. preset times threshold value) When, which can be updated to default figure information.Specifically, as user is carried out with a certain figure information for the first time Equipment is unlocked or is paid when operations, and the terminal used by a user with filming apparatus just records the figure information; User unlocks or pays the access times that the figure information is then updated when operations using the figure information for the second time, then goes out The number that the figure information is detected now is updated during the figure information successively.Exist in the number being detected of the figure information When more than preset times threshold value, it is default figure information to update the figure information.
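As an illustrative sketch (the key type, the counting scheme and the threshold value are assumptions not fixed by the embodiment), the promotion of frequently detected figure information to preset figure information can be modeled as a simple occurrence counter:

```python
from collections import defaultdict

class PresetFigureRegistry:
    """Tracks how often each figure (identified by a feature key) is
    detected and promotes it to a preset figure once the detection
    count exceeds a threshold. Key type and threshold are illustrative."""

    def __init__(self, count_threshold=3):
        self.count_threshold = count_threshold
        self.counts = defaultdict(int)
        self.presets = set()

    def record_detection(self, figure_key):
        """Called each time the figure is used, e.g. for unlocking or payment."""
        self.counts[figure_key] += 1
        if self.counts[figure_key] > self.count_threshold:
            self.presets.add(figure_key)

    def is_preset(self, figure_key):
        return figure_key in self.presets

registry = PresetFigureRegistry(count_threshold=3)
for _ in range(4):
    registry.record_detection("user_a")   # four detections: 4 > 3, promoted
print(registry.is_preset("user_a"))       # True
print(registry.is_preset("user_b"))       # False
```

A user-entered preset would simply be added to `presets` directly, bypassing the counter.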
Step S3000: performing image processing on the second figure information to obtain a processed image.
In this embodiment, when image processing is performed on the second figure information, the processing object is not limited to the second figure information itself; image processing is also performed on the human body information corresponding to the second figure information.
Specifically, when the second figure information is obtained, the human body information corresponding to the second figure information is identified at the same time; the human body information includes the second figure information. When the second figure information is obtained, the human body features corresponding to the second figure information are acquired, and the human body information corresponding to the second figure information can be recognized according to the human body features. When the human body information is obtained, image processing is performed on the human body information; the image processing may be image-editing operations such as special-effect processing, cropping and beautification.
The human body information includes a skin region and a non-skin region. The skin region is the exposed part of the human skin in the shooting image, such as the face, hands and neck; the non-skin region is the non-exposed part of the human skin in the shooting image, such as the region covered by hair. In the process of performing image processing on the human body information, the human body information should be divided by region recognition; the skin region and the non-skin region in the current human body information can be divided by means of a thermal image. The temperature of each part of the human body surface differs, and the temperature difference between the exposed and non-exposed regions of the human body is especially obvious; therefore, by obtaining the thermal image corresponding to the human body information, the temperature of each position can be judged from the shade of each pixel in the thermal image, and it can further be determined whether the region to which a position belongs is a skin region or a non-skin region.
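A minimal sketch of the thermal-image division (the intensity threshold and the convention that warmer exposed skin maps to brighter pixels are assumptions; the embodiment only states that temperature is judged from pixel shade):

```python
import numpy as np

def split_skin_regions(thermal, skin_temp_threshold=0.6):
    """Divide a normalized thermal image (values in [0, 1]) into a skin
    mask and a non-skin mask. Brighter pixels are assumed to be warmer,
    i.e. exposed skin; the threshold value is illustrative."""
    skin_mask = thermal >= skin_temp_threshold
    non_skin_mask = ~skin_mask
    return skin_mask, non_skin_mask

# Toy 2x3 thermal map: left column "hot" (exposed skin), rest cooler.
thermal = np.array([[0.9, 0.3, 0.2],
                    [0.8, 0.4, 0.1]])
skin, non_skin = split_skin_regions(thermal)
print(skin.sum(), non_skin.sum())  # 2 4
```

The two masks are complementary, which is what lets the non-skin mask later act as a protection region.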
When the skin region and the non-skin region are determined, the non-skin region is protected, for example by a region-locking operation; in this way, when image processing is performed on the skin region, related mis-processing of the non-skin region at the same time is avoided. When the skin region is processed, the user may select a corresponding image processing instruction; the image processing instruction includes automatic processing and manual processing. Specifically, when an automatic-processing instruction triggered by the user on the interface of the terminal to which the shooting device belongs is received, image blurring is performed on the skin region according to the automatic-processing instruction to obtain a blurred image; the blurred image and the shooting image are fused to obtain a fused image, and sharpening is performed on the fused image to obtain the processed image. When a manual-processing instruction triggered by the user on the interface of the terminal to which the shooting device belongs is received, the corresponding processing parameters are obtained according to the manual-processing instruction, and corresponding image processing operations are performed on the human body information according to the processing parameters. On the basis of the processed image, the user may also further adjust the effect and degree of the processing.
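The two instruction types can be sketched as a small dispatcher (the instruction names, the operation names and the parameter dictionary are assumptions standing in for the actual image operations):

```python
def plan_processing(instruction, params=None):
    """Return the ordered list of operations for an image processing
    instruction. 'auto' is the fixed blur-fuse-sharpen pipeline described
    in the embodiment; 'manual' applies user-chosen processing parameters."""
    if instruction == "auto":
        return ["blur_skin_region", "fuse_with_shooting_image", "sharpen"]
    if instruction == "manual":
        if not params:
            raise ValueError("manual processing requires processing parameters")
        return [f"apply:{name}={value}" for name, value in sorted(params.items())]
    raise ValueError(f"unknown image processing instruction: {instruction}")

print(plan_processing("auto"))
print(plan_processing("manual", {"smoothness": 0.5}))
```

The non-skin region is never named in either plan, reflecting the region-locking protection.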
In addition, the user may also manually select regions of the shooting image on the interface of the terminal to which the shooting device belongs. Specifically, when a region determination instruction selected by the user is received, the first region corresponding to the region determination instruction is obtained, and the state of the first region is set to adjustable; the second region other than the first region is set to non-adjustable. When an image processing instruction is further received, the corresponding image processing operation is performed only on the first region according to the image processing instruction.
In the facial image processing method proposed in this embodiment, when a shooting image is obtained, the first figure information in the shooting image is detected; then, when a regional image processing instruction is received, the second figure information matching the preset figure information is determined in the first figure information; image processing is then performed on the second figure information to obtain a processed image. Image processing of the second figure information matching the preset figure information is thus achieved, so that when image processing is performed on a picture containing multiple portraits, the figure information matching the preset figure information can be automatically identified and processed automatically or manually as required, and the image processing effect better meets the actual needs of the user.
Based on the first embodiment, a second embodiment of the processing method is proposed. Referring to Fig. 4, in this embodiment, step S2000 includes:
Step S2100: obtaining the portrait matching degree between the first figure information and the preset figure information;
Step S2200: when the portrait matching degree exceeds a preset matching degree, determining that second figure information matching the preset figure information exists in the first figure information.
In this embodiment, the first figure information is all the figure information in the shooting image; when it is detected that the shooting image contains figure information, all the detected figure information is the first figure information. In the first figure information, figure information matching the preset figure information may exist, or no such matching figure information may exist; therefore, when the regional image processing instruction triggered on the display interface is received, the portrait matching degree between the first figure information and the preset figure information is obtained.
According to the portrait matching degree, it is judged whether the first figure information reaches a certain matching degree with the preset figure information. When the portrait matching degree between the first figure information in the shooting image and the preset figure information exceeds the preset matching degree, it is determined that second figure information matching the preset figure information exists in the first figure information; when the portrait matching degree between the first figure information in the shooting image and the preset figure information is lower than the preset matching degree, it is determined that no second figure information matching the preset figure information exists in the first figure information.
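A sketch of this threshold comparison (cosine similarity between feature vectors stands in for the portrait matching degree, and the threshold value is an assumption; the embodiment does not fix a particular matching metric):

```python
import numpy as np

def match_second_figures(first_figures, preset_feature, preset_matching_degree=0.8):
    """Return the labels of the first figure information whose matching
    degree with the preset figure feature exceeds the preset matching
    degree. Each entry of first_figures is (label, feature_vector)."""
    matched = []
    preset = preset_feature / np.linalg.norm(preset_feature)
    for label, feature in first_figures:
        f = feature / np.linalg.norm(feature)
        matching_degree = float(np.dot(f, preset))
        if matching_degree > preset_matching_degree:
            matched.append(label)
    return matched

preset = np.array([1.0, 0.0, 0.0])
figures = [("A", np.array([0.95, 0.1, 0.0])),   # close to the preset feature
           ("B", np.array([0.0, 1.0, 0.0]))]    # orthogonal to the preset
print(match_second_figures(figures, preset))     # ['A']
```

An empty result corresponds to the case where no second figure information exists in the first figure information.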
In the facial image processing method proposed in this embodiment, the portrait matching degree between the first figure information and the preset figure information is obtained; then, when the portrait matching degree exceeds the preset matching degree, it is determined that second figure information matching the preset figure information exists in the first figure information. Automatic determination of the second figure information within the first figure information is thus achieved, so that when the shooting image is processed, the figure information matching the information stored in advance by the user can be automatically identified, individual adjustment of the determined figure information is further achieved, and applying uniform image processing to the whole image in every round of image processing is avoided.
Based on the first embodiment, a third embodiment of the processing method is proposed. Referring to Fig. 5, in this embodiment, step S3000 includes:
Step S3100: identifying the human body information corresponding to the second figure information;
Step S3200: performing image processing on the human body information.
In this embodiment, when the second figure information matching the preset figure information is determined, the human body information corresponding to the second figure information is identified at the same time; the human body information includes the second figure information. When the second figure information is obtained, the human body features corresponding to the second figure information are acquired, and the corresponding human body information can be recognized according to the human body features. When the human body information is obtained, image processing is performed on the human body information; the image processing may be image-editing operations such as special-effect processing, cropping and beautification.
Specifically, the human body information includes a skin region and a non-skin region. The skin region is the exposed part of the human skin in the shooting image, such as the face, hands and neck; the non-skin region is the non-exposed part of the human skin in the shooting image, such as the region covered by hair. In the process of performing image processing on the human body information, the human body information should be divided by region recognition; the skin region and the non-skin region in the current human body information can be divided by means of a thermal image. By obtaining the thermal image corresponding to the human body information, the temperature of each position can be judged from the shade of each pixel in the thermal image, and it can further be determined whether the region to which a position belongs is a skin region or a non-skin region.
When the skin region and the non-skin region are determined, the non-skin region is protected, for example by a region-locking operation; in this way, when image processing is performed on the skin region, related mis-processing of the non-skin region at the same time is avoided. When the skin region is processed, the user may select a corresponding image processing instruction; the image processing instruction includes automatic processing and manual processing. Specifically, when an automatic-processing instruction triggered by the user on the interface of the terminal to which the shooting device belongs is received, image blurring is performed on the skin region according to the automatic-processing instruction to obtain a blurred image; the blurred image and the shooting image are fused to obtain a fused image, and sharpening is performed on the fused image to obtain the processed image. When a manual-processing instruction triggered by the user on the interface of the terminal to which the shooting device belongs is received, the corresponding processing parameters are obtained according to the manual-processing instruction, and corresponding image processing operations are performed on the human body information according to the processing parameters.
In the facial image processing method proposed in this embodiment, the human body information corresponding to the second figure information is identified, and image processing is then performed on the human body information. Processing of the human body information corresponding to the determined second figure information is thus achieved, so that in the image processing process the human body region can be processed more completely instead of being limited to the face alone, meeting the needs of the user to a greater degree.
Based on the third embodiment, a fourth embodiment of the processing method is proposed. Referring to Fig. 6, in this embodiment, step S3100 includes:
Step S3110: obtaining the human body features corresponding to the second figure information;
Step S3120: identifying the human body information corresponding to the second figure information based on the human body features.
In this embodiment, the human body features are the feature information used to identify the second figure information; the human body contour corresponding to the second figure information can be detected according to the human body features, so that the human body information corresponding to the current second figure information is detected. Specifically, HOG (Histogram of Oriented Gradients) features are extracted from samples, and the features are fed into an SVM (Support Vector Machine) classifier for training to obtain a corresponding training model; further sample training is performed on the training model to obtain the final detection model. When the human body features corresponding to the second figure information are obtained, the human body information corresponding to the current second figure information can be detected by the built-in detection model.
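As an illustrative sketch of the HOG feature step (a single whole-image orientation histogram; real HOG accumulates per-cell histograms with block normalization, and the bin count here is the conventional 9):

```python
import numpy as np

def hog_descriptor(image, n_bins=9):
    """Whole-image gradient orientation histogram, a simplification of
    the HOG descriptor: gradients are binned by unsigned orientation
    and weighted by gradient magnitude, then L2-normalized."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned, [0, pi)
    bins = np.minimum((orientation / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=magnitude.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical edge yields purely horizontal gradients, so all the energy
# falls into the first orientation bin.
vertical_edge = np.tile([0.0] * 4 + [1.0] * 4, (8, 1))
descriptor = hog_descriptor(vertical_edge)
print(descriptor.argmax())  # 0
```

In the pipeline described above, such descriptors computed for labeled positive and negative samples would be used to train the SVM classifier, and the trained detection model would then be scanned over the shooting image to locate the human body contour.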
In the facial image processing method proposed in this embodiment, the human body features corresponding to the second figure information are obtained, and the human body information corresponding to the second figure information is then identified based on the human body features. Identification of the human body information corresponding to the second figure information is thus achieved, so that the corresponding human body information can be accurately recognized from the second figure information; image processing of the human body information is further achieved and the processing range is expanded, so that when the figure information is processed, the human body part corresponding to the figure information can be processed uniformly as well.
Based on the third embodiment, a fifth embodiment of the processing method is proposed. Referring to Fig. 7, in this embodiment, step S3200 includes:
Step S3210: detecting the skin region and the non-skin region in the human body information;
Step S3220: performing image processing on the skin region, and performing image protection on the non-skin region.
In this embodiment, the human body information includes a skin region and a non-skin region. The skin region is the exposed part of the human skin in the shooting image, such as the face, hands and neck; the non-skin region is the non-exposed part of the human skin in the shooting image, such as the region covered by hair. In the process of performing image processing on the human body information, when region recognition is performed on the human body information, the skin region and the non-skin region in the current human body information can be divided by means of a thermal image. The temperature of each part of the human body surface differs, and the temperature difference between the exposed and non-exposed regions of the human body is especially obvious; therefore, by obtaining the thermal image corresponding to the human body information, the temperature of each position can be judged from the shade of each pixel in the thermal image, and it can further be determined whether the region to which a position belongs is a skin region or a non-skin region.
When the skin region and the non-skin region are determined, the non-skin region is protected, for example by a region-locking operation; in this way, when image processing is performed on the skin region, related mis-processing of the non-skin region at the same time is avoided. When the skin region is processed, the user may select a corresponding image processing instruction; the image processing instruction includes automatic processing and manual processing. Specifically, when an automatic-processing instruction triggered by the user on the interface of the terminal to which the shooting device belongs is received, image blurring is performed on the skin region according to the automatic-processing instruction to obtain a blurred image; the blurred image and the shooting image are fused to obtain a fused image, and sharpening is performed on the fused image to obtain the processed image. When a manual-processing instruction triggered by the user on the interface of the terminal to which the shooting device belongs is received, the corresponding processing parameters are obtained according to the manual-processing instruction, and corresponding image processing operations are performed on the human body information according to the processing parameters.
In the facial image processing method proposed in this embodiment, the skin region and the non-skin region in the human body information are detected; image processing is then performed on the skin region, and image protection is performed on the non-skin region. Processing only the skin region when the human body information is recognized, together with protection of the non-skin region, is thus achieved, so that the skin region of the human body obtains a uniform processing effect in the portrait without affecting the non-skin region, making the processed image more natural.
Based on the fifth embodiment, a sixth embodiment of the processing method is proposed. Referring to Fig. 8, in this embodiment, step S3220 includes:
Step S3221: performing image blurring on the skin region to obtain a blurred image;
Step S3222: generating the processed image according to the blurred image and the shooting image.
In this embodiment, when the skin region is processed, the user may select a corresponding image processing instruction; the image processing instruction includes automatic processing and manual processing. When an automatic-processing instruction triggered by the user on the interface of the terminal to which the shooting device belongs is received, image blurring is performed on the skin region according to the automatic-processing instruction to obtain a blurred image; the blurred image and the shooting image are fused to obtain a fused image, and sharpening is performed on the fused image to obtain the processed image.
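A sketch of the blur-fuse-sharpen pipeline on a single-channel image (the box-blur kernel, fusion weight and unsharp-mask strength are assumptions; the embodiment does not specify the exact operators):

```python
import numpy as np

def box_blur(image, radius=1):
    """Box blur with edge padding; stands in for the image blurring step."""
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros_like(image, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / size ** 2

def beautify_skin(image, skin_mask, fuse_weight=0.7):
    """Blur the skin region, fuse the blurred image with the original
    shooting image, sharpen the fused image, and leave the non-skin
    region untouched (the region protection described above)."""
    blurred = box_blur(image)
    fused = np.where(skin_mask,
                     fuse_weight * blurred + (1 - fuse_weight) * image,
                     image)
    sharpened = fused + 0.5 * (fused - box_blur(fused))  # unsharp mask
    sharpened = np.clip(sharpened, 0.0, 1.0)
    return np.where(skin_mask, sharpened, image)

rng = np.random.default_rng(0)
image = rng.random((6, 6))
skin_mask = np.zeros((6, 6), dtype=bool)
skin_mask[2:5, 2:5] = True
result = beautify_skin(image, skin_mask)
print(result.shape)  # (6, 6)
```

For a color shooting image the same pipeline would be applied per channel, with the skin mask shared across channels.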
In the facial image processing method proposed in this embodiment, image blurring is performed on the skin region to obtain a blurred image, and the processed image is then generated according to the blurred image and the shooting image. Automatic beautification of the human skin region by means such as blurring is thus achieved, saving the time the user would spend adjusting parameters, so that the user can quickly obtain the processed effect image.
Based on the first embodiment, a seventh embodiment of the processing method is proposed. Referring to Fig. 9, in this embodiment, step S1000 includes:
Step S1100: detecting the portrait feature information in the shooting image;
Step S1200: detecting the first figure information in the shooting image based on the portrait feature information.
In this embodiment, when the shooting image is obtained, the portrait feature information in the shooting image is detected, and the first figure information in the shooting image is detected according to the portrait feature information. Specifically, the first figure information in the shooting image can be identified by means such as principal component analysis or geometric features. If facial feature information can be recognized in the shooting image, it is determined that figure information exists in the shooting image; if facial feature information cannot be recognized in the shooting image, it is determined that no figure information exists in the shooting image. Here, the figure information is the first figure information in the shooting image.
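As an illustrative sketch of the principal-component-analysis route (the synthetic training patterns, component count and error threshold are all assumptions), face presence can be decided by how well a flattened image patch is reconstructed from a learned face subspace:

```python
import numpy as np

def fit_face_subspace(face_samples, n_components=2):
    """Fit an 'eigenface' subspace (mean plus top principal components)
    from flattened face samples, via SVD of the centered data."""
    mean = face_samples.mean(axis=0)
    _, _, vt = np.linalg.svd(face_samples - mean, full_matrices=False)
    return mean, vt[:n_components]

def looks_like_face(patch, mean, components, max_relative_error=0.5):
    """Accept a patch as a face when its reconstruction error in the face
    subspace is small relative to its distance from the mean face."""
    centered = patch - mean
    reconstruction = centered @ components.T @ components
    error = np.linalg.norm(centered - reconstruction)
    return error < max_relative_error * max(np.linalg.norm(centered), 1e-12)

# Synthetic "faces": combinations of two fixed patterns plus slight noise.
d = 16
dir1 = np.array([1.0, -1.0] * (d // 2))
dir2 = np.ones(d)
rng = np.random.default_rng(1)
coeffs = rng.standard_normal((20, 2))
coeffs -= coeffs.mean(axis=0)                 # keep the mean face near zero
faces = np.outer(coeffs[:, 0], dir1) + np.outer(coeffs[:, 1], dir2) \
        + 0.01 * rng.standard_normal((20, d))
mean, comps = fit_face_subspace(faces)
face_patch = 0.3 * dir1 - 0.7 * dir2          # lies in the learned subspace
non_face_patch = np.eye(d)[0]                 # far from the subspace
print(looks_like_face(face_patch, mean, comps),
      looks_like_face(non_face_patch, mean, comps))  # True False
```

A real detector would slide this test over candidate windows of the shooting image; geometric-feature methods mentioned above would instead check the relative positions of facial landmarks.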
In the facial image processing method proposed in this embodiment, the portrait feature information in the shooting image is detected, and the first figure information in the shooting image is then detected based on the portrait feature information. Determination of the first figure information when the shooting image is obtained is thus achieved; pictures containing no figure information are preliminarily screened out, saving time in subsequent image processing and improving the efficiency of image processing.
Based on the first embodiment, an eighth embodiment of the processing method is proposed. Referring to Fig. 10, in this embodiment, after step S3000 the processing method further includes:
Step S4000: when an adjustment instruction is received, obtaining adjustment parameters and adjusting the processed image based on the adjustment parameters.
In this embodiment, before the processed image is saved, the user may adjust each parameter in the processing procedure again so as to achieve a more satisfactory effect. Specifically, when an adjustment instruction triggered by the user on the interface of the terminal to which the shooting device belongs is received, the adjustment region in the processed image corresponding to the adjustment instruction is determined. The adjustment region can be divided into three kinds: first, the skin region in the determined human body information; second, the dynamic region manually selected by the user; third, the fixed region other than the skin region and the dynamic region. Different adjustment instructions correspond to different adjustment regions; the received adjustment instruction is analyzed to obtain the corresponding adjustment region, the adjustment parameters corresponding to the adjustment instruction are obtained at the same time, and operations such as custom adjustment and modification are further performed on the corresponding adjustment region according to the adjustment parameters.
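The three-region dispatch can be sketched as follows (the region names, the instruction format and the brightness parameter are assumptions used only for illustration):

```python
import numpy as np

REGIONS = ("skin", "dynamic", "fixed")

def adjust_region(image, masks, instruction):
    """Apply an adjustment only inside the region named by the instruction.
    `masks` maps each region name to a boolean mask; here the adjustment
    is a simple brightness offset clipped to [0, 1]."""
    region = instruction["region"]
    if region not in REGIONS:
        raise ValueError(f"unknown adjustment region: {region}")
    adjusted = image.copy()
    mask = masks[region]
    adjusted[mask] = np.clip(adjusted[mask] + instruction["brightness"], 0.0, 1.0)
    return adjusted

image = np.full((4, 4), 0.5)
masks = {"skin": np.zeros((4, 4), bool),
         "dynamic": np.zeros((4, 4), bool),
         "fixed": np.zeros((4, 4), bool)}
masks["skin"][:2] = True      # top half: skin region
masks["fixed"][2:] = True     # bottom half: fixed region
out = adjust_region(image, masks, {"region": "skin", "brightness": 0.2})
print(out[0, 0], out[3, 3])   # 0.7 0.5
```

Because the adjustment is keyed by region, repeated adjustment instructions compose naturally before the processed image is finally saved.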
In the facial image processing method proposed in this embodiment, when an adjustment instruction is received, the adjustment parameters are obtained and the processed image is adjusted based on the adjustment parameters. Custom adjustment of the processed image according to the user-defined adjustment parameters is thus achieved, so that the finally adjusted picture better conforms to the user's requirements.
In addition, an embodiment of the present invention also proposes a facial image processing device. The facial image processing device includes: a memory, a processor, and a facial image processing program stored in the memory and executable on the processor. When the facial image processing program is executed by the processor, the following operations are implemented:
when a shooting image is obtained, detecting the first figure information in the shooting image;
when a regional image processing instruction is received, determining second figure information matching preset figure information in the first figure information;
performing image processing on the second figure information to obtain a processed image.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
obtaining the portrait matching degree between the first figure information and the preset figure information;
when the portrait matching degree exceeds a preset matching degree, determining that second figure information matching the preset figure information exists in the first figure information.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
identifying the human body information corresponding to the second figure information;
performing image processing on the human body information.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
obtaining the human body features corresponding to the second figure information;
identifying the human body information corresponding to the second figure information based on the human body features.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
detecting the skin region and the non-skin region in the human body information;
performing image processing on the skin region, and performing image protection on the non-skin region.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
performing image blurring on the skin region to obtain a blurred image;
generating the processed image according to the blurred image and the shooting image.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
detecting the portrait feature information in the shooting image;
detecting the first figure information in the shooting image based on the portrait feature information.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
when an adjustment instruction is received, obtaining adjustment parameters and adjusting the processed image based on the adjustment parameters.
In addition, to achieve the above object, the present invention also proposes a computer-readable storage medium. A facial image processing program is stored on the computer-readable storage medium, and when the facial image processing program is executed by a processor, the following operations are implemented:
when a shooting image is obtained, detecting the first figure information in the shooting image;
when a regional image processing instruction is received, determining second figure information matching preset figure information in the first figure information;
performing image processing on the second figure information to obtain a processed image.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
obtaining the portrait matching degree between the first figure information and the preset figure information;
when the portrait matching degree exceeds a preset matching degree, determining that second figure information matching the preset figure information exists in the first figure information.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
identifying the human body information corresponding to the second figure information;
performing image processing on the human body information.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
obtaining the human body features corresponding to the second figure information;
identifying the human body information corresponding to the second figure information based on the human body features.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
detecting the skin region and the non-skin region in the human body information;
performing image processing on the skin region, and performing image protection on the non-skin region.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
performing image blurring on the skin region to obtain a blurred image;
generating the processed image according to the blurred image and the shooting image.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
detecting the portrait feature information in the shooting image;
detecting the first figure information in the shooting image based on the portrait feature information.
Further, when the facial image processing program is executed by the processor, the following operations are also implemented:
when an adjustment instruction is received, obtaining adjustment parameters and adjusting the processed image based on the adjustment parameters.
It should be noted that, herein, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or system including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or system that includes the element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) as described above, and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the present invention. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

  1. A kind of 1. facial image processing method, which is characterized in that the facial image processing method includes:
    When getting shooting image, the first figure information in the shooting image is detected;
    When receiving area image process instruction, determine in first figure information with default figure information matched Two figure informations;
    Image procossing is carried out to second figure information, obtains processing image.
  2. 2. facial image processing method as described in claim 1, which is characterized in that it is described determine in first figure information with The step of default figure information matched second figure information, includes:
    Obtain the portrait matching degree of first figure information and default figure information;
    When the portrait matching degree is spent more than preset matching, determine exist and the default people in first figure information As the second figure information of information matches.
  3. The facial image processing method according to claim 1, characterized in that the step of performing image processing on the second portrait information comprises:
    identifying human body information corresponding to the second portrait information;
    performing image processing on the human body information.
  4. The facial image processing method according to claim 3, characterized in that the step of identifying the human body information corresponding to the second portrait information comprises:
    obtaining human body features corresponding to the second portrait information;
    identifying the human body information corresponding to the second portrait information based on the human body features.
  5. The facial image processing method according to claim 3, characterized in that the step of performing image processing on the human body information comprises:
    detecting a skin region and a non-skin region in the human body information;
    performing image processing on the skin region, and performing image protection on the non-skin region.
  6. The facial image processing method according to claim 5, characterized in that the step of performing image processing on the skin region further comprises:
    performing image blurring on the skin region to obtain a blurred image;
    generating the processed image according to the blurred image and the captured image.
  7. The facial image processing method according to claim 1, characterized in that the step of detecting the first portrait information in the captured image comprises:
    detecting portrait feature information in the captured image;
    detecting the first portrait information in the captured image based on the portrait feature information.
  8. The facial image processing method according to claim 1, characterized in that, after the step of performing image processing on the second portrait information to obtain the processed image, the facial image processing method further comprises:
    when an adjustment instruction is received, obtaining an adjustment parameter, and adjusting the processed image based on the adjustment parameter.
  9. A facial image processing device, characterized in that the facial image processing device comprises: a memory, a processor, and a facial image processing program stored on the memory and executable on the processor, wherein the facial image processing program, when executed by the processor, implements the steps of the facial image processing method according to any one of claims 1 to 8.
  10. A computer-readable storage medium, characterized in that a facial image processing program is stored on the computer-readable storage medium, wherein the facial image processing program, when executed by a processor, implements the steps of the facial image processing method according to any one of claims 1 to 8.
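Read as an algorithm, claims 1 to 8 above describe one pipeline: match detected portrait information against preset portrait information by a matching degree (claim 2), split the matched human body region into skin and non-skin pixels (claim 5), blur only the skin region while protecting the non-skin region, and generate the processed image from the blurred image and the captured image (claim 6). The sketch below illustrates that flow with NumPy only; the cosine-similarity matcher, the color-threshold skin test, the box filter, and all function names are simplified stand-ins chosen for illustration, not the detectors or implementation actually disclosed in this application.

```python
import numpy as np

def match_portrait(candidate, preset, threshold=0.8):
    """Claim 2: accept a candidate portrait when its matching degree
    against the preset portrait information exceeds a preset threshold.
    Here 'matching degree' is a toy cosine similarity of feature vectors."""
    a = candidate / (np.linalg.norm(candidate) + 1e-8)
    b = preset / (np.linalg.norm(preset) + 1e-8)
    degree = float(np.dot(a, b))
    return degree > threshold, degree

def skin_mask(image):
    """Claim 5: split the human body region into skin and non-skin pixels.
    A crude stand-in: treat warm, sufficiently bright RGB pixels as skin."""
    r = image[..., 0].astype(np.int32)
    g = image[..., 1].astype(np.int32)
    b = image[..., 2].astype(np.int32)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def box_blur(image, k=5):
    """Claim 6: image blurring, here a simple k x k box filter per channel."""
    pad = k // 2
    padded = np.pad(image.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(image.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return (out / (k * k)).astype(image.dtype)

def process_portrait(image):
    """Claims 5-6: blur the skin region, protect (keep) the non-skin
    region, and generate the processed image from the blurred image
    and the original captured image."""
    mask = skin_mask(image)
    blurred = box_blur(image)
    result = image.copy()
    result[mask] = blurred[mask]
    return result
```

In this sketch, raising `threshold` in `match_portrait` plays the role of the "preset matching degree" of claim 2, and the masked copy in `process_portrait` is one way to realize the "image protection" of the non-skin region in claim 5: protected pixels are simply never overwritten.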
CN201711497876.1A 2017-12-29 2017-12-29 Facial image processing method, apparatus and computer readable storage medium Pending CN108229389A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711497876.1A CN108229389A (en) 2017-12-29 2017-12-29 Facial image processing method, apparatus and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN108229389A 2018-06-29

Family

ID=62642429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711497876.1A Pending CN108229389A (en) 2017-12-29 2017-12-29 Facial image processing method, apparatus and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108229389A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110280471A1 (en) * 2006-04-20 2011-11-17 Kabushiki Kaisha Toshiba Image processing apparatus, image processing method, defect detection method, semiconductor device manufacturing method, and program
CN105913389A (en) * 2016-04-07 2016-08-31 广东欧珀移动通信有限公司 Image processing method and device for skin abnormity
CN107274355A (en) * 2017-05-22 2017-10-20 奇酷互联网络科技(深圳)有限公司 image processing method, device and mobile terminal


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740522A (en) * 2018-12-29 2019-05-10 广东工业大学 A kind of personnel's detection method, device, equipment and medium
CN112102360A (en) * 2020-08-17 2020-12-18 深圳数联天下智能科技有限公司 Action type identification method and device, electronic equipment and medium
CN112102360B (en) * 2020-08-17 2023-12-12 深圳数联天下智能科技有限公司 Action type identification method and device, electronic equipment and medium
CN112036310A (en) * 2020-08-31 2020-12-04 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and storage medium
WO2022042680A1 (en) * 2020-08-31 2022-03-03 北京字节跳动网络技术有限公司 Picture processing method and apparatus, device, and storage medium
US11900726B2 (en) 2020-08-31 2024-02-13 Beijing Bytedance Network Technology Co., Ltd. Picture processing method and apparatus, device, and storage medium

Similar Documents

Publication Publication Date Title
CN108495056A (en) Photographic method, mobile terminal and computer readable storage medium
CN108269230A (en) Certificate photo generation method, mobile terminal and computer readable storage medium
CN108900778A (en) A kind of image pickup method, mobile terminal and computer readable storage medium
CN108600647A (en) Shooting preview method, mobile terminal and storage medium
CN108900780A (en) A kind of screen light compensation method, mobile terminal and storage medium
CN108540641A (en) Combination picture acquisition methods, flexible screen terminal and computer readable storage medium
CN107948430A (en) A kind of display control method, mobile terminal and computer-readable recording medium
CN107124552A (en) A kind of image pickup method, terminal and computer-readable recording medium
CN109672822A (en) A kind of method for processing video frequency of mobile terminal, mobile terminal and storage medium
CN108055463A (en) Image processing method, terminal and storage medium
CN108172161A (en) Display methods, mobile terminal and computer readable storage medium based on flexible screen
CN108257097A (en) U.S. face effect method of adjustment, terminal and computer readable storage medium
CN108197206A (en) Expression packet generation method, mobile terminal and computer readable storage medium
CN107979727A (en) A kind of document image processing method, mobile terminal and computer-readable storage medium
CN108200332A (en) A kind of pattern splicing method, mobile terminal and computer readable storage medium
CN108196777A (en) A kind of flexible screen application process, equipment and computer readable storage medium
CN108229389A (en) Facial image processing method, apparatus and computer readable storage medium
CN107992824A (en) Take pictures processing method, mobile terminal and computer-readable recording medium
CN107613206A (en) A kind of image processing method, mobile terminal and computer-readable recording medium
CN107241504A (en) A kind of image processing method, mobile terminal and computer-readable recording medium
CN109005354A (en) Image pickup method, mobile terminal and computer readable storage medium
CN108900765A (en) A kind of shooting based reminding method, mobile terminal and computer readable storage medium
CN107203278A (en) One-handed performance input method, mobile terminal and storage medium
CN107368253A (en) Picture Zoom display method, mobile terminal and storage medium
CN110177206A (en) Image pickup method, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180629