CN106210521A - Photographing method and terminal - Google Patents
- Publication number: CN106210521A
- Application number: CN201610562358.2A
- Authority: CN (China)
- Prior art keywords: face, feature information, parameter, facial feature, terminal
- Prior art date: 2016-07-15
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
        - H04N23/60—Control of cameras or camera modules
          - H04N23/61—Control of cameras or camera modules based on recognised objects
            - H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
          - H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
        - H04N23/80—Camera processing pipelines; Components thereof
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
An embodiment of the invention discloses a photographing method and a terminal. The method includes: obtaining facial feature information of a target face requiring beautification in a preview image; obtaining the beautification parameters corresponding to the facial feature information of the target face; and taking the photograph according to the beautification parameters. Because the terminal obtains a separate set of beautification parameters for each target face, every person in a group photo can be given the beautification that suits them best, which improves the overall effect of the captured photo.
Description
Technical field
The present invention relates to the field of electronic technology, and in particular to a photographing method and a terminal.
Background art
With the development of intelligent terminals, more and more of them, such as smartphones and tablet computers, can be used for beautified ("beauty-mode") photographing.
However, when an existing intelligent terminal takes a beautified photo, once the user has set a beautification level, the terminal applies the same beautification parameters to everyone in the preview image. When several people whose skin tones or facial proportions differ considerably pose for a group photo, beautifying them all with the same parameters cannot achieve the best result for each person, so the overall effect of the captured photo is unsatisfactory.
Summary of the invention
Embodiments of the present invention provide a photographing method and a terminal that can give each person the beautification that suits them best when several people are photographed together in beauty mode.
In a first aspect, an embodiment of the invention provides a photographing method, which includes:
obtaining facial feature information of a target face requiring beautification in a preview image;
obtaining the beautification parameters corresponding to the facial feature information of the target face;
taking the photograph according to the beautification parameters.
In another aspect, an embodiment of the invention provides a terminal, which includes:
a first acquiring unit, configured to obtain facial feature information of a target face requiring beautification in a preview image;
a second acquiring unit, configured to obtain the beautification parameters corresponding to the facial feature information of the target face;
a shooting unit, configured to take the photograph according to the beautification parameters.
In the above scheme, the terminal obtains the facial feature information of the target face requiring beautification in the preview image, obtains the beautification parameters corresponding to that facial feature information, and takes the photograph according to those parameters. Because the terminal obtains a separate set of beautification parameters for each target face, every person in a group photo can be given the beautification that suits them best, which improves the overall effect of the captured photo.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a photographing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a photographing method provided by another embodiment of the present invention;
Fig. 3 is a schematic block diagram of a terminal provided by an embodiment of the present invention;
Fig. 4 is a schematic block diagram of a terminal provided by another embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the invention.
It should be understood that, when used in this specification and the appended claims, the terms "include" and "comprise" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this description is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the present invention includes, but is not limited to, portable devices such as mobile phones, laptop computers or tablet computers that have a touch-sensitive surface (for example, a touch-screen display and/or a touch pad). It should also be understood that, in some embodiments, the device may not be a portable communication device but a desktop computer with a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
The following discussion describes a terminal that includes a display and a touch-sensitive surface. It should be understood, however, that the terminal may also include one or more other physical user-interface devices such as a physical keyboard, a mouse and/or a joystick.
The terminal supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conference application, an e-mail application, an instant-messaging application, an exercise-support application, a photo-management application, a digital camera application, a digital video camera application, a web-browsing application, a digital music player application and/or a video player application.
The various applications executable on the terminal may share at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, and the corresponding information displayed on the terminal, may be adjusted and/or changed between applications and/or within a given application. In this way, a common physical architecture of the terminal (such as the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Refer to Fig. 1, a schematic flowchart of a photographing method provided by an embodiment of the present invention. The execution subject of the photographing method in this embodiment is a terminal with a beautification photographing function. The terminal may be a camera, or a mobile terminal such as a smartphone or a tablet computer. As shown in Fig. 1, the photographing method may include the following steps.
S101: Obtain the facial feature information of the target face(s) requiring beautification in the preview image.
When the user takes a beautified photo with a terminal that has a beautification function, the user can select, in the preview image displayed on the terminal screen, the target face(s) that need beautification, and the terminal obtains the facial feature information of those target faces. One or more target faces may require beautification, depending on the user's actual needs.
Further, step S101 may include: determining, from the multiple faces in the preview image, the target face(s) requiring beautification, and obtaining the facial feature information of the target face(s).
For example, when several people take a beautified photo together, if the user chooses to beautify all of the faces in the preview image, then all of the faces are target faces and the terminal obtains the facial feature information of every face in the preview image. If the user chooses to beautify only one of the faces in the preview image, that face is the target face; the terminal determines it from the multiple faces in the preview image and obtains its facial feature information.
Different faces have different facial feature information. Facial feature information includes, but is not limited to, skin information, facial landmark information and facial contour information. Skin information includes, but is not limited to, a skin tone value, skin smoothness and so on. In a specific implementation, the facial image can first be captured by the terminal's camera, and the facial feature information can then be obtained by face recognition. An illustrative data model is sketched below.
S102: Obtain the beautification parameters corresponding to the facial feature information of the target face(s).
The terminal obtains the beautification parameters corresponding to the facial feature information of each target face; different facial feature information corresponds to different beautification parameters. Beautification parameters include, but are not limited to, a whitening parameter, a skin-smoothing ("buffing") parameter, a face-slimming parameter, a blemish-removal parameter and so on.
Further, step S102 may include: obtaining the beautification parameters corresponding to the facial feature information of the target face from a pre-stored correspondence table of facial feature information and beautification parameters.
The correspondence table is stored in advance in a storage unit of the terminal, and different facial feature information corresponds to different beautification parameters.
Specifically, when a user takes a beautified photo alone, that user can configure their own beautification parameters. The terminal obtains the facial feature information of the single person in the preview image together with the beautification parameters set by the user, establishes a mapping between them, stores the facial feature information with the corresponding beautification parameters, and thereby builds the correspondence table. Later, when several people take a beautified photo together, the terminal obtains the beautification parameters corresponding to each target face's facial feature information directly from this pre-stored correspondence table, roughly as sketched below.
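As an illustration only, a correspondence table of this kind might look like the following minimal sketch, which reuses `FacialFeatureInfo` and `BeautyParams` from the sketch in step S101. The patent does not specify how a face in a new preview is matched against a stored record; the nearest-neighbour distance and tolerance used here are assumptions.

```python
from typing import List, Optional, Tuple


class BeautyParamTable:
    """Pre-stored correspondence table: facial feature information -> beautification parameters."""

    def __init__(self) -> None:
        self._entries: List[Tuple[FacialFeatureInfo, BeautyParams]] = []

    def store(self, features: FacialFeatureInfo, params: BeautyParams) -> None:
        """Remember the parameters a user configured during a single-person beauty shot."""
        self._entries.append((features, params))

    def lookup(self, features: FacialFeatureInfo,
               tolerance: float = 5.0) -> Optional[BeautyParams]:
        """Return the stored parameters of the closest matching face, or None.

        Faces with no close-enough entry are left unprocessed during the group
        shot, as the description notes.
        """
        best_distance, best_params = None, None
        for stored, params in self._entries:
            distance = (abs(stored.skin_tone - features.skin_tone)
                        + 10.0 * abs(stored.skin_smoothness - features.skin_smoothness))
            if best_distance is None or distance < best_distance:
                best_distance, best_params = distance, params
        return best_params if best_distance is not None and best_distance <= tolerance else None


# Usage: remember one person's settings, then look them up during a group shot.
table = BeautyParamTable()
table.store(FacialFeatureInfo(skin_tone=62.0, skin_smoothness=0.70),
            BeautyParams(whitening=0.3, smoothing=0.5))
params = table.lookup(FacialFeatureInfo(skin_tone=61.0, skin_smoothness=0.68))  # close match
```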
In a specific implementation, if the correspondence table does not contain the facial feature information and corresponding beautification parameters of some target face, that face is simply left unprocessed during this beautified shot.
Further, step S102 may also include: obtaining the beautification parameters currently set by the user, where the beautification parameters correspond to the facial feature information of the target face.
The terminal may obtain the beautification parameters currently set by the user, the parameters corresponding to the facial feature information of the target face.
During the current beautified shot, if the user does not want to reuse previously configured beautification parameters, the user can set, in real time, the beautification parameters of each target face requiring beautification in the current preview image. Specifically, the user can pick the target faces one by one from the multiple faces in the current preview image and configure the beautification parameters of each in turn; only one target face is picked at a time, and its parameters are configured before moving on. The terminal obtains the parameters the user sets for each face in turn, each set of parameters corresponding to the facial feature information of that target face. The parameters of different target faces may be the same or different, depending entirely on the user's preferences and needs, which is not limited here (a minimal sketch of this flow follows).
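Purely as an illustration of the "configure each target face in turn" flow, the sketch below reuses `BeautyParams` from the earlier sketch; `prompt_user_for_params` is a hypothetical stand-in for the terminal's settings UI.

```python
def collect_user_params(target_ids, prompt_user_for_params):
    """The user picks one target face at a time and configures its beautification parameters."""
    per_face = {}
    for fid in target_ids:
        per_face[fid] = prompt_user_for_params(fid)  # values may repeat or differ per face
    return per_face


# Example with a trivial stand-in "UI" that gives every face the same settings:
per_face_params = collect_user_params([0, 1],
                                       lambda fid: BeautyParams(whitening=0.2, smoothing=0.4))
```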
S103: Take the photograph according to the beautification parameters.
The terminal takes the photo according to the beautification parameters corresponding to the facial feature information of each target face.
In a specific implementation, when several people take a beautified photo together, if the user chooses to beautify all of the faces in the preview image, the terminal shoots the photo using each face's own beautification parameters; that is, when taking the photo, the terminal applies different beautification parameters to different faces. If the user chooses to beautify only one of the faces in the preview image, the terminal beautifies the selected face according to the beautification parameters corresponding to its facial feature information, while the other faces are processed normally or even "uglified" to make the selected subject stand out (the "protagonist halo" effect). A per-face rendering loop of this kind is sketched below.
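The following sketch only illustrates the idea that each face region is processed with its own parameters at capture time; the `FaceRegion` type and the toy brightness/smoothness arithmetic are assumptions, not the patent's processing pipeline. It reuses `BeautyParams` from the first sketch.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class FaceRegion:
    """Toy stand-in for the pixel region of one face in the captured frame."""
    brightness: float   # 0-100
    smoothness: float   # 0-1


def apply_params(region: FaceRegion, params) -> FaceRegion:
    """Apply one face's beautification parameters to its own region only.

    A real terminal would run whitening/buffing filters on pixels; here the
    effect is reduced to two toy attributes (step S103).
    """
    return FaceRegion(
        brightness=min(100.0, max(0.0, region.brightness + 10.0 * params.whitening)),
        smoothness=min(1.0, region.smoothness + 0.1 * params.smoothing),
    )


def shoot_photo(face_regions: Dict[int, FaceRegion], per_face_params) -> Dict[int, FaceRegion]:
    """Faces without parameters (unselected, or absent from the table) are left untouched."""
    return {
        fid: apply_params(region, per_face_params[fid]) if fid in per_face_params else region
        for fid, region in face_regions.items()
    }


# Example: face 0 gets its own parameters, face 1 is left as shot.
regions = {0: FaceRegion(brightness=55.0, smoothness=0.6),
           1: FaceRegion(brightness=70.0, smoothness=0.8)}
photo = shoot_photo(regions, {0: BeautyParams(whitening=0.3, smoothing=0.5)})
```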
In the above scheme, the terminal obtains the facial feature information of the target face(s) requiring beautification in the preview image, obtains the corresponding beautification parameters, and takes the photograph according to those parameters. Because the terminal obtains a separate set of beautification parameters for each target face, every person in a group photo can be given the beautification that suits them best, which improves the overall effect of the captured photo.
Refer to Fig. 2, a schematic flowchart of a photographing method provided by another embodiment of the present invention. In this embodiment the execution subject of the photographing method is again a terminal with a beautification photographing function; the terminal may be a camera, or a mobile terminal such as a smartphone or tablet computer. As shown in Fig. 2, the photographing method may include the following steps.
S201: Obtain the facial feature information of the target face(s) requiring beautification in the preview image.
When the user takes a beautified photo with a terminal that has a beautification function, the user can select, in the preview image displayed on the terminal screen, the target face(s) that need beautification, and the terminal obtains the facial feature information of those target faces. One or more target faces may require beautification, depending on the user's actual needs.
Further, step S201 may include: determining, from the multiple faces in the preview image, the target face(s) requiring beautification, and obtaining the facial feature information of the target face(s).
For example, when several people take a beautified photo together, if the user chooses to beautify all of the faces in the preview image, then all of the faces are target faces and the terminal obtains the facial feature information of every face in the preview image. If the user chooses to beautify only one of the faces in the preview image, that face is the target face; the terminal determines it from the multiple faces in the preview image and obtains its facial feature information.
Different faces have different facial feature information. Facial feature information includes, but is not limited to, skin information, facial landmark information and facial contour information. Skin information includes, but is not limited to, a skin tone value, skin smoothness and so on. In a specific implementation, the facial image can first be captured by the terminal's camera, and the facial feature information can then be obtained by face recognition.
S202: Obtain the beautification skin tone value currently set by the user, and determine the beautification parameters corresponding to the facial feature information of the target face according to that beautification skin tone value and the skin tone value of the target face.
The beautification skin tone value is the target skin tone set by the user for the current beautified shot. When several people take a beautified photo together, the user can set a specific beautification skin tone value for this shot; the terminal obtains that value and, from it and the skin tone value of each target face, determines the beautification parameters corresponding to each target face's facial feature information. Beautification parameters include, but are not limited to, a whitening parameter, a skin-smoothing parameter, a face-slimming parameter, a blemish-removal parameter and so on.
The terminal compares the skin tone value of each target face with the beautification skin tone value. When a target face's skin tone value is lower than the beautification skin tone value, the terminal sets the whitening parameter for that face to a positive value, and the lower the face's skin tone value is relative to the beautification skin tone value, the larger the positive whitening value. When a target face's skin tone value is higher than the beautification skin tone value, the terminal sets the whitening parameter to a negative value, and the higher the face's skin tone value is relative to the beautification skin tone value, the larger the magnitude of the negative value. The remaining beautification parameters of each target face, other than the whitening parameter, can be obtained from the pre-stored correspondence table of facial feature information and beautification parameters, which is not limited here. The sketch below illustrates this comparison.
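As an illustration, the sign rule above can be written as a small function; it reuses `BeautyParams` from the earlier sketch. The 0-100 skin tone scale and the proportional gain are assumptions made for this sketch; the description only fixes the sign of the whitening parameter and that its magnitude grows with the gap.

```python
def whitening_from_skin_tone(face_skin_tone: float, beauty_skin_tone: float,
                             gain: float = 0.02) -> float:
    """Positive when the face is darker than the user's target tone, negative when lighter."""
    return gain * (beauty_skin_tone - face_skin_tone)


def params_for_face(face_skin_tone: float, beauty_skin_tone: float, stored) -> "BeautyParams":
    """Override only the whitening value; keep the other pre-stored parameters."""
    return BeautyParams(
        whitening=whitening_from_skin_tone(face_skin_tone, beauty_skin_tone),
        smoothing=stored.smoothing,
        face_slimming=stored.face_slimming,
        blemish_removal=stored.blemish_removal,
    )


# A darker face gets a positive whitening value, a lighter face a negative one.
print(whitening_from_skin_tone(55.0, 70.0))   # approximately 0.3
print(whitening_from_skin_tone(80.0, 70.0))   # approximately -0.2
```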
S203: Take the photograph according to the beautification parameters.
The terminal takes the photo according to the beautification parameters corresponding to the skin tone value of each target face requiring beautification in the preview image. That is, based on the beautification skin tone value currently set by the user and the skin tone value of each target face, the terminal beautifies all of the target faces in the preview image toward that single beautification skin tone value.
In the above scheme, the terminal obtains the facial feature information of the target face(s) requiring beautification in the preview image, obtains the corresponding beautification parameters, and takes the photograph according to those parameters. Because the terminal obtains a separate set of beautification parameters for each target face, every person in a group photo can be given the beautification that suits them best, which improves the overall effect of the captured photo.
Moreover, because the terminal beautifies every target face toward the single beautification skin tone value set by the user, using each face's own skin tone value, the skin tones of the people in the captured photo converge, which further improves the overall effect of the photo.
Refer to Fig. 3, a schematic block diagram of a terminal provided by an embodiment of the present invention. The terminal 300 may be a mobile terminal such as a smartphone or a tablet computer, but is not limited thereto and may be another kind of terminal, which is not limited here. The modules included in the terminal 300 of this embodiment are used to perform the steps of the embodiment corresponding to Fig. 1; for details, refer to Fig. 1 and the related description of that embodiment, which is not repeated here. The terminal 300 of this embodiment includes a first acquiring unit 301, a second acquiring unit 302 and a shooting unit 303, roughly as sketched below.
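Purely as an illustration of how the three units of Fig. 3 relate to one another, the composition below wires together the earlier sketches (`select_target_faces`, `BeautyParamTable`, `shoot_photo`). The class and method names and the call sequence are assumptions, not the patent's API.

```python
class Terminal300:
    def __init__(self, param_table: "BeautyParamTable") -> None:
        self.param_table = param_table   # pre-stored correspondence table

    # First acquiring unit 301: target faces and their facial feature information.
    def acquire_target_features(self, faces, selected_ids):
        return select_target_faces(faces, selected_ids)

    # Second acquiring unit 302: beautification parameters for each target face.
    def acquire_beauty_params(self, targets):
        params = {}
        for fid, features in targets.items():
            found = self.param_table.lookup(features)
            if found is not None:        # faces with no stored entry stay unprocessed
                params[fid] = found
        return params

    # Shooting unit 303: take the photo with per-face parameters.
    def shoot(self, face_regions, per_face_params):
        return shoot_photo(face_regions, per_face_params)
```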
The first acquiring unit 301 is configured to obtain the facial feature information of the target face(s) requiring beautification in the preview image.
When the user takes a beautified photo with a terminal that has a beautification function, the user can select the target face(s) requiring beautification in the preview image displayed on the terminal screen, and the first acquiring unit 301 obtains the facial feature information of those target faces. One or more target faces may require beautification, depending on the user's actual needs.
Further, the first acquiring unit 301 is specifically configured to determine, from the multiple faces in the preview image, the target face(s) requiring beautification, and to obtain the facial feature information of the target face(s).
For example, when several people take a beautified photo together, if the user chooses to beautify all of the faces in the preview image, then all of the faces are target faces and the first acquiring unit 301 obtains the facial feature information of every face in the preview image. If the user chooses to beautify only one of the faces in the preview image, that face is the target face; the first acquiring unit 301 determines it from the multiple faces in the preview image and obtains its facial feature information.
Different faces have different facial feature information. Facial feature information includes, but is not limited to, skin information, facial landmark information and facial contour information. Skin information includes, but is not limited to, a skin tone value, skin smoothness and so on. In a specific implementation, the facial image can first be captured by the terminal's camera, and the facial feature information can then be obtained by face recognition.
The second acquiring unit 302 is configured to obtain the beautification parameters corresponding to the facial feature information of the target face(s).
The second acquiring unit 302 obtains the beautification parameters corresponding to the facial feature information of each target face; different facial feature information corresponds to different beautification parameters. Beautification parameters include, but are not limited to, a whitening parameter, a skin-smoothing parameter, a face-slimming parameter, a blemish-removal parameter and so on.
Further, the second acquiring unit 302 may specifically obtain the beautification parameters corresponding to the facial feature information of the target face from a pre-stored correspondence table of facial feature information and beautification parameters.
The correspondence table is stored in advance in a storage unit of the terminal, and different facial feature information corresponds to different beautification parameters.
Specifically, when a user takes a beautified photo alone, that user can configure their own beautification parameters. The terminal obtains the facial feature information of the single person in the preview image together with the beautification parameters set by the user, establishes a mapping between them, stores the facial feature information with the corresponding beautification parameters, and thereby builds the correspondence table. Later, when several people take a beautified photo together, the second acquiring unit 302 obtains the beautification parameters corresponding to each target face's facial feature information directly from this pre-stored correspondence table.
Further, the second acquiring unit 302 may also obtain the beautification parameters currently set by the user, the parameters corresponding to the facial feature information of the target face.
During the current beautified shot, if the user does not want to reuse previously configured beautification parameters, the user can set, in real time, the beautification parameters of each target face requiring beautification in the current preview image. Specifically, the user can pick the target faces one by one from the multiple faces in the current preview image and configure the beautification parameters of each in turn; only one target face is picked at a time, and its parameters are configured before moving on. The second acquiring unit 302 obtains the parameters the user sets for each face in turn, each set of parameters corresponding to the facial feature information of that target face. The parameters of different target faces may be the same or different, depending entirely on the user's preferences and needs, which is not limited here.
The shooting unit 303 is configured to take the photograph according to the beautification parameters.
The shooting unit 303 takes the photo according to the beautification parameters corresponding to the facial feature information of each target face.
In a specific implementation, when several people take a beautified photo together, if the user chooses to beautify all of the faces in the preview image, the shooting unit 303 shoots the photo using each face's own beautification parameters; that is, when taking the photo, the shooting unit 303 applies different beautification parameters to different faces. If the user chooses to beautify only one of the faces in the preview image, the shooting unit 303 beautifies the selected face according to the beautification parameters corresponding to its facial feature information, while the other faces are processed normally or even "uglified" to make the selected subject stand out (the "protagonist halo" effect).
In the above scheme, the terminal obtains the facial feature information of the target face(s) requiring beautification in the preview image, obtains the corresponding beautification parameters, and takes the photograph according to those parameters. Because the terminal obtains a separate set of beautification parameters for each target face, every person in a group photo can be given the beautification that suits them best, which improves the overall effect of the captured photo.
Continuing with Fig. 3, in another embodiment the modules included in the terminal 300 are used to perform the steps of the embodiment corresponding to Fig. 2; for details, refer to Fig. 2 and the related description of that embodiment, which is not repeated here. Specifically:
The first acquiring unit 301 is configured to obtain the facial feature information of the target face(s) requiring beautification in the preview image.
When the user takes a beautified photo with a terminal that has a beautification function, the user can select the target face(s) requiring beautification in the preview image displayed on the terminal screen, and the first acquiring unit 301 obtains the facial feature information of those target faces. One or more target faces may require beautification, depending on the user's actual needs.
Further, the first acquiring unit 301 is specifically configured to determine, from the multiple faces in the preview image, the target face(s) requiring beautification, and to obtain the facial feature information of the target face(s).
For example, when several people take a beautified photo together, if the user chooses to beautify all of the faces in the preview image, then all of the faces are target faces and the first acquiring unit 301 obtains the facial feature information of every face in the preview image. If the user chooses to beautify only one of the faces in the preview image, that face is the target face; the first acquiring unit 301 determines it from the multiple faces in the preview image and obtains its facial feature information.
Different faces have different facial feature information. Facial feature information includes, but is not limited to, skin information, facial landmark information and facial contour information. Skin information includes, but is not limited to, a skin tone value, skin smoothness and so on. In a specific implementation, the facial image can first be captured by the terminal's camera, and the facial feature information can then be obtained by face recognition.
The second acquiring unit 302 is configured to obtain the beautification skin tone value currently set by the user, and to determine the beautification parameters corresponding to the facial feature information of the target face according to that beautification skin tone value and the skin tone value of the target face.
The beautification skin tone value is the target skin tone set by the user for the current beautified shot. When several people take a beautified photo together, the user can set a specific beautification skin tone value for this shot; the second acquiring unit 302 obtains that value and, from it and the skin tone value of each target face, determines the beautification parameters corresponding to each target face's facial feature information. Beautification parameters include, but are not limited to, a whitening parameter, a skin-smoothing parameter, a face-slimming parameter, a blemish-removal parameter and so on.
The second acquiring unit 302 compares the skin tone value of each target face with the beautification skin tone value. When a target face's skin tone value is lower than the beautification skin tone value, the second acquiring unit 302 sets the whitening parameter for that face to a positive value, and the lower the face's skin tone value is relative to the beautification skin tone value, the larger the positive whitening value. When a target face's skin tone value is higher than the beautification skin tone value, the second acquiring unit 302 sets the whitening parameter to a negative value, and the higher the face's skin tone value is relative to the beautification skin tone value, the larger the magnitude of the negative value. The remaining beautification parameters of each target face, other than the whitening parameter, can be obtained by the second acquiring unit 302 from the pre-stored correspondence table of facial feature information and beautification parameters, which is not limited here.
The shooting unit 303 is configured to take the photograph according to the beautification parameters.
The shooting unit 303 takes the photo according to the beautification parameters corresponding to the skin tone value of each target face requiring beautification in the preview image. That is, based on the beautification skin tone value set by the user and the skin tone value of each target face, the shooting unit 303 beautifies all of the target faces in the preview image toward that single beautification skin tone value.
In the above scheme, the terminal obtains the facial feature information of the target face(s) requiring beautification in the preview image, obtains the beautification parameters corresponding to each face's facial feature information, and takes the photograph according to those parameters. Because the terminal obtains a separate set of beautification parameters for each target face, every person in a group photo can be given the beautification that suits them best, which improves the overall effect of the captured photo.
Moreover, because the terminal beautifies every target face toward the single beautification skin tone value set by the user, using each face's own skin tone value, the skin tones of the people in the captured photo converge, which further improves the overall effect of the photo.
Refer to Fig. 4, a schematic block diagram of a terminal provided by another embodiment of the present invention. The terminal 400 in this embodiment may include one or more processors 401, one or more input devices 402, one or more output devices 403 and a memory 404. The processor 401, the input device 402, the output device 403 and the memory 404 communicate with one another over a communication bus 405.
The memory 404 is used to store program instructions.
The processor 401 performs the following operations according to the program instructions stored in the memory 404.
The processor 401 is configured to obtain the facial feature information of the target face(s) requiring beautification in the preview image.
The processor 401 is further configured to obtain the beautification parameters corresponding to the facial feature information of the target face.
The processor 401 is further configured to take the photograph according to the beautification parameters.
Further, the processor 401 is also configured to determine, from the multiple faces in the preview image, the target face(s) requiring beautification, and to obtain the facial feature information of the target face(s).
Further, the processor 401 is specifically configured to obtain the beautification parameters corresponding to the facial feature information of the target face from a pre-stored correspondence table of facial feature information and beautification parameters.
Further, the processor 401 is specifically configured to obtain the beautification parameters currently set by the user, the parameters corresponding to the facial feature information of the target face.
Further, the processor 401 is also configured to obtain the beautification skin tone value currently set by the user and, according to that value and the skin tone value of the target face, to determine the beautification parameters corresponding to the facial feature information of the target face.
In the above scheme, the terminal obtains the facial feature information of the target face(s) requiring beautification in the preview image, obtains the corresponding beautification parameters, and takes the photograph according to those parameters. Because the terminal obtains a separate set of beautification parameters for each target face, every person in a group photo can be given the beautification that suits them best, which improves the overall effect of the captured photo.
It should be understood that, in the embodiments of the present invention, the processor 401 may be a central processing unit (CPU); it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 402 may include a touch pad, a fingerprint sensor (for collecting the user's fingerprint information and fingerprint orientation information), a camera, a microphone and so on; the output device 403 may include a display (such as an LCD), a loudspeaker and so on.
The memory 404 may include a read-only memory and a random access memory, and provides instructions and data to the processor 401. A portion of the memory 404 may also include a non-volatile random access memory. For example, the memory 404 may also store information about the device type.
In a specific implementation, the processor 401, the input device 402 and the output device 403 described in the embodiments of the present invention can execute the implementations described in the first and second embodiments of the photographing method provided by the embodiments of the invention, and can also execute the implementation of the terminal described in the embodiments of the invention, which is not repeated here.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the terminal and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the device embodiments described above are merely schematic; the division of the units is only a logical functional division, and there may be other divisions in actual implementation. For instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electrical, mechanical or other form of connection.
The steps in the methods of the embodiments of the present invention can be reordered, combined and deleted according to actual needs.
The units in the terminal of the embodiments of the present invention can be combined, divided and deleted according to actual needs.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any modification or replacement that can readily be conceived by a person familiar with the technical field, within the technical scope disclosed by the present invention, shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A photographing method, characterized in that the method comprises:
obtaining facial feature information of a target face requiring beautification in a preview image;
obtaining beautification parameters corresponding to the facial feature information of the target face;
taking a photograph according to the beautification parameters.
2. The method according to claim 1, characterized in that obtaining the facial feature information of the target face requiring beautification in the preview image comprises:
determining, from multiple faces in the preview image, the target face requiring beautification, and obtaining the facial feature information of the target face.
3. The method according to claim 1, characterized in that obtaining the beautification parameters corresponding to the facial feature information of the target face comprises:
obtaining the beautification parameters corresponding to the facial feature information of the target face from a pre-stored correspondence table of facial feature information and beautification parameters.
4. The method according to claim 1, characterized in that obtaining the beautification parameters corresponding to the facial feature information of the target face comprises:
obtaining beautification parameters currently set by a user, wherein the beautification parameters correspond to the facial feature information of the target face.
5. The method according to claim 1, characterized in that the facial feature information comprises a skin tone value, and obtaining the beautification parameters corresponding to the facial feature information of the target face comprises:
obtaining a beautification skin tone value currently set by a user, and determining the beautification parameters corresponding to the facial feature information of the target face according to the beautification skin tone value and the skin tone value of the target face.
6. A terminal, characterized in that the terminal comprises:
a first acquiring unit, configured to obtain facial feature information of a target face requiring beautification in a preview image;
a second acquiring unit, configured to obtain beautification parameters corresponding to the facial feature information of the target face;
a shooting unit, configured to take a photograph according to the beautification parameters.
7. The terminal according to claim 6, characterized in that the first acquiring unit is specifically configured to determine, from multiple faces in the preview image, the target face requiring beautification, and to obtain the facial feature information of the target face.
8. The terminal according to claim 6, characterized in that the second acquiring unit is specifically configured to obtain the beautification parameters corresponding to the facial feature information of the target face from a pre-stored correspondence table of facial feature information and beautification parameters.
9. The terminal according to claim 6, characterized in that the second acquiring unit is specifically configured to obtain beautification parameters currently set by a user, wherein the beautification parameters correspond to the facial feature information of the target face.
10. The terminal according to claim 6, characterized in that the facial feature information comprises a skin tone value, and the second acquiring unit is specifically configured to obtain a beautification skin tone value currently set by a user and to determine, according to the beautification skin tone value and the skin tone value of the target face, the beautification parameters corresponding to the facial feature information of the target face.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610562358.2A (CN106210521A) | 2016-07-15 | 2016-07-15 | Photographing method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106210521A (en) | 2016-12-07 |
Family
ID=57476068
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610562358.2A (CN106210521A, withdrawn) | Photographing method and terminal | 2016-07-15 | 2016-07-15 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106210521A (en) |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106791394A (en) * | 2016-12-20 | 2017-05-31 | 北京小米移动软件有限公司 | Image processing method and device |
CN107231470A (en) * | 2017-05-15 | 2017-10-03 | 努比亚技术有限公司 | Image processing method, mobile terminal and computer-readable recording medium |
CN107292833A (en) * | 2017-05-22 | 2017-10-24 | 奇酷互联网络科技(深圳)有限公司 | Image processing method, device and mobile terminal |
CN107302662A (en) * | 2017-07-06 | 2017-10-27 | 维沃移动通信有限公司 | Photographing method and device, and mobile terminal |
CN107369142A (en) * | 2017-06-29 | 2017-11-21 | 北京小米移动软件有限公司 | Image processing method and device |
CN107369126A (en) * | 2017-06-07 | 2017-11-21 | 维沃移动通信有限公司 | Face image processing method and device |
CN107566728A (en) * | 2017-09-25 | 2018-01-09 | 维沃移动通信有限公司 | Shooting method, mobile terminal and computer-readable storage medium |
CN107564073A (en) * | 2017-09-14 | 2018-01-09 | 广州市百果园信息技术有限公司 | Skin color identification method and device, and storage medium |
CN107592458A (en) * | 2017-09-18 | 2018-01-16 | 维沃移动通信有限公司 | Shooting method and mobile terminal |
CN107766831A (en) * | 2017-10-31 | 2018-03-06 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN107844764A (en) * | 2017-10-31 | 2018-03-27 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium |
CN107886469A (en) * | 2017-09-26 | 2018-04-06 | 北京潘达互娱科技有限公司 | Image beautification method, device, electronic device and storage medium |
CN107886484A (en) * | 2017-11-30 | 2018-04-06 | 广东欧珀移动通信有限公司 | Beautification method and apparatus, computer-readable storage medium and electronic device |
CN107911609A (en) * | 2017-11-30 | 2018-04-13 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN107948506A (en) * | 2017-11-22 | 2018-04-20 | 珠海格力电器股份有限公司 | Image processing method and device and electronic equipment |
CN107948534A (en) * | 2018-01-03 | 2018-04-20 | 上海传英信息技术有限公司 | Photographing method and device based on differences in human skin color, and mobile terminal |
CN107993209A (en) * | 2017-11-30 | 2018-05-04 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN108012081A (en) * | 2017-12-08 | 2018-05-08 | 北京百度网讯科技有限公司 | Intelligent beautification method, apparatus, terminal and computer-readable storage medium |
CN108076288A (en) * | 2017-12-14 | 2018-05-25 | 光锐恒宇(北京)科技有限公司 | Image processing method, device and computer readable storage medium |
CN108257097A (en) * | 2017-12-29 | 2018-07-06 | 努比亚技术有限公司 | Beautification effect adjustment method, terminal and computer-readable storage medium |
CN108366194A (en) * | 2018-01-15 | 2018-08-03 | 维沃移动通信有限公司 | Photographing method and mobile terminal |
CN108447035A (en) * | 2018-03-21 | 2018-08-24 | 广东欧珀移动通信有限公司 | Image optimization method, electronic device and computer readable storage medium |
CN109314750A (en) * | 2017-05-27 | 2019-02-05 | 深圳配天智能技术研究院有限公司 | Method, system and storage device for supplementing light on a target to be detected |
WO2019061275A1 (en) * | 2017-09-29 | 2019-04-04 | 深圳传音通讯有限公司 | Skin color processing-based photographing method and photographing apparatus |
WO2019071550A1 (en) * | 2017-10-13 | 2019-04-18 | 深圳传音通讯有限公司 | Image processing method, mobile terminal, and computer-readable storage medium |
CN109919891A (en) * | 2019-03-14 | 2019-06-21 | Oppo广东移动通信有限公司 | Imaging method, device, terminal and storage medium |
CN112399078A (en) * | 2020-10-30 | 2021-02-23 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN112738396A (en) * | 2020-12-29 | 2021-04-30 | 维沃移动通信(杭州)有限公司 | Image processing method and device and electronic equipment |
CN113077397A (en) * | 2021-03-29 | 2021-07-06 | Oppo广东移动通信有限公司 | Image beautifying processing method and device, storage medium and electronic equipment |
CN114943003A (en) * | 2022-06-22 | 2022-08-26 | 上海传英信息技术有限公司 | Image processing method, intelligent terminal and storage medium |
- 2016-07-15: Application CN201610562358.2A filed in China; published as CN106210521A; current status: Withdrawn
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103413270A (en) * | 2013-08-15 | 2013-11-27 | 北京小米科技有限责任公司 | Method and device for image processing and terminal device |
MX2015007253A (en) * | 2013-08-15 | 2015-08-12 | Xiaomi Inc | Image processing method and apparatus, and terminal device. |
CN103632165A (en) * | 2013-11-28 | 2014-03-12 | 小米科技有限责任公司 | Picture processing method, device and terminal equipment |
CN104503749A (en) * | 2014-12-12 | 2015-04-08 | 广东欧珀移动通信有限公司 | Photo processing method and electronic equipment |
CN104992155A (en) * | 2015-07-02 | 2015-10-21 | 广东欧珀移动通信有限公司 | Method and apparatus for acquiring face positions |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106791394A (en) * | 2016-12-20 | 2017-05-31 | 北京小米移动软件有限公司 | Image processing method and device |
CN107231470A (en) * | 2017-05-15 | 2017-10-03 | 努比亚技术有限公司 | Image processing method, mobile terminal and computer-readable recording medium |
CN107292833A (en) * | 2017-05-22 | 2017-10-24 | 奇酷互联网络科技(深圳)有限公司 | Image processing method, device and mobile terminal |
CN107292833B (en) * | 2017-05-22 | 2020-06-23 | 奇酷互联网络科技(深圳)有限公司 | Image processing method and device and mobile terminal |
CN109314750A (en) * | 2017-05-27 | 2019-02-05 | 深圳配天智能技术研究院有限公司 | Method, system and storage device for illuminating a target to be detected |
CN107369126A (en) * | 2017-06-07 | 2017-11-21 | 维沃移动通信有限公司 | A kind of face image processing method and device |
CN107369142A (en) * | 2017-06-29 | 2017-11-21 | 北京小米移动软件有限公司 | Image processing method and device |
CN107302662A (en) * | 2017-07-06 | 2017-10-27 | 维沃移动通信有限公司 | A kind of photographing method, device and mobile terminal |
CN107564073A (en) * | 2017-09-14 | 2018-01-09 | 广州市百果园信息技术有限公司 | Skin color identification method and device, storage medium |
US11348365B2 (en) | 2017-09-14 | 2022-05-31 | Bigo Technology Pte. Ltd. | Skin color identification method, skin color identification apparatus and storage medium |
CN107592458A (en) * | 2017-09-18 | 2018-01-16 | 维沃移动通信有限公司 | A kind of image pickup method and mobile terminal |
CN107592458B (en) * | 2017-09-18 | 2020-02-14 | 维沃移动通信有限公司 | Shooting method and mobile terminal |
CN107566728A (en) * | 2017-09-25 | 2018-01-09 | 维沃移动通信有限公司 | A kind of image pickup method, mobile terminal and computer-readable recording medium |
CN107886469A (en) * | 2017-09-26 | 2018-04-06 | 北京潘达互娱科技有限公司 | A kind of image beautification method, device, electronic equipment and storage medium |
WO2019061275A1 (en) * | 2017-09-29 | 2019-04-04 | 深圳传音通讯有限公司 | Skin color processing-based photographing method and photographing apparatus |
WO2019071550A1 (en) * | 2017-10-13 | 2019-04-18 | 深圳传音通讯有限公司 | Image processing method, mobile terminal, and computer-readable storage medium |
CN107844764A (en) * | 2017-10-31 | 2018-03-27 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium |
CN107766831B (en) * | 2017-10-31 | 2020-06-30 | Oppo广东移动通信有限公司 | Image processing method, image processing device, mobile terminal and computer-readable storage medium |
CN107844764B (en) * | 2017-10-31 | 2020-05-12 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN107766831A (en) * | 2017-10-31 | 2018-03-06 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN107948506A (en) * | 2017-11-22 | 2018-04-20 | 珠海格力电器股份有限公司 | Image processing method and device and electronic equipment |
WO2019100766A1 (en) * | 2017-11-22 | 2019-05-31 | 格力电器(武汉)有限公司 | Image processing method and apparatus, electronic device and storage medium |
CN107911609B (en) * | 2017-11-30 | 2020-09-22 | Oppo广东移动通信有限公司 | Image processing method, image processing device, computer-readable storage medium and electronic equipment |
CN107993209A (en) * | 2017-11-30 | 2018-05-04 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN107886484A (en) * | 2017-11-30 | 2018-04-06 | 广东欧珀移动通信有限公司 | Face beautification method, apparatus, computer-readable recording medium and electronic equipment |
CN107911609A (en) * | 2017-11-30 | 2018-04-13 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
CN108012081A (en) * | 2017-12-08 | 2018-05-08 | 北京百度网讯科技有限公司 | Intelligent face beautification method, apparatus, terminal and computer-readable recording medium |
CN108012081B (en) * | 2017-12-08 | 2020-02-04 | 北京百度网讯科技有限公司 | Intelligent beautifying method, device, terminal and computer readable storage medium |
CN108076288A (en) * | 2017-12-14 | 2018-05-25 | 光锐恒宇(北京)科技有限公司 | Image processing method, device and computer readable storage medium |
CN108257097A (en) * | 2017-12-29 | 2018-07-06 | 努比亚技术有限公司 | Face beautification effect adjustment method, terminal and computer readable storage medium |
CN107948534A (en) * | 2018-01-03 | 2018-04-20 | 上海传英信息技术有限公司 | A kind of photographic method based on human body complexion difference, device and mobile terminal |
CN108366194B (en) * | 2018-01-15 | 2021-03-05 | 维沃移动通信有限公司 | Photographing method and mobile terminal |
CN108366194A (en) * | 2018-01-15 | 2018-08-03 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN108447035A (en) * | 2018-03-21 | 2018-08-24 | 广东欧珀移动通信有限公司 | Image optimization method, electronic device and computer readable storage medium |
CN109919891A (en) * | 2019-03-14 | 2019-06-21 | Oppo广东移动通信有限公司 | Imaging method, device, terminal and storage medium |
CN112399078A (en) * | 2020-10-30 | 2021-02-23 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN112399078B (en) * | 2020-10-30 | 2022-09-02 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN112738396A (en) * | 2020-12-29 | 2021-04-30 | 维沃移动通信(杭州)有限公司 | Image processing method and device and electronic equipment |
CN113077397A (en) * | 2021-03-29 | 2021-07-06 | Oppo广东移动通信有限公司 | Image beautifying processing method and device, storage medium and electronic equipment |
CN113077397B (en) * | 2021-03-29 | 2024-05-17 | Oppo广东移动通信有限公司 | Image beautifying processing method and device, storage medium and electronic equipment |
CN114943003A (en) * | 2022-06-22 | 2022-08-26 | 上海传英信息技术有限公司 | Image processing method, intelligent terminal and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106210521A (en) | A kind of photographic method and terminal | |
US20220262022A1 (en) | Displaying and editing images with depth information | |
CN107995415A (en) | A kind of image processing method, terminal and computer-readable medium | |
CN104956301B (en) | Display device and method of controlling the display device | |
CN106293584A (en) | A kind of double-screen display method and terminal | |
US11363071B2 (en) | User interfaces for managing a local network | |
CN108121485A (en) | A kind of icon sorting method, terminal and computer readable storage medium | |
CN106201178A (en) | A kind of screen display direction adjustment control method and terminal | |
US20150074576A1 (en) | Information processing methods and electronic devices | |
CN106294549A (en) | A kind of image processing method and terminal | |
CN107181858A (en) | A kind of method and terminal for showing notification message | |
CN106814932A (en) | A kind of desktop wallpaper display methods and terminal | |
CN107870706A (en) | A kind of icon management method, terminal and computer-readable medium | |
CN106385537A (en) | Photographing method and terminal | |
CN107479806A (en) | A kind of interface switching method and terminal | |
CN106453904A (en) | Information reminding method and terminal | |
CN106250111A (en) | A kind of wallpaper acquisition methods and terminal | |
CN106873985A (en) | A kind of method and terminal for setting wallpaper | |
CN106648287A (en) | Method and terminal for replacing application icon | |
CN106406938A (en) | Application download method and terminal | |
CN109359582A (en) | Information search method, information search device and mobile terminal | |
CN108228024A (en) | A kind of method of application control, terminal and computer-readable medium | |
CN108111747A (en) | A kind of image processing method, terminal device and computer-readable medium | |
CN106412289A (en) | Caller display method and terminal | |
CN106227752A (en) | A kind of photograph sharing method and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20161207 |