CN107547797A - Shooting method, terminal and computer-readable storage medium - Google Patents

Shooting method, terminal and computer-readable storage medium

Info

Publication number
CN107547797A
CN107547797A (application CN201710624684.6A)
Authority
CN
China
Prior art keywords
action
preview interface
user
shooting preview
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710624684.6A
Other languages
Chinese (zh)
Inventor
Wang Meng (王猛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201710624684.6A
Publication of CN107547797A

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a shooting method, a terminal and a computer-readable storage medium. The shooting method comprises the following steps: during shooting, detecting an action of a user by means of at least one ultrasonic sensor; determining a region corresponding to the action in a shooting preview interface as a target area; and performing beautification processing on the target area in the shooting preview interface and displaying the beautified image. With this scheme, the image is beautified automatically in the shooting preview interface according to the user's action, which spares the user the trouble of manually beautifying the image afterwards, saves the user's time, and improves the user experience.

Description

Shooting method, terminal and computer-readable storage medium
Technical field
The present invention relates to the field of terminal technology, and more specifically to a shooting method, a terminal and a computer-readable storage medium.
Background technology
With the development of photographing technology, more and more terminals such as mobile phones and tablet computers are equipped with cameras, so users can record pictures at any time and preserve memorable moments.
At present, users' requirements for the shooting function keep rising. In the prior art, to beautify a captured image, the user can only manually perform post-processing on the stored image after the photo has been taken, which consumes the user's time and results in a poor user experience.
Summary of the invention
The main object of the present invention is to provide a shooting method, a terminal and a computer-readable storage medium, aiming to solve the problem in the prior art that, after a photo is taken, the user has to manually beautify the stored image afterwards, which consumes the user's time.
In order to solve the above technical problem, the present invention provides a shooting method comprising the following steps:
during shooting, detecting an action of a user by means of at least one ultrasonic sensor;
determining a region corresponding to the action in a shooting preview interface as a target area;
performing beautification processing on the target area in the shooting preview interface, and displaying the beautified image.
Wherein, when the ultrasonic sensor is in a high-power operating state,
detecting the action of the user by means of the at least one ultrasonic sensor comprises: detecting a facial action of the user by means of the at least one ultrasonic sensor, the facial action including at least one of the following: an action of the eyes, an action of the eyebrows, an action of the lips, an action of the nose, and an action of the ears;
when the facial action includes an action of the eyes, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's eyes are located in the shooting preview interface as the target area;
when the facial action includes an action of the eyebrows, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's eyebrows are located in the shooting preview interface as the target area;
when the facial action includes an action of the lips, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's lips are located in the shooting preview interface as the target area;
when the facial action includes an action of the nose, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's nose is located in the shooting preview interface as the target area;
when the facial action includes an action of the ears, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's ears are located in the shooting preview interface as the target area.
Wherein, when the facial action of the user is detected by ultrasonic sensors and the number of ultrasonic sensors is greater than 1,
the receivers of the ultrasonic sensors whose distance from the face area in the shooting preview interface is less than a preset threshold are turned on, and the receivers of the ultrasonic sensors whose distance from the face area in the shooting preview interface is greater than or equal to the preset threshold are turned off.
Wherein, when the ultrasonic sensor is in a low-power operating state,
detecting the action of the user by means of the at least one ultrasonic sensor comprises: detecting a limb action of the user by means of the at least one ultrasonic sensor, the limb action including at least one of the following: an action of an arm, an action of a leg;
determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region swept over by the user's limb action in the shooting preview interface as the target area.
Wherein, performing beautification processing on the target area in the shooting preview interface comprises: performing blurring, and/or mosaic, and/or scribble processing on the target area in the shooting preview interface.
Wherein, after performing beautification processing on the target area in the shooting preview interface and displaying the beautified image, the method further comprises:
receiving an operation of the user on a confirm-shooting button;
according to the operation, capturing and saving the beautified image displayed in the shooting preview interface.
Wherein, before detecting the action of the user by means of the at least one ultrasonic sensor during shooting, the method further comprises:
displaying a settings interface, the settings interface including an option for enabling a preset function, the preset function comprising: during shooting, detecting the action of the user by means of the at least one ultrasonic sensor; determining the region corresponding to the action in the shooting preview interface as the target area; performing beautification processing on the target area in the shooting preview interface and displaying the beautified image;
receiving the user's selection of the enabling option;
enabling the preset function according to the selection;
alternatively, the preset function may be enabled as follows: when shooting is triggered, the preset function is enabled at the same time.
Wherein, after performing beautification processing on the target area in the shooting preview interface and displaying the beautified image, the method further comprises:
displaying a settings interface, the settings interface including an option for disabling the preset function;
receiving the user's selection of the disabling option;
disabling the preset function according to the selection;
alternatively, the preset function may be disabled as follows: when shooting is exited, the preset function is disabled at the same time.
Further, the present invention provides a terminal comprising a processor, a memory and a communication bus;
the communication bus is used to implement the connection and communication between the processor and the memory;
the processor is used to execute one or more programs stored in the memory to implement the steps of the foregoing shooting method.
Further, the present invention provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the steps of the foregoing shooting method.
Beneficial effects
The present invention provides a shooting method, a terminal and a computer-readable storage medium. The shooting method comprises the following steps: during shooting, detecting an action of a user by means of at least one ultrasonic sensor; determining a region corresponding to the action in a shooting preview interface as a target area; and performing beautification processing on the target area in the shooting preview interface and displaying the beautified image. With this scheme, the image is beautified automatically in the shooting preview interface according to the user's action, which spares the user the trouble of manually beautifying the image afterwards, saves the user's time and improves the user experience.
Brief description of the drawings
The invention will be further described below in conjunction with the drawings and embodiments. In the drawings:
Fig. 1 is a hardware structure diagram of an optional terminal for implementing the embodiments of the present invention;
Fig. 2 is a flowchart of a shooting method provided by the first embodiment of the present invention;
Fig. 3 is a schematic diagram, used in the embodiments of the present invention, of the placement of ultrasonic sensors on a mobile phone and of what the phone's shooting preview interface displays when the user takes a selfie;
Fig. 4 is a schematic diagram of a target area provided by the first, third and fourth embodiments of the present invention;
Fig. 5 is a schematic diagram of a settings-interface display provided by the first, third and fourth embodiments of the present invention;
Fig. 6 is a flowchart of a shooting method provided by the second embodiment of the present invention;
Fig. 7 is a schematic diagram of a terminal provided by the third embodiment of the present invention.
Detailed description of the embodiments
It should be understood that the specific embodiments described here are only intended to illustrate the present invention, not to limit it.
A terminal implementing the embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "part" or "unit" used for elements are only intended to facilitate the description of the present invention and have no specific meaning in themselves. Therefore, "module" and "part" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as mobile phones, tablet computers, notebook computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart bands and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
Those skilled in the art will appreciate that, apart from elements used particularly for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Referring to Fig. 1, which is a hardware structure diagram of an optional terminal for implementing the embodiments of the present invention, the terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109 and a processor 110, among other components. Those skilled in the art will understand that the terminal structure shown in Fig. 1 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, or combine certain components, or arrange the components differently.
The components of the terminal are described in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used for receiving and sending signals during messaging or a call; specifically, it receives downlink information from a base station and passes it to the processor 110 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution) and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102 the terminal can help the user send and receive e-mails, browse web pages, access streaming media and so on; it provides the user with wireless broadband Internet access.
The audio output unit 103 can, when the terminal 100 is in a call-signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode or the like, convert the audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the terminal 100 (for example, a call-signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in operating modes such as a phone-call mode, a recording mode and a voice recognition mode, and can process such sound into audio data. In the phone-call mode, the processed audio (voice) data can be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 can implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and sending audio signals.
The terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 1061 according to the ambient light, and the proximity sensor can turn off the display panel 1061 and/or its backlight when the terminal 100 is moved close to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally on three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that identify the phone's attitude (such as landscape/portrait switching, related games and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection). The phone may also be configured with other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which will not be described further here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key-signal inputs related to the user settings and function control of the terminal 100. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus or any other suitable object or accessory) and drives the corresponding connection devices according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse and a joystick, which are not specifically limited here.
Further, the touch panel 1071 may cover the display panel 1061. After detecting a touch operation on or near it, the touch panel 1071 transmits it to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are two separate components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal, which is not specifically limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the terminal 100. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The interface unit 108 may be used to receive input (such as data information and power) from an external device and transfer the received input to one or more elements in the terminal 100, or may be used to transmit data between the terminal 100 and an external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area: the program storage area may store the operating system, application programs required by at least one function (such as a sound playback function and an image playback function) and the like, and the data storage area may store data created according to the use of the phone (such as audio data and a phone book) and the like. In addition, the memory 109 may include a high-speed random access memory and may also include a non-volatile memory, for example at least one magnetic disk storage device, flash memory device or other solid-state storage component.
The processor 110 is the control center of the terminal 100. It uses various interfaces and lines to connect all parts of the whole terminal 100, and performs the various functions of the terminal 100 and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the terminal 100 as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the processor 110.
Although not shown in Fig. 1, the terminal 100 may also include a Bluetooth module and the like, which will not be described further here.
Based on the above terminal hardware structure, the present invention is described in detail below through specific embodiments.
First embodiment
This embodiment provides a shooting method, which can be applied to terminals such as mobile phones and tablet computers. Referring to Fig. 2, which is a flowchart of the shooting method provided by this embodiment, the shooting method comprises the following steps:
S201: during shooting, detecting an action of a user by means of at least one ultrasonic sensor;
At least one ultrasonic sensor is provided in the terminal and may be arranged at positions such as the back or the sides of the terminal. The ultrasonic sensor includes an ultrasonic emitter and an ultrasonic receiver.
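For illustration only, and not part of the patent text: assuming an Android-style terminal, the sensor arrangement described above could be modeled roughly as below. All names (UltrasonicSensor, PowerState, SensorSide) are hypothetical and only serve the later sketches.

```kotlin
// Hypothetical model of the ultrasonic sensor array described above; not taken from the patent.
enum class PowerState { HIGH_POWER, LOW_POWER }   // high power: facial actions; low power: limb actions
enum class SensorSide { LEFT, RIGHT, BACK }

data class UltrasonicSensor(
    val id: Char,                  // e.g. 'A'..'H' as in Fig. 3
    val side: SensorSide,          // where the sensor sits on the terminal body
    val indexFromTop: Int,         // 0 = topmost sensor on that side
    var receiverOn: Boolean = true,
    var powerState: PowerState = PowerState.HIGH_POWER
)
```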
S202: determining a region corresponding to the action in a shooting preview interface as a target area;
The user's action can be detected by the ultrasonic sensor, and the position of the action can be detected as well; this position is mapped into the shooting preview interface, so that the position corresponding to the action in the shooting preview interface can be determined.
For example, during shooting, the ultrasonic sensor detects that the user's eyes perform an opening-and-closing action and identifies the position of the eyes; that position is mapped into the shooting preview interface, so that the position of the eyes in the shooting preview interface can be determined.
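One possible reading of S202, sketched for illustration only: the sensor reports a position in its own normalized coordinate frame, which is mapped into preview coordinates and expanded into a rectangular target area. The linear mapping and the names SensorReading and PreviewRect are assumptions, not the patent's algorithm.

```kotlin
// Illustrative only: map a position reported by an ultrasonic sensor into the
// shooting-preview coordinate system and derive a rectangular target area around it.
data class SensorReading(val x: Float, val y: Float)     // normalized 0..1 in sensor space
data class PreviewRect(val left: Int, val top: Int, val right: Int, val bottom: Int)

fun toTargetArea(reading: SensorReading,
                 previewWidth: Int, previewHeight: Int,
                 halfSizePx: Int = 80): PreviewRect {
    // Simplifying assumption: the sensor space maps linearly onto the preview.
    val cx = (reading.x * previewWidth).toInt()
    val cy = (reading.y * previewHeight).toInt()
    return PreviewRect(
        left = (cx - halfSizePx).coerceAtLeast(0),
        top = (cy - halfSizePx).coerceAtLeast(0),
        right = (cx + halfSizePx).coerceAtMost(previewWidth),
        bottom = (cy + halfSizePx).coerceAtMost(previewHeight)
    )
}
```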
S203: performing beautification processing on the target area in the shooting preview interface, and displaying the beautified image.
In one embodiment, when the ultrasonic sensor is in a high-power operating state,
detecting the action of the user by means of the at least one ultrasonic sensor in S201 comprises: detecting a facial action of the user by means of the at least one ultrasonic sensor, the facial action including at least one of the following: an action of the eyes, an action of the eyebrows, an action of the lips, an action of the nose, and an action of the ears.
An action of the eyes is, for example, opening the eyes, closing the eyes or widening the eyes;
an action of the eyebrows is, for example, raising and lowering the eyebrows;
an action of the lips is, for example, opening the mouth, closing the mouth, smiling or laughing;
an action of the nose is, for example, flaring the nostrils;
an action of the ears is, for example, moving the ears by tensing the facial muscles.
When the facial action includes an action of the eyes, determining the region corresponding to the action in the shooting preview interface as the target area in S202 comprises: determining the region where the user's eyes are located in the shooting preview interface as the target area;
when beautification processing is performed on the eyes in the shooting preview interface, it may be beautification such as enlarging or brightening the eyes.
When the facial action includes an action of the eyebrows, determining the region corresponding to the action in the shooting preview interface as the target area in S202 comprises: determining the region where the user's eyebrows are located in the shooting preview interface as the target area;
when beautification processing is performed on the eyebrows in the shooting preview interface, it may be beautification such as modifying or changing the eyebrow shape, thickening the eyebrows or lightening the eyebrows.
When the facial action includes an action of the lips, determining the region corresponding to the action in the shooting preview interface as the target area in S202 comprises: determining the region where the user's lips are located in the shooting preview interface as the target area;
when beautification processing is performed on the lips in the shooting preview interface, it may be beautification such as deepening the lip color or changing the lipstick color;
when the action of the lips shows the teeth, teeth-whitening processing may also be performed.
When the facial action includes an action of the nose, determining the region corresponding to the action in the shooting preview interface as the target area in S202 comprises: determining the region where the user's nose is located in the shooting preview interface as the target area;
when beautification processing is performed on the nose in the shooting preview interface, it may be beautification such as making the nose appear slimmer.
When the facial action includes an action of the ears, determining the region corresponding to the action in the shooting preview interface as the target area in S202 comprises: determining the region where the user's ears are located in the shooting preview interface as the target area;
when beautification processing is performed on the ears in the shooting preview interface and the user is not wearing earrings, beautification such as adding earrings to the ears shown in the shooting preview interface may be performed.
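To make the pairing of detected facial action and beautification concrete, a possible dispatch table is sketched below for illustration; the enum values and the specific beautifications are taken from the examples above, but the structure is an assumption, not the patent's implementation.

```kotlin
// Illustrative dispatch from a detected facial action to the beautification
// operations applied to the corresponding target area in the preview.
enum class FacialAction { EYES, EYEBROWS, LIPS, LIPS_SHOWING_TEETH, NOSE, EARS }

enum class Beautification {
    BRIGHTEN_AND_ENLARGE_EYES, RESHAPE_EYEBROWS, DEEPEN_LIP_COLOR,
    WHITEN_TEETH, SLIM_NOSE, ADD_EARRINGS
}

fun beautificationsFor(action: FacialAction): List<Beautification> = when (action) {
    FacialAction.EYES -> listOf(Beautification.BRIGHTEN_AND_ENLARGE_EYES)
    FacialAction.EYEBROWS -> listOf(Beautification.RESHAPE_EYEBROWS)
    FacialAction.LIPS -> listOf(Beautification.DEEPEN_LIP_COLOR)
    FacialAction.LIPS_SHOWING_TEETH -> listOf(Beautification.DEEPEN_LIP_COLOR, Beautification.WHITEN_TEETH)
    FacialAction.NOSE -> listOf(Beautification.SLIM_NOSE)
    FacialAction.EARS -> listOf(Beautification.ADD_EARRINGS)   // only when no earrings are worn
}
```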
Wherein, when the facial action of the user is detected by ultrasonic sensors and the number of ultrasonic sensors is greater than 1,
the receivers of the ultrasonic sensors whose distance from the face area in the shooting preview interface is less than a preset threshold are turned on, and the receivers of the ultrasonic sensors whose distance from the face area in the shooting preview interface is greater than or equal to the preset threshold are turned off.
For example, referring to Fig. 3, a schematic diagram provided by this embodiment of the placement of ultrasonic sensors on a mobile phone and of what the phone's shooting preview interface displays when the user takes a selfie: four ultrasonic sensors A, B, C and D are arranged from top to bottom on the left side of the phone, and four ultrasonic sensors E, F, G and H are arranged from top to bottom on the right side. When the user takes a selfie with the front camera, the face area lies in the upper part of the shooting preview interface, so only the receivers of the four ultrasonic sensors A, B, E and F in the upper half of the shooting preview interface are turned on, while the receivers of the four ultrasonic sensors C, D, G and H in the lower half are turned off.
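Continuing the hypothetical sensor model sketched earlier, the receiver-selection rule of the preceding paragraph (only receivers closer than a preset threshold to the face area stay on) could look like the following; the distance estimate is assumed to be supplied by whatever face-location logic the terminal uses.

```kotlin
// Illustrative: enable only those receivers whose distance to the face area in
// the preview is below a preset threshold (in the Fig. 3 selfie, A, B, E, F stay on).
fun selectReceivers(sensors: List<UltrasonicSensor>,
                    distanceToFacePx: (UltrasonicSensor) -> Float,
                    thresholdPx: Float) {
    for (sensor in sensors) {
        sensor.receiverOn = distanceToFacePx(sensor) < thresholdPx
    }
}

// Usage under the Fig. 3 scenario (the distances are made-up numbers):
// val sensors = "ABCDEFGH".mapIndexed { i, c ->
//     UltrasonicSensor(c, if (i < 4) SensorSide.LEFT else SensorSide.RIGHT, i % 4) }
// selectReceivers(sensors, { s -> if (s.indexFromTop < 2) 100f else 900f }, thresholdPx = 400f)
```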
In one embodiment, when the ultrasonic sensor is in a low-power operating state,
detecting the action of the user by means of the at least one ultrasonic sensor comprises: detecting a limb action of the user by means of the at least one ultrasonic sensor, the limb action including at least one of the following: an action of an arm, an action of a leg;
an action of an arm is, for example, waving the arm;
an action of a leg is, for example, swinging the leg;
determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region swept over by the user's limb action in the shooting preview interface as the target area.
For example, referring to Fig. 4, a schematic diagram of a target area provided by this embodiment: during shooting, the user in the shooting preview interface can be seen waving an arm from the left side of the screen to the right side; the region that the user's arm has swept over in the shooting preview interface is then determined as the target area.
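Read this way, the "region swept over by the limb action" of Fig. 4 is simply the extent of the positions the arm passes through. A possible accumulation, reusing the illustrative SensorReading and PreviewRect types above, is sketched below; it is one workable interpretation, not the patented computation.

```kotlin
// Illustrative: accumulate the preview-space positions an arm passes through while
// being waved and return their bounding rectangle as the target area (cf. Fig. 4).
fun sweptTargetArea(samples: List<SensorReading>,
                    previewWidth: Int, previewHeight: Int): PreviewRect {
    require(samples.isNotEmpty()) { "need at least one detected limb position" }
    val xs = samples.map { (it.x * previewWidth).toInt() }
    val ys = samples.map { (it.y * previewHeight).toInt() }
    return PreviewRect(
        left = xs.minOrNull()!!.coerceAtLeast(0),
        top = ys.minOrNull()!!.coerceAtLeast(0),
        right = xs.maxOrNull()!!.coerceAtMost(previewWidth),
        bottom = ys.maxOrNull()!!.coerceAtMost(previewHeight)
    )
}
```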
Wherein, performing beautification processing on the target area in the shooting preview interface comprises: performing blurring, and/or mosaic, and/or scribble processing on the target area in the shooting preview interface;
for example, in Fig. 4, the region swept over by the user's arm is blurred, and/or mosaicked, and/or scribbled over.
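As one concrete example of the mosaic processing mentioned above (illustrative only, operating on a raw ARGB pixel array rather than any particular camera API):

```kotlin
// Illustrative mosaic over the target area of a preview frame held as an ARGB
// pixel array (row-major). Each blockSize x blockSize cell inside the target
// rectangle is replaced by its top-left pixel, hiding the detail underneath.
fun mosaic(pixels: IntArray, width: Int, target: PreviewRect, blockSize: Int = 16) {
    var by = target.top
    while (by < target.bottom) {
        var bx = target.left
        while (bx < target.right) {
            val block = pixels[by * width + bx]            // representative pixel of this cell
            for (y in by until minOf(by + blockSize, target.bottom)) {
                for (x in bx until minOf(bx + blockSize, target.right)) {
                    pixels[y * width + x] = block
                }
            }
            bx += blockSize
        }
        by += blockSize
    }
}
```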
Optionally, after performing beautification processing on the target area in the shooting preview interface and displaying the beautified image in S203, the method further comprises:
receiving an operation of the user on a confirm-shooting button;
according to the operation, capturing and saving the beautified image displayed in the shooting preview interface;
for example, while the user is taking a selfie, the ultrasonic sensor detects an action of the user's eyes, so eye-brightening and eye-enlargement processing are performed on the region where the eyes are located in the shooting preview interface, and the shooting preview interface displays the beautified image; when the user presses the confirm-shooting button, the image with eye-brightening and eye-enlargement processing applied is captured and saved.
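The confirm-shooting step can be read as: the frame currently shown in the preview, already beautified, is exactly what gets saved, so no post-capture editing is needed. A schematic handler under that assumption (currentPreviewFrame and saveJpeg are stand-ins, not real terminal APIs):

```kotlin
// Schematic confirm-shoot handler: the beautified frame currently displayed in the
// preview is the one that gets saved. Both callbacks are stand-ins for the terminal's
// camera pipeline and storage path.
class ShootController(
    private val currentPreviewFrame: () -> IntArray,   // beautified ARGB pixels of the preview
    private val saveJpeg: (IntArray) -> Unit
) {
    fun onConfirmShootPressed() {
        val beautified = currentPreviewFrame()          // already beautified in the preview
        saveJpeg(beautified)
    }
}
```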
Optionally, referring to Fig. 5, a schematic diagram of a settings-interface display provided by this embodiment, before detecting the action of the user by means of the at least one ultrasonic sensor during shooting in S201, the method further comprises:
displaying a settings interface, the settings interface including an option for enabling a preset function, the preset function comprising: during shooting, detecting the action of the user by means of the at least one ultrasonic sensor; determining the region corresponding to the action in the shooting preview interface as the target area; performing beautification processing on the target area in the shooting preview interface and displaying the beautified image;
receiving the user's selection of the enabling option;
enabling the preset function according to the selection;
that is, the user can enable the preset function manually according to his or her own needs;
alternatively, the preset function may be enabled as follows: when shooting is triggered, the preset function is enabled at the same time, which spares the user the step of enabling the preset function manually, saves the user's time and improves the user experience.
Optionally, referring to Fig. 5, after performing beautification processing on the target area in the shooting preview interface and displaying the beautified image in S203, the method further comprises:
displaying a settings interface, the settings interface including an option for disabling the preset function;
receiving the user's selection of the disabling option;
disabling the preset function according to the selection;
that is, the user can disable the preset function manually according to his or her own needs;
alternatively, the preset function may be disabled as follows: when shooting is exited, the preset function is disabled at the same time, which spares the user the step of disabling the preset function manually, saves the user's time and improves the user experience.
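The two ways of enabling and disabling the preset function described above (a manual toggle in the settings interface, or tying it to entering and exiting shooting) might be organized as in the following sketch; the class and method names are assumptions.

```kotlin
// Illustrative lifecycle of the preset function: toggled manually from the settings
// interface, or switched together with the shooting function itself.
class PresetFunction {
    var enabled: Boolean = false
        private set

    fun onSettingsToggle(selected: Boolean) { enabled = selected }                  // manual enable/disable
    fun onShootingStarted(autoEnable: Boolean) { if (autoEnable) enabled = true }   // enabled when shooting is triggered
    fun onShootingExited(autoDisable: Boolean) { if (autoDisable) enabled = false } // disabled when shooting is exited
}
```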
Through the implementation of this embodiment, the image is beautified automatically in the shooting preview interface according to the user's action, which spares the user the trouble of manually beautifying the image afterwards, saves the user's time and improves the user experience.
Second embodiment
This embodiment provides a concrete implementation of the shooting method, taking the arrangement of Fig. 3 as an example. Referring to Fig. 6, which is a flowchart of the shooting method provided by this embodiment, the shooting method comprises the following steps:
S601: receiving the user's triggering operation for enabling the shooting function of the terminal, enabling the shooting function, and at the same time enabling the preset function;
the preset function comprises: during shooting, detecting an action of a user by means of at least one ultrasonic sensor; determining a region corresponding to the action in a shooting preview interface as a target area; performing beautification processing on the target area in the shooting preview interface and displaying the beautified image.
S602: detecting, by means of the eight ultrasonic sensors in the terminal, that the user's eyes have performed an opening-and-closing action and that the user's lips have performed a smiling action;
the eight ultrasonic sensors are in a high-power operating state.
S603: performing eye-brightening and enlargement beautification on the eyes in the shooting preview interface, performing lip-color-deepening beautification on the lips, and displaying the beautified image in the shooting preview interface.
S604: receiving an operation of the user on a confirm-shooting button.
S605: according to the operation, capturing and saving the beautified image displayed in the shooting preview interface.
S606: receiving the user's triggering operation for disabling the shooting function of the terminal, disabling the shooting function, and at the same time disabling the preset function.
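For illustration, steps S601 to S606 can be strung together using the sketches above; every API here is a stand-in and the composition is an assumption, not the patent's code.

```kotlin
// Illustrative end-to-end flow of S601-S606, composed from the earlier sketches.
fun selfieSession(preset: PresetFunction,
                  detectFacialActions: () -> Map<FacialAction, PreviewRect>,
                  applyBeautifications: (FacialAction, PreviewRect) -> Unit,
                  shoot: ShootController) {
    preset.onShootingStarted(autoEnable = true)                          // S601
    val detected = detectFacialActions()                                 // S602: e.g. EYES and LIPS
    for ((action, area) in detected) applyBeautifications(action, area)  // S603
    shoot.onConfirmShootPressed()                                        // S604, S605
    preset.onShootingExited(autoDisable = true)                          // S606
}
```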
Third embodiment
This embodiment provides a terminal. Referring to Fig. 7, a schematic diagram of the terminal provided by this embodiment, the terminal comprises a processor 701, a memory 702 and a communication bus 703, wherein:
the communication bus 703 is used to implement the connection and communication between the processor 701 and the memory 702;
the processor 701 is used to execute one or more programs stored in the memory 702 to implement the following steps:
during shooting, detecting an action of a user by means of at least one ultrasonic sensor;
at least one ultrasonic sensor is provided in the terminal and may be arranged at positions such as the back or the sides of the terminal, and the ultrasonic sensor includes an ultrasonic emitter and an ultrasonic receiver;
determining a region corresponding to the action in a shooting preview interface as a target area;
the user's action can be detected by the ultrasonic sensor, and the position of the action can be detected as well; this position is mapped into the shooting preview interface, so that the position corresponding to the action in the shooting preview interface can be determined;
for example, during shooting, the ultrasonic sensor detects that the user's eyes perform an opening-and-closing action and identifies the position of the eyes; that position is mapped into the shooting preview interface, so that the position of the eyes in the shooting preview interface can be determined;
performing beautification processing on the target area in the shooting preview interface, and displaying the beautified image.
In one embodiment, when the ultrasonic sensor is in a high-power operating state,
detecting the action of the user by means of the at least one ultrasonic sensor comprises: detecting a facial action of the user by means of the at least one ultrasonic sensor, the facial action including at least one of the following: an action of the eyes, an action of the eyebrows, an action of the lips, an action of the nose, and an action of the ears.
An action of the eyes is, for example, opening the eyes, closing the eyes or widening the eyes;
an action of the eyebrows is, for example, raising and lowering the eyebrows;
an action of the lips is, for example, opening the mouth, closing the mouth, smiling or laughing;
an action of the nose is, for example, flaring the nostrils;
an action of the ears is, for example, moving the ears by tensing the facial muscles.
When the facial action includes an action of the eyes, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's eyes are located in the shooting preview interface as the target area;
when beautification processing is performed on the eyes in the shooting preview interface, it may be beautification such as enlarging or brightening the eyes.
When the facial action includes an action of the eyebrows, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's eyebrows are located in the shooting preview interface as the target area;
when beautification processing is performed on the eyebrows in the shooting preview interface, it may be beautification such as modifying or changing the eyebrow shape, thickening the eyebrows or lightening the eyebrows.
When the facial action includes an action of the lips, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's lips are located in the shooting preview interface as the target area;
when beautification processing is performed on the lips in the shooting preview interface, it may be beautification such as deepening the lip color or changing the lipstick color;
when the action of the lips shows the teeth, teeth-whitening processing may also be performed.
When the facial action includes an action of the nose, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's nose is located in the shooting preview interface as the target area;
when beautification processing is performed on the nose in the shooting preview interface, it may be beautification such as making the nose appear slimmer.
When the facial action includes an action of the ears, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's ears are located in the shooting preview interface as the target area;
when beautification processing is performed on the ears in the shooting preview interface and the user is not wearing earrings, beautification such as adding earrings to the ears shown in the shooting preview interface may be performed.
Wherein, when the facial action of the user is detected by ultrasonic sensors and the number of ultrasonic sensors is greater than 1,
the receivers of the ultrasonic sensors whose distance from the face area in the shooting preview interface is less than a preset threshold are turned on, and the receivers of the ultrasonic sensors whose distance from the face area in the shooting preview interface is greater than or equal to the preset threshold are turned off.
For example, referring to Fig. 3, a schematic diagram provided by this embodiment of the placement of ultrasonic sensors on a mobile phone and of what the phone's shooting preview interface displays when the user takes a selfie: four ultrasonic sensors A, B, C and D are arranged from top to bottom on the left side of the phone, and four ultrasonic sensors E, F, G and H are arranged from top to bottom on the right side. When the user takes a selfie with the front camera, the face area lies in the upper part of the shooting preview interface, so only the receivers of the four ultrasonic sensors A, B, E and F in the upper half of the shooting preview interface are turned on, while the receivers of the four ultrasonic sensors C, D, G and H in the lower half are turned off.
In one embodiment, when the ultrasonic sensor is in a low-power operating state,
detecting the action of the user by means of the at least one ultrasonic sensor comprises: detecting a limb action of the user by means of the at least one ultrasonic sensor, the limb action including at least one of the following: an action of an arm, an action of a leg;
an action of an arm is, for example, waving the arm;
an action of a leg is, for example, swinging the leg;
determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region swept over by the user's limb action in the shooting preview interface as the target area.
For example, referring to Fig. 4, a schematic diagram of a target area provided by this embodiment: during shooting, the user in the shooting preview interface can be seen waving an arm from the left side of the screen to the right side; the region that the user's arm has swept over in the shooting preview interface is then determined as the target area.
Wherein, performing beautification processing on the target area in the shooting preview interface comprises: performing blurring, and/or mosaic, and/or scribble processing on the target area in the shooting preview interface;
for example, in Fig. 4, the region swept over by the user's arm is blurred, and/or mosaicked, and/or scribbled over.
Optionally, after performing beautification processing on the target area in the shooting preview interface and displaying the beautified image, the processor 701 is further used to execute the one or more programs stored in the memory 702 to implement the following steps:
receiving an operation of the user on a confirm-shooting button;
according to the operation, capturing and saving the beautified image displayed in the shooting preview interface;
for example, while the user is taking a selfie, the ultrasonic sensor detects an action of the user's eyes, so eye-brightening and eye-enlargement processing are performed on the region where the eyes are located in the shooting preview interface, and the shooting preview interface displays the beautified image; when the user presses the confirm-shooting button, the image with eye-brightening and eye-enlargement processing applied is captured and saved.
Optionally, referring to Fig. 5, a schematic diagram of a settings-interface display provided by this embodiment, before detecting the action of the user by means of the at least one ultrasonic sensor during shooting, the processor 701 is further used to execute the one or more programs stored in the memory 702 to implement the following steps:
displaying a settings interface, the settings interface including an option for enabling a preset function, the preset function comprising: during shooting, detecting the action of the user by means of the at least one ultrasonic sensor; determining the region corresponding to the action in the shooting preview interface as the target area; performing beautification processing on the target area in the shooting preview interface and displaying the beautified image;
receiving the user's selection of the enabling option;
enabling the preset function according to the selection;
that is, the user can enable the preset function manually according to his or her own needs;
alternatively, the preset function may be enabled as follows: when shooting is triggered, the preset function is enabled at the same time, which spares the user the step of enabling the preset function manually, saves the user's time and improves the user experience.
Optionally, referring to Fig. 5, after performing beautification processing on the target area in the shooting preview interface and displaying the beautified image, the processor 701 is further used to execute the one or more programs stored in the memory 702 to implement the following steps:
displaying a settings interface, the settings interface including an option for disabling the preset function;
receiving the user's selection of the disabling option;
disabling the preset function according to the selection;
that is, the user can disable the preset function manually according to his or her own needs;
alternatively, the preset function may be disabled as follows: when shooting is exited, the preset function is disabled at the same time, which spares the user the step of disabling the preset function manually, saves the user's time and improves the user experience.
Through the implementation of this embodiment, the image is beautified automatically in the shooting preview interface according to the user's action, which spares the user the trouble of manually beautifying the image afterwards, saves the user's time and improves the user experience.
Fifth embodiment
This embodiment provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the following steps:
during shooting, detecting an action of a user by means of at least one ultrasonic sensor;
at least one ultrasonic sensor is provided in the terminal and may be arranged at positions such as the back or the sides of the terminal, and the ultrasonic sensor includes an ultrasonic emitter and an ultrasonic receiver;
determining a region corresponding to the action in a shooting preview interface as a target area;
the user's action can be detected by the ultrasonic sensor, and the position of the action can be detected as well; this position is mapped into the shooting preview interface, so that the position corresponding to the action in the shooting preview interface can be determined;
for example, during shooting, the ultrasonic sensor detects that the user's eyes perform an opening-and-closing action and identifies the position of the eyes; that position is mapped into the shooting preview interface, so that the position of the eyes in the shooting preview interface can be determined;
performing beautification processing on the target area in the shooting preview interface, and displaying the beautified image.
In one embodiment, when the ultrasonic sensor is in a high-power operating state,
detecting the action of the user by means of the at least one ultrasonic sensor comprises: detecting a facial action of the user by means of the at least one ultrasonic sensor, the facial action including at least one of the following: an action of the eyes, an action of the eyebrows, an action of the lips, an action of the nose, and an action of the ears.
An action of the eyes is, for example, opening the eyes, closing the eyes or widening the eyes;
an action of the eyebrows is, for example, raising and lowering the eyebrows;
an action of the lips is, for example, opening the mouth, closing the mouth, smiling or laughing;
an action of the nose is, for example, flaring the nostrils;
an action of the ears is, for example, moving the ears by tensing the facial muscles.
When the facial action includes an action of the eyes, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's eyes are located in the shooting preview interface as the target area;
when beautification processing is performed on the eyes in the shooting preview interface, it may be beautification such as enlarging or brightening the eyes.
When the facial action includes an action of the eyebrows, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's eyebrows are located in the shooting preview interface as the target area;
when beautification processing is performed on the eyebrows in the shooting preview interface, it may be beautification such as modifying or changing the eyebrow shape, thickening the eyebrows or lightening the eyebrows.
When the facial action includes an action of the lips, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's lips are located in the shooting preview interface as the target area;
when beautification processing is performed on the lips in the shooting preview interface, it may be beautification such as deepening the lip color or changing the lipstick color;
when the action of the lips shows the teeth, teeth-whitening processing may also be performed.
When the facial action includes an action of the nose, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's nose is located in the shooting preview interface as the target area;
when beautification processing is performed on the nose in the shooting preview interface, it may be beautification such as making the nose appear slimmer.
When the facial action includes an action of the ears, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the user's ears are located in the shooting preview interface as the target area;
when beautification processing is performed on the ears in the shooting preview interface and the user is not wearing earrings, beautification such as adding earrings to the ears shown in the shooting preview interface may be performed.
Wherein, when the facial action of the user is detected by ultrasonic sensors and the number of ultrasonic sensors is greater than 1,
the receivers of the ultrasonic sensors whose distance from the face area in the shooting preview interface is less than a preset threshold are turned on, and the receivers of the ultrasonic sensors whose distance from the face area in the shooting preview interface is greater than or equal to the preset threshold are turned off.
For example, referring to Fig. 3, a schematic diagram provided by this embodiment of the placement of ultrasonic sensors on a mobile phone and of what the phone's shooting preview interface displays when the user takes a selfie: four ultrasonic sensors A, B, C and D are arranged from top to bottom on the left side of the phone, and four ultrasonic sensors E, F, G and H are arranged from top to bottom on the right side. When the user takes a selfie with the front camera, the face area lies in the upper part of the shooting preview interface, so only the receivers of the four ultrasonic sensors A, B, E and F in the upper half of the shooting preview interface are turned on, while the receivers of the four ultrasonic sensors C, D, G and H in the lower half are turned off.
In one embodiment, when the ultrasonic sensor is in a low-power operating state,
detecting the action of the user by means of the at least one ultrasonic sensor comprises: detecting a limb action of the user by means of the at least one ultrasonic sensor, the limb action including at least one of the following: an action of an arm, an action of a leg;
an action of an arm is, for example, waving the arm;
an action of a leg is, for example, swinging the leg;
determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region swept over by the user's limb action in the shooting preview interface as the target area.
For example, referring to Fig. 4, a schematic diagram of a target area provided by this embodiment: during shooting, the user in the shooting preview interface can be seen waving an arm from the left side of the screen to the right side; the region that the user's arm has swept over in the shooting preview interface is then determined as the target area.
Wherein, performing beautification processing on the target area in the shooting preview interface comprises: performing blurring, and/or mosaic, and/or scribble processing on the target area in the shooting preview interface;
for example, in Fig. 4, the region swept over by the user's arm is blurred, and/or mosaicked, and/or scribbled over.
Optionally, after performing beautification processing on the target area in the shooting preview interface and displaying the beautified image, the one or more programs can also be executed by the one or more processors to implement the following steps:
receiving an operation of the user on a confirm-shooting button;
according to the operation, capturing and saving the beautified image displayed in the shooting preview interface;
for example, while the user is taking a selfie, the ultrasonic sensor detects an action of the user's eyes, so eye-brightening and eye-enlargement processing are performed on the region where the eyes are located in the shooting preview interface, and the shooting preview interface displays the beautified image; when the user presses the confirm-shooting button, the image with eye-brightening and eye-enlargement processing applied is captured and saved.
Optionally, referring to Fig. 5, a schematic diagram of a settings-interface display provided by this embodiment, before detecting the action of the user by means of the at least one ultrasonic sensor during shooting, the one or more programs can also be executed by the one or more processors to implement the following steps:
displaying a settings interface, the settings interface including an option for enabling a preset function, the preset function comprising: during shooting, detecting the action of the user by means of the at least one ultrasonic sensor; determining the region corresponding to the action in the shooting preview interface as the target area; performing beautification processing on the target area in the shooting preview interface and displaying the beautified image;
receiving the user's selection of the enabling option;
enabling the preset function according to the selection;
that is, the user can enable the preset function manually according to his or her own needs;
alternatively, the preset function may be enabled as follows: when shooting is triggered, the preset function is enabled at the same time, which spares the user the step of enabling the preset function manually, saves the user's time and improves the user experience.
Optionally, referring to Fig. 5, after performing beautification processing on the target area in the shooting preview interface and displaying the beautified image, the one or more programs can also be executed by the one or more processors to implement the following steps:
displaying a settings interface, the settings interface including an option for disabling the preset function;
receiving the user's selection of the disabling option;
disabling the preset function according to the selection;
that is, the user can disable the preset function manually according to his or her own needs;
alternatively, the preset function may be disabled as follows: when shooting is exited, the preset function is disabled at the same time, which spares the user the step of disabling the preset function manually, saves the user's time and improves the user experience.
Through the implementation of this embodiment, the image is beautified automatically in the shooting preview interface according to the user's action, which spares the user the trouble of manually beautifying the image afterwards, saves the user's time and improves the user experience.
It should be noted that, in this document, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
The above serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to execute the methods of the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application thereof in other related technical fields, is likewise included within the scope of protection of the present invention.

Claims (10)

1. A shooting method, characterized in that the shooting method comprises the following steps:
during shooting, detecting an action of a user by means of at least one ultrasonic sensor;
determining a region corresponding to said action in a shooting preview interface as a target area;
performing beautification processing on said target area in said shooting preview interface, and displaying the beautified image.
2. The image pickup method according to claim 1, characterised in that, when the ultrasonic sensor is in a high-power operating state,
detecting the action of the user by the at least one ultrasonic sensor comprises: detecting a facial action of the user by the at least one ultrasonic sensor, the facial action including at least one of the following: an action of the eyes, an action of the eyebrows, an action of the lips, an action of the nose, and an action of the ears;
when the facial action includes the action of the eyes, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the eyes of the user are located in the shooting preview interface as the target area;
when the facial action includes the action of the eyebrows, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the eyebrows of the user are located in the shooting preview interface as the target area;
when the facial action includes the action of the lips, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the lips of the user are located in the shooting preview interface as the target area;
when the facial action includes the action of the nose, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the nose of the user is located in the shooting preview interface as the target area;
when the facial action includes the action of the ears, determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region where the ears of the user are located in the shooting preview interface as the target area.
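Claim 2 amounts to a direct lookup from the type of facial action to the face region used as the target area. A small Python sketch of that mapping follows; the per-feature landmark boxes are assumed to come from an ordinary face detector run on the preview frame, and all names and values are hypothetical.

```python
# Sketch of claim 2's lookup: the target area is the region of whichever facial feature
# performed the detected action. The landmark boxes are made-up values standing in for
# the output of a face detector run on the preview frame.
from typing import Dict, Optional, Tuple

Region = Tuple[int, int, int, int]   # x, y, width, height


def facial_action_to_target_area(facial_action: str,
                                 face_landmarks: Dict[str, Region]) -> Optional[Region]:
    """Return the region where the acting facial feature sits in the preview interface."""
    # facial_action is one of: "eyes", "eyebrows", "lips", "nose", "ears"
    return face_landmarks.get(facial_action)


landmarks = {"eyes": (120, 80, 90, 30), "lips": (130, 160, 70, 30)}
print(facial_action_to_target_area("eyes", landmarks))   # (120, 80, 90, 30)
print(facial_action_to_target_area("nose", landmarks))   # None: feature not located
```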
3. The image pickup method according to claim 2, characterised in that, when the facial action of the user is detected by the ultrasonic sensors and the number of the ultrasonic sensors is greater than 1,
the receivers of the ultrasonic sensors whose distance to the face region in the shooting preview interface is less than a preset threshold are turned on, and the receivers of the ultrasonic sensors whose distance to the face region in the shooting preview interface is greater than or equal to the preset threshold are turned off.
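Claim 3 gates the receivers of multiple ultrasonic sensors by their distance to the face region. A minimal Python sketch of that gating rule follows, with a stand-in sensor class so the example runs on its own; the sensor API and distance values are assumptions.

```python
# Sketch of the receiver gating in claim 3: with more than one ultrasonic sensor, only
# those closer to the face region than the preset threshold keep listening.
# The sensor class and the distance figures are hypothetical stand-ins.
from typing import Sequence


class Sensor:
    def __init__(self) -> None:
        self.listening = False

    def receiver_on(self) -> None:
        self.listening = True

    def receiver_off(self) -> None:
        self.listening = False


def gate_receivers(sensors: Sequence[Sensor],
                   distances_to_face: Sequence[float],
                   threshold: float) -> None:
    """Turn each sensor's receiver on or off by its distance to the face region."""
    for sensor, distance in zip(sensors, distances_to_face):
        if distance < threshold:
            sensor.receiver_on()    # closer than the threshold: keep the receiver open
        else:
            sensor.receiver_off()   # at or beyond the threshold: switch the receiver off


sensors = [Sensor(), Sensor(), Sensor()]
gate_receivers(sensors, distances_to_face=[2.0, 7.5, 4.9], threshold=5.0)
print([s.listening for s in sensors])   # [True, False, True]
```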
4. The image pickup method according to claim 1, characterised in that, when the ultrasonic sensor is in a low-power operating state,
detecting the action of the user by the at least one ultrasonic sensor comprises: detecting a limb action of the user by the at least one ultrasonic sensor, the limb action including at least one of the following: an action of an arm and an action of a leg;
determining the region corresponding to the action in the shooting preview interface as the target area comprises: determining the region after the limb action of the user in the shooting preview interface as the target area.
5. The image pickup method according to claim 4, characterised in that performing beautification processing on the target area in the shooting preview interface comprises: performing blurring, and/or mosaic, and/or scribble processing on the target area in the shooting preview interface.
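For illustration, the blurring and mosaic operations named in claim 5 could be applied to just the target area of a preview frame as in the Python/OpenCV sketch below. The library choice is an assumption (the claim does not prescribe one), and the scribble variant is omitted because its drawing style is not described.

```python
# Sketch of region-level processing for claim 5: blur or pixelate only the target area.
# OpenCV is used purely for illustration; the patent does not prescribe a library.
import cv2
import numpy as np


def blur_region(frame: np.ndarray, region, ksize: int = 25) -> np.ndarray:
    """Gaussian-blur only the target area of the preview frame."""
    x, y, w, h = region
    out = frame.copy()
    k = ksize | 1  # kernel size must be odd
    out[y:y + h, x:x + w] = cv2.GaussianBlur(frame[y:y + h, x:x + w], (k, k), 0)
    return out


def mosaic_region(frame: np.ndarray, region, block: int = 12) -> np.ndarray:
    """Pixelate the target area by downscaling it and scaling it back up."""
    x, y, w, h = region
    out = frame.copy()
    patch = frame[y:y + h, x:x + w]
    small = cv2.resize(patch, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    out[y:y + h, x:x + w] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    return out


# Example on a synthetic frame with a made-up target area:
frame = np.full((240, 320, 3), 200, dtype=np.uint8)
target = (100, 60, 80, 80)
processed = mosaic_region(blur_region(frame, target), target)
```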
6. The image pickup method according to any one of claims 1 to 5, characterised in that, after performing beautification processing on the target area in the shooting preview interface and displaying the image after the beautification processing, the method further comprises:
receiving an operation of the user on a confirm-shooting button;
according to the operation, shooting and saving the image after the beautification processing displayed in the shooting preview interface.
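Claim 6 simply persists the frame that is already shown, beautified, in the preview when the confirm-shooting button is pressed, so no post-capture editing is needed. A short Python sketch follows; the save path and the button wiring are hypothetical.

```python
# Sketch of claim 6: save the already-beautified preview frame when the user confirms
# the shot. The file path and button callback wiring are hypothetical.
import cv2
import numpy as np


def on_confirm_shooting(current_preview_frame: np.ndarray,
                        save_path: str = "capture.jpg") -> bool:
    """Write the beautified preview frame to storage as the captured photo."""
    return cv2.imwrite(save_path, current_preview_frame)
```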
7. The image pickup method according to any one of claims 1 to 5, characterised in that, before detecting the action of the user by the at least one ultrasonic sensor during shooting, the method further comprises:
displaying a settings interface, the settings interface including an enable option for a preset function, the preset function comprising: during shooting, detecting the action of the user by the at least one ultrasonic sensor; determining the region corresponding to the action in the shooting preview interface as the target area; and performing beautification processing on the target area in the shooting preview interface and displaying the image after the beautification processing;
receiving the user's selection of the enable option;
enabling the preset function according to the selection operation;
or, the manner of enabling the preset function comprises: when the shooting is triggered, enabling the preset function at the same time.
8. The image pickup method according to claim 7, characterised in that, after performing beautification processing on the target area in the shooting preview interface and displaying the image after the beautification processing, the method further comprises:
displaying a settings interface, the settings interface including a disable option for the preset function;
receiving the user's selection of the disable option;
disabling the preset function according to the selection operation;
or, the manner of disabling the preset function comprises: when the shooting is exited, disabling the preset function at the same time.
9. A terminal, characterised in that the terminal comprises a processor, a memory, and a communication bus;
the communication bus is configured to implement connection and communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory, so as to implement the steps of the image pickup method according to any one of claims 1 to 8.
10. A computer-readable recording medium, characterised in that the computer-readable recording medium stores one or more programs, and the one or more programs are executable by one or more processors, so as to implement the steps of the image pickup method according to any one of claims 1 to 8.
CN201710624684.6A 2017-07-27 2017-07-27 A kind of image pickup method, terminal and computer-readable recording medium Pending CN107547797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710624684.6A CN107547797A (en) 2017-07-27 2017-07-27 A kind of image pickup method, terminal and computer-readable recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710624684.6A CN107547797A (en) 2017-07-27 2017-07-27 A kind of image pickup method, terminal and computer-readable recording medium

Publications (1)

Publication Number Publication Date
CN107547797A true CN107547797A (en) 2018-01-05

Family

ID=60970403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710624684.6A Pending CN107547797A (en) 2017-07-27 2017-07-27 A kind of image pickup method, terminal and computer-readable recording medium

Country Status (1)

Country Link
CN (1) CN107547797A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070098396A1 (en) * 2005-11-02 2007-05-03 Olympus Corporation Electronic camera
US20100053358A1 (en) * 2008-08-26 2010-03-04 Canon Kabushiki Kaisha Image capturing apparatus and method for controlling the same
CN102932541A (en) * 2012-10-25 2013-02-13 广东欧珀移动通信有限公司 Mobile phone photographing method and system
CN103885706A (en) * 2014-02-10 2014-06-25 广东欧珀移动通信有限公司 Method and device for beautifying face images
CN105320929A (en) * 2015-05-21 2016-02-10 维沃移动通信有限公司 Synchronous beautification method for photographing and photographing apparatus thereof
CN104992402A (en) * 2015-07-02 2015-10-21 广东欧珀移动通信有限公司 Facial beautification processing method and device
CN105096241A (en) * 2015-07-28 2015-11-25 努比亚技术有限公司 Face image beautifying device and method
CN106548117A (en) * 2015-09-23 2017-03-29 腾讯科技(深圳)有限公司 A kind of face image processing process and device
CN105306820A (en) * 2015-10-15 2016-02-03 广东欧珀移动通信有限公司 Method and device for controlling rotation of camera in mobile terminal and mobile terminal
CN105488462A (en) * 2015-11-25 2016-04-13 努比亚技术有限公司 Eye positioning identification device and method
CN106887024A (en) * 2015-12-16 2017-06-23 腾讯科技(深圳)有限公司 The processing method and processing system of photo
CN205631787U (en) * 2016-05-07 2016-10-12 李海祯 Book device is turned over in humanized automation
CN106937054A (en) * 2017-03-30 2017-07-07 维沃移动通信有限公司 Take pictures weakening method and the mobile terminal of a kind of mobile terminal

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110035227A (en) * 2019-03-25 2019-07-19 维沃移动通信有限公司 Special effect display methods and terminal device
CN114303366A (en) * 2019-09-06 2022-04-08 索尼集团公司 Information processing apparatus, information processing method, and information processing program
CN111210491A (en) * 2019-12-31 2020-05-29 维沃移动通信有限公司 Image processing method, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN107277216A (en) A kind of volume adjusting method, terminal and computer-readable recording medium
CN107864357A (en) Video calling special effect controlling method, terminal and computer-readable recording medium
CN107767333A (en) Method, equipment and the computer that U.S. face is taken pictures can storage mediums
CN107844763A (en) A kind of face identification method, mobile terminal and computer-readable recording medium
CN107730433A (en) One kind shooting processing method, terminal and computer-readable recording medium
CN108184057A (en) Flexible screen terminal taking method, flexible screen terminal and computer readable storage medium
CN108055411A (en) Flexible screen display methods, mobile terminal and computer readable storage medium
CN107707729A (en) A kind of terminal go out screen or bright screen method, terminal and computer-readable recording medium
CN107682627A (en) A kind of acquisition parameters method to set up, mobile terminal and computer-readable recording medium
CN107767430A (en) One kind shooting processing method, terminal and computer-readable recording medium
CN107197094A (en) One kind shooting display methods, terminal and computer-readable recording medium
CN108089808A (en) A kind of screen-picture acquisition methods, terminal and computer readable storage medium
CN107948430A (en) A kind of display control method, mobile terminal and computer-readable recording medium
CN107124552A (en) A kind of image pickup method, terminal and computer-readable recording medium
CN107682547A (en) A kind of voice messaging regulation and control method, equipment and computer-readable recording medium
CN107632757A (en) A kind of terminal control method, terminal and computer-readable recording medium
CN107330347A (en) A kind of display methods, terminal and computer-readable recording medium
CN107248137A (en) A kind of method and mobile terminal for realizing image procossing
CN107463324A (en) A kind of image display method, mobile terminal and computer-readable recording medium
CN107103581A (en) A kind of image inverted image processing method, device and computer-readable medium
CN108196777A (en) A kind of flexible screen application process, equipment and computer readable storage medium
CN107992455A (en) A kind of text handling method, terminal and computer-readable recording medium
CN107682630A (en) Dual camera anti-fluttering method, mobile terminal and computer-readable recording medium
CN108257097A (en) U.S. face effect method of adjustment, terminal and computer readable storage medium
CN109799912A (en) A kind of display control method, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20180105)