CN107315529B - Photographing method and mobile terminal - Google Patents


Info

Publication number
CN107315529B
Authority
CN
China
Prior art keywords
target
touch
area
special effect
preview image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710464380.8A
Other languages
Chinese (zh)
Other versions
CN107315529A (en)
Inventor
朱宗伟 (Zhu Zongwei)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201710464380.8A priority Critical patent/CN107315529B/en
Publication of CN107315529A publication Critical patent/CN107315529A/en
Application granted granted Critical
Publication of CN107315529B publication Critical patent/CN107315529B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The embodiment of the invention discloses a photographing method and a mobile terminal. The method comprises the following steps: detecting a touch operation on an object to be processed in a current preview image; determining a target area and a target special effect of the preview image according to the touch operation, the target area comprising the object to be processed; adding the target special effect to the target area of the preview image to obtain a target preview image; and when a shooting instruction is received, generating a target photo according to the target preview image.

Description

Photographing method and mobile terminal
Technical Field
The embodiment of the invention relates to the field of communication, in particular to a photographing method and a mobile terminal.
Background
At present, more and more users shoot photos through mobile terminals such as mobile phones and tablet computers.
In an actual photographing scene, if a person or object that the user did not intend to capture appears in the background, the resulting picture is flawed and fails to meet the user's shooting requirements. For example, when a user photographs a scenic spot while passersby or vehicles pass through the background, the picture contains content irrelevant to the scenic spot, which degrades its display effect and may leave the user dissatisfied.
Therefore, the current photographing method has the problem that the photographed picture does not meet the photographing requirement of the user.
Disclosure of Invention
The invention provides a photographing method, a photographing device, and a mobile terminal, aiming to solve the problem described in the background that photographed pictures fail to meet users' shooting requirements.
In a first aspect, a photographing method is provided, the method comprising:
detecting a touch operation on an object to be processed in a current preview image;
determining a target area and a target special effect of the preview image according to the touch operation, the target area comprising the object to be processed;
adding the target special effect to the target area of the preview image to obtain a target preview image;
and when a shooting instruction is received, generating a target photo according to the target preview image.
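The four claimed steps can be sketched as a minimal pipeline. This is a simplification for illustration only: the function names, the event dictionary, and the blur threshold are all hypothetical, not part of the patent.

```python
def detect_touch(preview, touch_event):
    # Step 1: detect a touch operation on an object to be processed.
    return touch_event

def determine_area_and_effect(touch_event):
    # Step 2: target area from the touch area, target special effect
    # from the touch intensity (both heavily simplified here).
    area = touch_event["area"]
    effect = "blur" if touch_event["intensity"] > 0 else None
    return area, effect

def add_effect(preview, area, effect):
    # Step 3: attach the effect to the target area of the preview image.
    return {"image": preview, "area": area, "effect": effect}

def generate_photo(target_preview, shoot_instruction):
    # Step 4: on a shooting instruction, generate the target photo
    # from the target preview image.
    return target_preview if shoot_instruction else None
```

A usage pass would feed a touch event through steps 1 and 2, build the target preview in step 3, and emit the photo in step 4.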
In a second aspect, a mobile terminal is provided, the mobile terminal comprising:
a touch operation detection unit, configured to detect a touch operation on an object to be processed in a current preview image;
an area and special effect determining unit, configured to determine a target area and a target special effect of the preview image according to the touch operation, the target area comprising the object to be processed;
a target preview image acquisition unit, configured to add the target special effect to the target area of the preview image to obtain a target preview image;
and a target photo generation unit, configured to generate a target photo according to the target preview image when a shooting instruction is received.
In this way, according to the embodiment of the present invention, the target area containing the defective content and the target special effect of the preview image are determined according to the user's touch operation on the preview image; the target special effect is added to the target area to obtain a target preview image; and the photo is generated from the target preview image. Because a special effect that reduces the visibility of the defective content is applied to that content, the display effect of the photo is improved and the user's shooting requirement is met. Moreover, the user obtains a photo with a better display effect through a simple touch operation, without re-photographing or applying special effect processing to a defective photo after the fact, which saves the user's time and improves the user experience.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating the steps of a photographing method according to a first embodiment of the present invention;
Fig. 2 is a flowchart illustrating the steps of a photographing method according to a second embodiment of the present invention;
Fig. 3A is a block diagram of a mobile terminal according to a third embodiment of the present invention;
Fig. 3B is a block diagram of another mobile terminal according to the third embodiment of the present invention;
Fig. 4 is a diagram illustrating the initiation of blurring effect processing on a preview image according to the present invention;
Fig. 5 is a diagram illustrating the setting of the blurring effect level according to the present invention;
Fig. 6 is a schematic diagram of a blurring process flow according to the present invention;
Fig. 7 is a block diagram of a mobile terminal according to another embodiment of the present invention;
Fig. 8 is a block diagram of a mobile terminal according to still another embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example one
Fig. 1 is a flowchart of steps of a photographing method according to a first embodiment of the present invention, where the method is applied to a mobile terminal, and the method includes:
step 101, detecting a touch operation for an object to be processed in a current preview image.
The embodiment of the invention can be applied to mobile terminals such as mobile phones and tablet computers. When a user takes a picture with a photographing application on the mobile terminal, the application provides a shooting preview interface on the display screen, which shows a preview image of the current scene to be shot. From the preview image, the user can judge whether the current scene meets his or her shooting requirements, and may then submit a shooting instruction to capture the scene in the current preview image.
The display screen of the mobile terminal may include a touch sensing module or a pressure touch sensing module for sensing touch operations such as a touch pressing operation, a touch stopping operation, or a touch clicking operation of a user.
In a specific implementation of the embodiment of the application, when a user takes a picture, the user may perform a certain touch operation on an object to be processed of a preview image displayed on a display screen. The touch operation may include a touch pressing operation, a touch staying operation, a touch clicking operation, and the like.
The object to be processed may include an object that affects the display effect of the photograph and needs special effect processing. Such as pedestrians or vehicles passing by, a relatively worn building, etc.
Step 102, determining a target area and a target special effect of the preview image according to the touch operation; the target area includes the object to be processed.
In specific implementation, a target area to which a special effect needs to be added and the special effect needs to be added can be determined in a preview image according to touch operation of a user.
There may be various specific implementation manners for determining the target area and the target special effect according to the touch operation.
In one implementation of determining the target area and the target special effect according to the touch operation, both may be determined through a single touch operation. First, the touch area of the touch operation and the touch intensity on that area are read. Then, the target area is determined from the touch area, and the target special effect from the touch intensity.
A touch operation such as a touch pressing, touch staying, or touch clicking operation has a touch area. For example, when a touch press is performed on an area of the display screen, that area senses the pressing pressure, and the area generating the press sensing can be used as the touch area. Similarly, when a touch stay is performed on an area and the stay time exceeds a preset duration, the area generating the stay sensing can be used as the touch area; and when a touch click is performed on an area, the area generating the click response can be used as the touch area.
The touch operation may include a touch pressing operation, a touch staying operation or a touch clicking operation, and the touch intensity may include a pressing pressure of the touch pressing operation, a staying time of the touch staying operation or a clicking number of the touch clicking operation.
A touch operation may also carry a touch intensity. For a touch pressing operation, the magnitude of the pressing pressure can serve as the touch intensity; for a touch staying operation, the length of the stay time; and for a touch clicking operation, the number of clicks.
After the touch area and the touch intensity are read, the target area can be determined according to the touch area, and the target special effect can be determined according to the touch intensity.
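The three kinds of touch intensity named above can be folded into one scalar for the later level lookup. This is a hedged sketch; the function name and the idea of a single unified scalar are illustrative assumptions, not from the patent.

```python
def touch_intensity(kind, *, pressure_n=0.0, dwell_s=0.0, clicks=0):
    # Pressing pressure, stay time, or click count, per the text;
    # unifying them into one number is an assumption for illustration.
    if kind == "press":
        return pressure_n
    if kind == "stay":
        return dwell_s
    if kind == "click":
        return clicks
    raise ValueError(f"unknown touch kind: {kind}")
```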
There are various ways to determine the target area from the touch area. For an object to be processed that is static, the touch area containing the object can be used directly as the target area. For an object in motion, however, the region where the object is located may change, so the current touch area can differ from the object's actual region. In that case the object in the current touch area is first identified and then tracked, and the actual region where the object is located after moving is taken as the target area, so that the target area still contains the object.
There are likewise various ways to determine the target special effect from the touch intensity. For example, a special effect level may be preset for each touch intensity; a target level is then determined for the current intensity, and the effect at that level, for a preset or user-selected effect type, is taken as the target special effect. Alternatively, when the touch intensity changes continuously within a preset time, different levels (and thus different effects) can be determined for the successive intensities. The target level may then be determined from the most recently read intensity; or the successive intensities, with their corresponding levels and effects, may be presented to the user as a list, from which the user selects one, and the effect corresponding to the selected intensity becomes the current target special effect.
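The intensity-to-level mapping described above can be sketched as a threshold table. The thresholds and level values here are illustrative assumptions; the patent only specifies that a higher intensity corresponds to a higher special effect level.

```python
# (minimum intensity, special effect level), highest first; values assumed.
LEVEL_THRESHOLDS = [(10.0, 3), (5.0, 2), (0.0, 1)]

def effect_level(intensity):
    # Return the preset level for the first threshold the intensity meets.
    for threshold, level in LEVEL_THRESHOLDS:
        if intensity >= threshold:
            return level
    return 0  # below every threshold: no effect
```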
In another implementation, the target area and the target special effect are determined by at least two touch operations. First, the user indicates a moving object identification area on the preview image through a first touch operation, such as a touch pressing, staying, or clicking operation, and the target area is determined from that identification area. Then, a preset touch intensity reading area is provided on the preview image; the user performs a second touch operation on this reading area, and the target special effect is determined from the touch intensity of the second touch operation on it.
As can be seen from the above example, there may be a plurality of specific implementation manners for determining the target area and the target special effect according to the touch operation, and those skilled in the art may adopt different implementation manners, which is not limited in this embodiment of the present invention.
Step 103, adding the target special effect to the target area of the preview image to obtain a target preview image.
In a specific implementation, after the target area and the target special effect of the preview image are determined, the target special effect may be added to the target area of the current preview image.
In practical application, different effect types can be adopted to generate a target effect, and the effect types can include blurring, fading, graying and other effects for reducing the visibility of flaw content.
A special effect type may be preset before photographing, or a plurality of special effect types may be presented for the user to select from after the touch operation is detected; once a type is chosen, the target special effect is determined according to the target special effect level of that type.
After the target special effect is added to the target area of the preview image, a target preview image can be obtained. The target preview image can be displayed to a user on a shooting preview interface, the user can preview the picture with the special effect in real time, and when the user is satisfied with the target preview image, the user can shoot the target preview image.
And 104, when a shooting instruction is received, generating a target photo according to the target preview image.
In specific implementation, when the user is satisfied with the target preview image of the current scene, a shooting instruction can be submitted. When the shooting instruction is received, the photo may be generated from the target preview image, for example by directly using the target preview image as the photo, or by photographing the current scene and then applying the target special effect to the target area of the resulting original photo, according to the target area and target special effect of the target preview image, to obtain the target photo.
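Applying the target special effect to the target area of a photo can be sketched with a simple box blur confined to a rectangle, on a grayscale image represented as a list of rows. The representation, function name, and choice of a box blur are illustrative assumptions; the patent does not specify a blur algorithm.

```python
def blur_region(img, area, radius=1):
    # Box-blur only the pixels inside `area` (inclusive x0, y0, x1, y1),
    # leaving the rest of the image untouched.
    x0, y0, x1, y1 = area
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) // len(vals)
    return out
```

Restricting the loop bounds to the target area is what keeps the special effect from touching the rest of the photo.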
For example, suppose a user needs to photograph a building while a passing pedestrian or vehicle appears in the background of the preview image and the user wants it processed. According to the user's pressing touch operation on the pedestrian or vehicle in the preview image, the target area containing it is determined, and a blurring special effect is added to that target area, so that the pedestrian or vehicle is blurred and its influence on the presentation of the photo is reduced.
According to this embodiment of the invention, the target area containing the defective content and the target special effect of the preview image are determined from the user's touch operation on the preview image; the target special effect is added to the target area to obtain a target preview image; and the photo is generated from that target preview image. Since a special effect that reduces the visibility of the defective content is applied, the display effect of the photo is improved and the user's shooting requirement is met.
Moreover, the user can obtain the photo with better display effect through simple touch operation without re-photographing or performing special effect processing on the defective photo after photographing, so that the time of the user is saved, and the user experience is improved.
Example two
Fig. 2 is a flowchart of steps of a photographing method according to a second embodiment of the present invention, where the method is applied to a mobile terminal, and the method includes:
step 201, receiving a special effect processing request submitted by a user for a target photographing application, or judging that the user starts the target photographing application.
Step 202, triggering detection of touch operation for the photographing application.
In a specific implementation, a user may submit a special effect processing request when taking a picture with a shooting application, for example by pressing the preview image on the application's shooting preview interface, or by clicking a submission entrance for the request provided somewhere in the application. When the request is received, detection of the user's touch operation on the preview image of the application is triggered, and the corresponding special effect processing is performed.
The method can also monitor whether the user starts a certain photographing application; after the application is started, detection of the user's touch operation on its preview image is triggered and the corresponding special effect processing is performed.
In practical applications, a person skilled in the art may trigger the detection of the touch operation in various ways. For example, a special effect processing plug-in may be embedded in the photographing application, and when the photographing application is started, the plug-in is called to detect the touch operation and perform corresponding special effect processing.
Step 203, detecting a touch operation for the object to be processed in the current preview image.
Step 204, determining a target area and a target special effect of the preview image according to the touch operation; the target area includes the object to be processed.
Optionally, the step 204 may comprise the following sub-steps:
substep S11, obtaining a touch area of the touch operation, and determining the target area according to the touch area;
and a substep S12, obtaining the touch intensity on the touch area in the touch operation, and determining the target special effect according to the read touch intensity.
In practical applications, the step of determining the target area according to the touch area may include:
substep S11-1, identifying an object to be processed in the touch area;
a substep S11-2 of judging whether the object to be processed is in a motion state; if not, performing the sub-step S11-3; if yes, go to substep S11-4;
substep S11-3, determining the touch area as the target area;
and a substep S11-4 of tracking the object to be processed and taking the region where the moved object to be processed is located as the target region.
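Sub-steps S11-1 through S11-4 can be sketched as a single decision function. Object recognition and tracking are stubbed out as hypothetical callables, since the patent leaves their implementation to the terminal's existing capabilities.

```python
def determine_target_area(touch_area, identify, is_moving, track):
    obj = identify(touch_area)   # S11-1: identify the object in the touch area
    if not is_moving(obj):       # S11-2: judge whether it is in motion
        return touch_area        # S11-3: static object -> touch area itself
    return track(obj)            # S11-4: moving object -> tracked region
```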
In specific implementation, for a detected touch operation, a touch area of the touch operation and a touch intensity on the touch area may be read, and a target area and a target special effect are determined by using the touch area and the touch intensity, respectively.
In practical applications, the touch operation may include a touch pressing operation, a touch staying operation, or a touch clicking operation, and the touch strength includes a pressing pressure of the touch pressing operation, a staying time of the touch staying operation, or a number of clicks of the touch clicking operation.
Generally, a user performs a touch operation, such as a touch-press operation, a touch-stay operation, or a touch-click operation, on an object to be processed in a current preview image, so as to form a touch area in an area where the object to be processed is currently located. Therefore, after the touch area is read, the object to be processed in the touch area can be identified, and whether the object to be processed is in a motion state or not can be judged.
If the object to be processed is currently in a static state, the current touch area can be used as a target area, so as to perform subsequent special effect processing on the target area.
If the object to be processed is currently in motion, it may be in another area of the preview image at the next moment, i.e., it may have left the touch area, and applying the special effect to the current touch area would then miss the object. Therefore, the mobile terminal's dynamic object tracking function can be invoked to track the object's motion trajectory, identify the region where the object is currently located after moving, and use that region as the target area, so that the special effect processing is applied to a region that contains the object.
In practical applications, before the step 203, the method may further include:
when a moving object tracking request is received, the current preview image is judged to contain the object to be processed in the moving state.
In particular implementations, a moving object tracking request may be submitted by the user before the touch operation is detected. For example, when a special effect processing request is received or the user is detected starting a photographing application, a selection menu may be presented for the user to confirm whether an object in motion currently needs special effect processing; when the user confirms this, it is determined that the current preview image contains an object to be processed in a moving state.
Optionally, when the preview image includes an object to be processed in a motion state, the touch operations include a first touch operation and a second touch operation, and step 204 may include the following sub-steps:
a substep S21, acquiring a moving object identification area indicated by the first touch operation, and determining the target area according to the moving object identification area; wherein the moving object identification area comprises an area determined according to a touch area of the first touch operation;
and a substep S22, obtaining the touch strength of the second touch operation in a preset touch strength reading area, and determining the target special effect according to the read touch strength.
Aiming at the condition that the current preview image is determined to contain the object to be processed in the motion state, a target area and a target special effect can be respectively determined through two different operations of a first touch operation and a second touch operation of a user.
The user may indicate a moving object identification area on the preview image through the first touch operation, such as a touch pressing, staying, or clicking operation. For example, the photographing application may provide tools for delimiting an object or area, such as mark windows with a certain marking range. The user selects a mark window of a certain shape and size and drags it to the current position of the object to be processed; after dragging stops, the area inside the window serves as the moving object identification area, within which the moving object to be processed is identified. Alternatively, the user may select a mark window and then click directly on the preview image, so that the selected window is generated at the clicked position and the area inside it serves as the moving object identification area.
In addition, a touch intensity reading area may be set at a certain preset position of the preview image, so that the user performs a second touch operation such as a touch pressing operation, a touch staying operation, or a touch clicking operation on the touch intensity reading area. When the second touch operation performed by the user is detected, the touch intensity of the user can be read, and the target special effect is determined according to the read touch intensity.
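The preset touch intensity reading area amounts to a hit test on the second touch operation. The rectangle coordinates and function name below are assumptions for illustration.

```python
READING_AREA = (0, 90, 30, 100)  # assumed (x0, y0, x1, y1) rectangle

def read_second_touch(x, y, intensity, area=READING_AREA):
    # Only a touch that lands inside the preset reading area yields
    # an intensity for determining the target special effect.
    x0, y0, x1, y1 = area
    if x0 <= x <= x1 and y0 <= y <= y1:
        return intensity
    return None  # touches outside the reading area are ignored
```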
Optionally, the step of determining the target region according to the moving object identification region includes:
a substep S21-1 of identifying an object to be processed in the moving object identification region;
and a substep S21-2 of tracking the object to be processed and taking the region where the moved object to be processed is located as the target region.
In the specific implementation, the to-be-processed object in the moving object identification area can be identified, a dynamic object tracking function on the mobile terminal is called to track the motion track of the to-be-processed object, the area where the moved to-be-processed object is located at present is identified, and the area is used as a target area, so that when special effect processing is performed, processing can be performed on the area containing the to-be-processed object.
Optionally, the step of determining the target special effect according to the read touch strength may include:
taking the touch control intensity selected by a user on a preset special effect selection interface as target touch control intensity, wherein the special effect selection interface comprises a plurality of historically read touch control intensities; or, taking the currently read touch strength as the target touch strength;
searching a target special effect grade corresponding to the target touch control strength;
and determining the target special effect according to the target special effect grade of the set special effect type.
It should be noted that the touch intensity may be a touch intensity read in the touch area, or may be a touch intensity read in a preset touch intensity reading area.
In a specific implementation, the touch intensity of the user's touch operation may remain constant or change continuously. For example, where the pressing pressure of a touch pressing operation is used as the touch intensity, the user's pressing pressure may keep increasing, so the touch intensity increases accordingly. Similarly, where the staying duration of a touch staying operation is used as the touch intensity, the duration grows the longer the user stays, so the touch intensity increases continuously; and where the click count of a touch clicking operation is used as the touch intensity, repeated clicks increase the touch intensity step by step. Therefore, a plurality of different touch intensities occurring during the user's touch operation can be read one by one and recorded as historically read touch intensities. For these historical readings, a preset special effect selection interface may be provided in the photographing application; the interface displays the historically read touch intensities, the user selects one of them, and the selected intensity is taken as the target touch intensity. For example, several pressing pressures generated by the user's previous touch pressing operations, such as 1 N, 5 N, and 10 N, are displayed visually on the special effect selection interface for the user to select.
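The recording of historically read touch intensities can be sketched as follows; `IntensityHistory` and its methods are illustrative names, not part of any real terminal API:

```python
# Illustrative sketch only: recording successive touch-intensity readings
# (pressing pressure, staying duration, or click count) as the history
# offered on the special effect selection interface.

class IntensityHistory:
    def __init__(self):
        self.readings = []

    def record(self, intensity):
        """Append each newly read intensity as a historical reading."""
        self.readings.append(intensity)

    def selectable(self):
        """Distinct historical intensities shown for the user to pick."""
        return sorted(set(self.readings))

history = IntensityHistory()
for pressure in (1, 5, 10, 5):      # e.g. pressing pressures in newtons
    history.record(pressure)
# The user picks one displayed value as the target touch intensity.
target_intensity = history.selectable()[-1]   # 10
```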
In addition, the currently read touch intensity may also be directly used as the target touch intensity.
After the target touch intensity is determined, the special effect level corresponding to it may be looked up as the target special effect level. For example, different special effect levels such as 1, 2, and 3 may be set for different pressing pressures. Generally, the greater the touch intensity, the higher the corresponding special effect level, and hence the stronger the blurring, fading, or other special effect.
After the target special effect level is determined, the target special effect may be determined for a preset special effect type. It should be noted that the special effect type may be one of blurring, fading, and the like; it may also be selected by the user after the touch operation to determine the type used for the current special effect processing, or selected by the user after submitting the shooting instruction.
For a given special effect type, the target special effect is the effect of that type at the target special effect level. For example, for the blurring effect type, when the target special effect level is determined to be 3, the target special effect is level-3 blurring.
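The two-step lookup described above — target touch intensity to special effect level, then level plus effect type to target special effect — might be sketched as below; the pressure thresholds are hypothetical examples, not values prescribed by the embodiment:

```python
# Illustrative sketch only: look up the special effect level for the
# target touch intensity, then combine it with the preset effect type.
# Thresholds below are hypothetical, not values from the embodiment.

LEVEL_THRESHOLDS = [(0.0, 1), (4.0, 2), (8.0, 3)]  # (min pressure in N, level)

def level_for_intensity(intensity):
    """Higher touch intensity maps to a higher special effect level."""
    level = LEVEL_THRESHOLDS[0][1]
    for threshold, lvl in LEVEL_THRESHOLDS:
        if intensity >= threshold:
            level = lvl
    return level

def target_effect(effect_type, intensity):
    """The target special effect is the preset type at the looked-up level."""
    return (effect_type, level_for_intensity(intensity))

# A 10 N press with the blurring effect type yields level-3 blurring.
effect = target_effect("blurring", 10.0)   # ("blurring", 3)
```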
In practical applications, the special effect selection interface includes special effect levels and/or special effects corresponding to the plurality of touch intensities respectively.
In a specific implementation, the special effect selection interface may include, in addition to the plurality of historically read touch intensities, the special effect level and/or special effect corresponding to each of them. In practical application, each touch intensity, its special effect level, and the corresponding special effect can be displayed to the user as a group. Displaying the special effect may include displaying a thumbnail preview image obtained by adding the effect to the target area of the current preview image, so that the user can select a touch intensity as the target touch intensity according to the displayed thumbnails. In this way, the user can obtain a target special effect meeting his or her requirements without performing the touch operation again.
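The grouped display of intensities, levels, and effects could be assembled as in the following sketch; the entry layout and the `level_of` mapping are assumptions for illustration only:

```python
# Illustrative sketch only: group each historical intensity with its
# special effect level and resulting effect for the selection interface.

def build_selection_entries(history, level_of, effect_type):
    """One entry per historical intensity: (intensity, level, effect)."""
    return [(i, level_of(i), (effect_type, level_of(i))) for i in history]

# Hypothetical level mapping for the sketch.
def level_of(intensity):
    return 1 if intensity < 4 else (2 if intensity < 8 else 3)

entries = build_selection_entries([1.0, 5.0, 10.0], level_of, "blurring")
# entries[2] == (10.0, 3, ("blurring", 3))
```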
In practical applications, before the step of searching for the target special effect level corresponding to the target touch strength, the method further includes:
and setting corresponding special effect levels according to different touch control intensities.
In specific implementation, special effect levels corresponding to different touch intensities may also be preset, that is, a certain touch intensity is bound with a certain special effect level. For example, the user may perform a touch-press operation for a certain special effect level, and bind a press pressure of the touch-press operation to the certain special effect level.
By setting the special effect levels corresponding to different touch intensities, the user can customize the touch intensity of each special effect level, so that habitual operations conveniently yield the target special effect meeting the user's processing requirements.
The touch intensities and their corresponding special effect levels may be set in advance, when the user starts the photographing application, or when the user starts the application and confirms that special effect processing is currently required. The timing of the setting can be chosen by those skilled in the art according to actual needs.
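The user-defined binding of pressing pressures to special effect levels might look like the following sketch; the nearest-match lookup is one plausible way to resolve a later reading against the stored bindings, not a requirement of the embodiment:

```python
# Illustrative sketch only: binding a pressing pressure to a special
# effect level at setup time, then resolving later presses against the
# nearest stored binding (one plausible lookup policy).

class EffectLevelBindings:
    def __init__(self):
        self.bindings = {}            # pressing pressure (N) -> level

    def bind(self, pressure, level):
        """Bind the pressure read during setup to the chosen level."""
        self.bindings[pressure] = level

    def lookup(self, pressure):
        """Return the level bound to the closest recorded pressure."""
        nearest = min(self.bindings, key=lambda p: abs(p - pressure))
        return self.bindings[nearest]

settings = EffectLevelBindings()
settings.bind(1.0, 1)    # light press -> level 1
settings.bind(5.0, 2)
settings.bind(10.0, 3)   # firm press  -> level 3
level = settings.lookup(9.2)   # snaps to the 10.0 N binding -> 3
```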
Step 203, adding the target special effect to the target area of the preview image to obtain a target preview image.
And 204, when a shooting instruction is received, generating a target photo according to the target preview image.
It should be noted that the present invention is also applicable to a scene in which special effect processing is performed on a target photographic subject. For example, a user may perform a touch operation on a target photographic subject in the preview image of the current scene to apply special effect processing, such as sharpening or halation, to the subject and enhance its display effect.
According to the embodiment of the invention, by identifying the object to be processed that is in motion within the touch area and taking the area where it is located after moving as the target area, the target special effect can be added to the area that actually contains the object. This avoids the problem that, if the effect were still added to the original touch area after the object has moved, the object itself would receive no special effect processing.
In order to provide those skilled in the art with a thorough understanding of the present invention, the following description will be given with reference to specific examples of fig. 4 to 6.
FIG. 4 is a diagram illustrating a blurring effect process being initiated on a preview image according to the present invention. As can be seen from the figure, the user can perform a touch-press operation on the pressed point of the current preview image to start the blurring special effect process.
Fig. 5 is a schematic diagram of a blurring special effect level setting process according to the present invention. As can be seen from the figure, after the photographing application is started, the user can enter a mode that binds the pressing pressure of a touch press to a blurring special effect level. In this setting mode, the user binds a certain blurring level to a certain pressing pressure, so that in subsequent special effect processing, once the pressing pressure of a touch pressing operation is read, the bound blurring level can be found.
FIG. 6 is a schematic diagram of a blurring special effect processing flow according to the present invention. As can be seen from the figure, after the photographing application is started, the user can bind pressing pressures to blurring special effect levels, and can then select whether the object to be blurred is currently moving.
If the user chooses to blur a stationary object, the user can directly perform a touch pressing operation on the preview image at the object to be processed; the area of the touch press is taken as the target area, the blurring special effect level associated with the pressing pressure is looked up, the target blurring effect is determined from that level, and the effect is added to the target area in the preview image.
If the user chooses blurring for a moving object, the user may first select, by a touch clicking operation, an area of the preview image containing the moving object; the photographing application then invokes the subject dynamic tracking function to dynamically track the object to be processed and takes the area where the object is located after moving as the target area. Meanwhile, the user performs a touch staying operation on the touch intensity reading area of the preview image, so that the blurring special effect level associated with the staying duration can be looked up, the target blurring effect determined from that level, and the effect added to the target area in the preview image.
The target preview image after the blurring special effect processing can be displayed to the user, who can then tap the photographing button according to the displayed target preview image to obtain a photo in which the object to be processed has been blurred.
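The Fig. 6 flow for stationary versus moving objects can be condensed, for illustration, into the following sketch; all helpers are stubs standing in for the tracking, intensity-reading, and rendering steps described above:

```python
# Illustrative sketch only: the Fig. 6 flow for stationary versus moving
# objects. All helpers are stubs for the steps described above.

def choose_target_area(moving, touch_area, track=None):
    """Stationary: the touched area itself. Moving: where the tracked
    object ends up."""
    return track[-1] if moving else touch_area

def blurring_level(intensity, bindings):
    """Resolve an intensity reading against the user's bindings."""
    nearest = min(bindings, key=lambda p: abs(p - intensity))
    return bindings[nearest]

bindings = {1.0: 1, 5.0: 2, 10.0: 3}   # set up after app start

# Stationary case: press directly on the object to be processed.
static_target = choose_target_area(False, (40, 40, 50, 50))
static_level = blurring_level(10.0, bindings)             # level 3

# Moving case: click-select the object, then stay on the reading area.
moving_target = choose_target_area(
    True, (40, 40, 50, 50),
    track=[(40, 40, 50, 50), (90, 40, 50, 50)])
staying_seconds = 5.0
moving_level = blurring_level(staying_seconds, bindings)  # level 2
```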
EXAMPLE III
Fig. 3A is a block diagram of a mobile terminal according to a third embodiment of the present invention, and the mobile terminal 300 shown in fig. 3A may specifically include a touch operation detection unit 301, an area and special effect determination unit 302, a target preview image acquisition unit 303, and a target photograph generation unit 304.
A touch operation detection unit 301, configured to detect a touch operation for an object to be processed in a current preview image;
a region and special effect determining unit 302, configured to determine a target region and a target special effect of the preview image according to the touch operation; the target area comprises the object to be processed;
a target preview image acquiring unit 303, configured to add the target special effect to a target area of the preview image to obtain a target preview image;
and the target photo generation unit 304 is used for generating a target photo according to the target preview image when receiving a shooting instruction.
Optionally, on the basis of fig. 3A, the region and special effect determining unit 302 may include: a first target area determining subunit 3021 and a first target special effects determining subunit 3022, as shown in fig. 3B.
The first target area determining subunit 3021 is configured to obtain a touch area of the touch operation, and determine the target area according to the touch area;
a first target special effect determining subunit 3022, configured to obtain touch strength on the touch area in the touch operation, and determine the target special effect according to the read touch strength.
Optionally, when the preview image includes an object to be processed in a motion state, the touch operations include a first touch operation and a second touch operation, and on the basis of fig. 3A, the area and special effect determining unit 302 may include: a second target area determining subunit 3023 and a second target special effects determining subunit 3024, as shown in fig. 3B.
The second target area determining subunit 3023 is configured to obtain a moving object identification area indicated by the first touch operation, and determine the target area according to the moving object identification area; wherein the moving object identification area comprises an area determined according to a touch area of the first touch operation;
a second target special effect determining subunit 3024, configured to obtain the touch strength of the second touch operation on a preset touch strength reading area, and determine the target special effect according to the read touch strength.
Optionally, the second target region determining subunit 3023 may include: an object identification module and an object tracking module;
the object identification module is used for identifying the object to be processed in the moving object identification area;
and the object tracking module is used for tracking the object to be processed and taking the area where the moved object to be processed is located as the target area.
Optionally, the first target special effect determining subunit 3022 or the second target special effect determining subunit 3024 may include: the device comprises a target touch strength determining module, a target special effect grade determining module and a target special effect determining module.
The target touch intensity determining module is used for taking the touch intensity selected by the user on a preset special effect selection interface as the target touch intensity, where the special effect selection interface includes a plurality of historically read touch intensities; or taking the currently read touch intensity as the target touch intensity;
the target special effect level determining module is used for searching for the target special effect level corresponding to the target touch intensity;
and the target special effect determining module is used for determining the target special effect according to the target special effect level for the set special effect type.
The mobile terminal 300 can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition. According to the mobile terminal 300 of the embodiment of the invention, according to the touch operation of the user on the preview image, the target area containing the flaw content and the target special effect of the preview image are determined, the target special effect is added to the target area to obtain the target preview image, the photo is generated according to the target preview image, the target special effect capable of reducing the visibility of the flaw content is added to the flaw content of the obtained photo, the display effect of the photo is improved, and the shooting requirement of the user is met. Moreover, the user can obtain the photo with better display effect through simple touch operation without re-photographing or performing special effect processing on the defective photo after photographing, so that the time of the user is saved, and the user experience is improved. Furthermore, by identifying the to-be-processed object in the motion state in the touch area and taking the area where the to-be-processed object is moved as the target area, a target special effect can be added to the area containing the to-be-processed object, and the problem that the to-be-processed object cannot be subjected to special effect processing if the target special effect is still added to the touch area after the to-be-processed object is moved is solved.
Fig. 7 is a block diagram of a mobile terminal according to another embodiment of the present invention. The mobile terminal 700 shown in fig. 7 includes: at least one processor 701, a memory 702, at least one network interface 704, and other user interfaces 703. The various components in the mobile terminal 700 are coupled together by a bus system 705. It is understood that the bus system 705 is used to enable connection and communication among these components. In addition to a data bus, the bus system 705 includes a power bus, a control bus, and a status signal bus; for clarity of illustration, however, the various buses are all labeled in fig. 7 as the bus system 705.
The user interface 703 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It is to be understood that the memory 702 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 702 of the systems and methods described in this embodiment of the invention is intended to comprise, without being limited to, these and any other suitable types of memory. The memory 702 may store a preset operation rule, which includes data for preset conditions, such as a preset sliding track, a preset pressure threshold, and a preset operation time interval.
In some embodiments, memory 702 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 7021 and application programs 7022.
The operating system 7021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 7022 includes various applications, such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. Programs that implement methods in accordance with embodiments of the present invention can be included within application program 7022.
In the embodiment of the present invention, the processor 701 is configured to detect a touch operation on an object to be processed in a current preview image by calling a program or an instruction stored in the memory 702, specifically, a program or an instruction stored in the application 7022; determining a target area and a target special effect of the preview image according to the touch operation; the target area comprises the object to be processed; adding the target special effect to a target area of the preview image to obtain a target preview image; and when a shooting instruction is received, generating a target photo according to the target preview image.
The method disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 701. The processor 701 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EEPROM, or a register. The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described in this disclosure. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as an embodiment, the processor 701 is further configured to: acquiring a touch area of the touch operation, and determining the target area according to the touch area; and acquiring the touch control intensity on the touch control area in the touch control operation, and determining the target special effect according to the read touch control intensity.
Optionally, as another embodiment, when the preview image includes an object to be processed in a motion state, the touch operations include a first touch operation and a second touch operation, and the processor 701 is further configured to: acquiring a moving object identification area indicated by the first touch operation, and determining the target area according to the moving object identification area; wherein the moving object identification area comprises an area determined according to a touch area of the first touch operation; and acquiring the touch control intensity of the second touch control operation on a preset touch control intensity reading area, and determining the target special effect according to the read touch control intensity.
Optionally, as another embodiment, the processor 701 is further configured to: identifying an object to be processed in the moving object identification area; and tracking the object to be processed, and taking the area where the moved object to be processed is located as the target area.
Optionally, as another embodiment, the processor 701 is further configured to: taking the touch control intensity selected by a user on a preset special effect selection interface as target touch control intensity, wherein the special effect selection interface comprises a plurality of historically read touch control intensities; or, taking the currently read touch strength as the target touch strength; searching a target special effect grade corresponding to the target touch control strength; and determining the target special effect according to the target special effect grade of the set special effect type.
The mobile terminal 700 can implement the processes implemented by the mobile terminal in the foregoing embodiments, and details are not repeated here to avoid repetition. According to the embodiment of the invention, the mobile terminal 700 determines the target area containing the flaw content and the target special effect of the preview image according to the touch operation of the user on the preview image, adds the target special effect on the target area to obtain the target preview image, generates the photo according to the target preview image, adds the target special effect capable of reducing the visibility of the flaw content to the flaw content of the obtained photo, improves the display effect of the photo, and meets the shooting requirement of the user. Moreover, the user can obtain the photo with better display effect through simple touch operation without re-photographing or performing special effect processing on the defective photo after photographing, so that the time of the user is saved, and the user experience is improved. Furthermore, by identifying the to-be-processed object in the motion state in the touch area and taking the area where the to-be-processed object is moved as the target area, a target special effect can be added to the area containing the to-be-processed object, and the problem that the to-be-processed object cannot be subjected to special effect processing if the target special effect is still added to the touch area after the to-be-processed object is moved is solved.
Fig. 8 is a block diagram of a mobile terminal according to still another embodiment of the present invention. Specifically, the mobile terminal 800 in fig. 8 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 800 in fig. 8 includes a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a processor 860, an audio circuit 870, a WiFi (Wireless Fidelity) module 880, and a power supply 890.
The input unit 830 may be used, among other things, to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 800. Specifically, in the embodiment of the present invention, the input unit 830 may include a touch panel 831. The touch panel 831, also referred to as a touch screen, can collect touch operations performed by a user on or near the touch panel 831 (e.g., operations performed by the user on the touch panel 831 using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 831 may include two portions, i.e., a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 860, and can receive and execute commands sent by the processor 860. In addition, the touch panel 831 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 831, the input unit 830 may include other input devices 832, and the other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Among other things, the display unit 840 may be used to display information input by the user or information provided to the user and various menu interfaces of the mobile terminal 800. The display unit 840 may include a display panel 841, and the display panel 841 may be alternatively configured in the form of an LCD or an organic light-emitting diode (OLED), or the like.
It should be noted that the touch panel 831 can overlay the display panel 841 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is passed to the processor 860 to determine the type of the touch event, and the processor 860 then provides a corresponding visual output on the touch display screen according to that type.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement of these two display areas is not limited; it may be any arrangement that distinguishes them, such as vertical or left-right arrangement. The application interface display area may be used to display the interface of an application, and each interface may contain at least one interface element such as an application icon and/or a widget desktop control; it may also be an empty interface containing no content. The common control display area is used for displaying frequently used controls, such as setting buttons, interface numbers, scroll bars, and application icons such as the phone book icon.
The processor 860 is the control center of the mobile terminal 800. It connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the mobile terminal 800 and processes data by running or executing software programs and/or modules stored in the first memory 821 and calling data stored in the second memory 822, thereby monitoring the mobile terminal 800 as a whole. Optionally, the processor 860 may include one or more processing units.
In the embodiment of the present invention, the processor 860 is configured to detect a touch operation on an object to be processed in a current preview image by calling a software program and/or a module stored in the first memory 821 and/or data stored in the second memory 822; determining a target area and a target special effect of the preview image according to the touch operation; the target area comprises the object to be processed; adding the target special effect to a target area of the preview image to obtain a target preview image; and when a shooting instruction is received, generating a target photo according to the target preview image.
Optionally, as an embodiment, the processor 860 is further configured to: acquiring a touch area of the touch operation, and determining the target area according to the touch area; and acquiring the touch control intensity on the touch control area in the touch control operation, and determining the target special effect according to the read touch control intensity.
Optionally, as another embodiment, when the preview image includes an object to be processed in a motion state, the touch operations include a first touch operation and a second touch operation, and the processor 860 is further configured to: acquiring a moving object identification area indicated by the first touch operation, and determining the target area according to the moving object identification area; wherein the moving object identification area comprises an area determined according to a touch area of the first touch operation; and acquiring the touch control intensity of the second touch control operation on a preset touch control intensity reading area, and determining the target special effect according to the read touch control intensity.
Optionally, as another embodiment, the processor 860 is further configured to: identifying an object to be processed in the moving object identification area; and tracking the object to be processed, and taking the area where the moved object to be processed is located as the target area.
Optionally, as another embodiment, the processor 860 is further configured to: taking the touch control intensity selected by a user on a preset special effect selection interface as target touch control intensity, wherein the special effect selection interface comprises a plurality of historically read touch control intensities; or, taking the currently read touch strength as the target touch strength; searching a target special effect grade corresponding to the target touch control strength; and determining the target special effect according to the target special effect grade of the set special effect type.
The mobile terminal 800 can implement each process implemented by the mobile terminal in the foregoing embodiments; details are not repeated here. In the embodiment of the present invention, the mobile terminal 800 determines, according to the user's touch operation on the preview image, a target area containing flaw content and a target special effect, adds the target special effect to the target area to obtain a target preview image, and generates a photo from the target preview image. A target special effect that reduces the visibility of the flaw content is thereby added to the flaw content of the resulting photo, which improves the display effect of the photo and meets the user's shooting requirement. Moreover, the user can obtain a photo with a better display effect through a simple touch operation, without re-photographing or applying special effect processing to a defective photo after shooting, which saves the user's time and improves the user experience. Furthermore, by identifying the moving object to be processed in the touch area and taking the area where that object is located after moving as the target area, the target special effect can be added to the area that actually contains the object; this avoids the problem that, if the target special effect were still added to the original touch area after the object has moved, the object could not be processed.
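To make "adding the target special effect to the target area" concrete, here is a minimal sketch under stated assumptions: the effect is a naive 3x3 box blur applied only inside the target region of a small grayscale frame, which lowers the visibility of flaw content there. A real terminal would use hardware-accelerated filters; this code and its data are purely illustrative.

```python
# Hypothetical sketch: blur only the target region of a grayscale frame
# (a 2D list of pixel values), leaving the rest of the preview untouched.

def box_blur_region(frame, box):
    """Return a copy of `frame` with a 3x3 mean blur inside `box` (l, t, r, b)."""
    left, top, right, bottom = box
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(max(top, 0), min(bottom, h)):
        for x in range(max(left, 0), min(right, w)):
            neighbors = [frame[ny][nx]
                         for ny in range(max(y - 1, 0), min(y + 2, h))
                         for nx in range(max(x - 1, 0), min(x + 2, w))]
            out[y][x] = sum(neighbors) // len(neighbors)
    return out

frame = [[0, 0, 0, 0],
         [0, 90, 90, 0],
         [0, 90, 90, 0],
         [0, 0, 0, 0]]
blurred = box_blur_region(frame, (1, 1, 3, 3))  # bright "flaw" is softened
```

Pixels outside the box keep their original values, so only the flaw content becomes less visible in the target preview image.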
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A photographing method applied to a mobile terminal, characterized by comprising the following steps:
detecting touch operation aiming at an object to be processed in a current preview image;
determining a target area and a target special effect of the preview image according to the touch operation; the target area comprises the object to be processed, wherein when the preview image contains the object to be processed in a motion state, the target area refers to the area where the object to be processed is actually located after moving;
adding the target special effect to a target area of the preview image to obtain a target preview image;
when a shooting instruction is received, generating a target picture according to the target preview image;
when the preview image contains an object to be processed in a motion state, the touch operation comprises a first touch operation and a second touch operation, and the step of determining the target area and the target special effect of the preview image according to the touch operation comprises the following steps:
acquiring a moving object identification area indicated by the first touch operation, and determining the target area according to the moving object identification area; wherein the moving object identification area comprises an area determined according to a touch area of the first touch operation;
and acquiring the touch strength of the second touch operation on a preset touch strength reading area, and determining the target special effect according to the read touch strength.
2. The method according to claim 1, wherein the step of determining a target area and a target special effect of the preview image according to the touch operation comprises:
acquiring a touch area of the touch operation, and determining the target area according to the touch area;
and acquiring the touch strength on the touch area in the touch operation, and determining the target special effect according to the read touch strength.
3. The method of claim 1, wherein the step of determining the target region from the moving object identification region comprises:
identifying an object to be processed in the moving object identification area;
and tracking the object to be processed, and taking the area where the moved object to be processed is located as the target area.
4. The method of claim 2, wherein the step of determining the target special effect according to the read touch strength comprises:
taking the touch strength selected by a user on a preset special effect selection interface as the target touch strength, wherein the special effect selection interface comprises a plurality of historically read touch strengths; or taking the currently read touch strength as the target touch strength;
searching for a target special effect level corresponding to the target touch strength;
and determining the target special effect according to the target special effect level of the set special effect type.
5. A mobile terminal, the mobile terminal comprising:
the touch operation detection unit is used for detecting touch operation aiming at an object to be processed in the current preview image;
the area and special effect determining unit is used for determining a target area and a target special effect of the preview image according to the touch operation; the target area comprises the object to be processed, wherein when the preview image contains the object to be processed in a motion state, the target area refers to the area where the object to be processed is actually located after moving;
the target preview image acquisition unit is used for adding the target special effect to a target area of the preview image to obtain a target preview image;
the target photo generation unit is used for generating a target photo according to the target preview image when a shooting instruction is received;
when the preview image contains the object to be processed in a motion state, the touch operation includes a first touch operation and a second touch operation, and the area and special effect determining unit includes:
a second target area determining subunit, configured to acquire a moving object identification area indicated by the first touch operation, and determine the target area according to the moving object identification area; wherein the moving object identification area comprises an area determined according to a touch area of the first touch operation;
and the second target special effect determining subunit is used for acquiring the touch strength of the second touch operation on a preset touch strength reading area and determining the target special effect according to the read touch strength.
6. The mobile terminal of claim 5, wherein the region and effect determination unit comprises:
a first target area determining subunit, configured to acquire a touch area of the touch operation, and determine the target area according to the touch area;
and the first target special effect determining subunit is used for acquiring the touch strength on the touch area in the touch operation and determining the target special effect according to the read touch strength.
7. The mobile terminal of claim 5, wherein the second target area determining subunit comprises:
the object identification module is used for identifying the object to be processed in the moving object identification area;
and the object tracking module is used for tracking the object to be processed and taking the area where the moved object to be processed is located as the target area.
8. The mobile terminal of claim 6, wherein the first or second target special effects determining subunit comprises:
the target touch strength determining module is used for taking the touch strength selected by a user on a preset special effect selection interface as the target touch strength, wherein the special effect selection interface comprises a plurality of historically read touch strengths; or taking the currently read touch strength as the target touch strength;
the target special effect level determining module is used for searching for a target special effect level corresponding to the target touch strength;
and the target special effect determining module is used for determining the target special effect according to the target special effect level of the set special effect type.
CN201710464380.8A 2017-06-19 2017-06-19 Photographing method and mobile terminal Active CN107315529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710464380.8A CN107315529B (en) 2017-06-19 2017-06-19 Photographing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107315529A CN107315529A (en) 2017-11-03
CN107315529B true CN107315529B (en) 2020-05-26

Family

ID=60184100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710464380.8A Active CN107315529B (en) 2017-06-19 2017-06-19 Photographing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107315529B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309933A (en) 2018-03-23 2019-10-08 广州极飞科技有限公司 Plant planting data measurement method, operation route planning method and device, and system
CN110035227A * 2019-03-25 2019-07-19 维沃移动通信有限公司 Special effect display method and terminal device
CN113132641A (en) * 2021-04-23 2021-07-16 北京达佳互联信息技术有限公司 Shooting control method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104159032A (en) * 2014-08-20 2014-11-19 广东欧珀移动通信有限公司 Method and device of adjusting facial beautification effect in camera photographing in real time
CN104199610A (en) * 2014-08-27 2014-12-10 联想(北京)有限公司 Information processing method and electronic device
CN105554364A (en) * 2015-07-30 2016-05-04 宇龙计算机通信科技(深圳)有限公司 Image processing method and terminal
CN106231182A (en) * 2016-07-29 2016-12-14 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN106791016A (en) * 2016-11-29 2017-05-31 努比亚技术有限公司 A kind of photographic method and terminal
CN106814959A (en) * 2015-11-30 2017-06-09 东莞酷派软件技术有限公司 A kind of U.S. face photographic method, device and terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2829898A1 (en) * 2001-09-17 2003-03-21 Thomson Licensing Sa WIRELESS VIDEO CAMERA



Similar Documents

Publication Publication Date Title
CN107678644B (en) Image processing method and mobile terminal
CN106126077B (en) Display control method of application program icons and mobile terminal
CN106406710B (en) Screen recording method and mobile terminal
CN106327185B (en) Starting method of payment application and mobile terminal
CN107179865B (en) Page switching method and terminal
CN107509030B (en) focusing method and mobile terminal
WO2019001152A1 (en) Photographing method and mobile terminal
EP3661187A1 (en) Photography method and mobile terminal
CN106383638B (en) Payment mode display method and mobile terminal
CN107613203B (en) Image processing method and mobile terminal
CN106791437B (en) Panoramic image shooting method and mobile terminal
CN106250021B (en) Photographing control method and mobile terminal
CN107562345B (en) Information storage method and mobile terminal
CN107665434B (en) Payment method and mobile terminal
CN106993091B (en) Image blurring method and mobile terminal
CN107643912B (en) Information sharing method and mobile terminal
CN106454086B (en) Image processing method and mobile terminal
CN107172347B (en) Photographing method and terminal
CN107203313B (en) Method for adjusting desktop display object, mobile terminal and computer readable storage medium
CN107360375B (en) Shooting method and mobile terminal
CN107480500B (en) Face verification method and mobile terminal
CN106354373B (en) Icon moving method and mobile terminal
CN107483821B (en) Image processing method and mobile terminal
JP2017533602A (en) Switching between electronic device cameras
CN107506130B (en) Character deleting method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant