WO2024060903A1 - Camera function control method, electronic device, and storage medium - Google Patents

Camera function control method, electronic device, and storage medium

Info

Publication number
WO2024060903A1
Authority
WO
WIPO (PCT)
Prior art keywords
function
interface
camera
camera function
label
Prior art date
Application number
PCT/CN2023/114101
Other languages
English (en)
French (fr)
Inventor
易婕
邵建利
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd.
Publication of WO2024060903A1 publication Critical patent/WO2024060903A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 — using icons
    • G06F 3/0484 — for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847 — interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0487 — using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 — using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 — by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04M — TELEPHONIC COMMUNICATION
    • H04M 1/00 — Substation equipment, e.g. for use by subscribers
    • H04M 1/72 — Mobile telephones; cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 — User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 — with means for local support of applications that increase the functionality
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/60 — Control of cameras or camera modules
    • H04N 23/62 — Control of parameters via user interfaces

Definitions

  • The present application relates to the field of photography technology, and in particular to a camera function control method, an electronic device, and a storage medium.
  • More and more electronic devices are equipped with cameras, so users can take photos or record videos anytime and anywhere. To improve the user experience, electronic devices provide a variety of camera functions, such as portrait photography, night-scene photography, video recording, and a movie mode. To use a camera function, the user usually has to: open the camera application -> select a camera function -> operate the camera function to take a photo or record a video. When the user wants to capture a fleeting moment, going through these steps in sequence reduces the speed at which the camera can be used.
  • To address this, the prior art introduces quick operations for continuous photo taking and video recording. For example, after opening the camera application, the user uses the shooting button as the sliding starting point: sliding left takes quick continuous photos, and sliding right starts quick recording.
  • Although this method increases the speed of using the camera to a certain extent, the user must remember the camera function corresponding to each sliding direction before use. If the user does not remember, or remembers incorrectly, operation errors may occur and the desired quick-operation effect will not be achieved.
  • The main purpose of this application is to provide a camera function control method, an electronic device, and a storage medium, aiming to solve the technical problem that existing camera function control methods are not convenient enough, which affects the user's speed and experience.
  • In a first aspect, this application provides a camera function control method applied to an electronic device.
  • The electronic device includes multiple camera functions.
  • The first interface corresponding to a camera function includes a first area and a second area.
  • The first area is the shooting button of the camera; the second area is the area of the interface other than the shooting button, and the second area includes a plurality of function labels that identify camera functions.
  • The method includes: displaying a first interface of a first camera function of the electronic device; receiving a first operation from the user on the first interface of the first camera function, where the first operation includes a sliding operation; on the first interface of the first camera function, when the sliding operation enters the second area from the first area, determining whether the sliding direction corresponding to the sliding operation points to a first function label; and if the sliding direction points to the first function label, switching to display a second interface of the second camera function identified by the first function label.
  • The second interface includes text indicating the second camera function, displayed at the same position as the first function label. On the second interface of the second camera function, in response to the sliding operation continuing to slide toward the text of the second camera function, the operation of the second camera function is controlled.
  • Through the above design, a new camera function control operation is defined. This operation associates the sliding direction with the position of a function label: the user only needs to slide toward the position of the target function label to trigger the camera function corresponding to that label.
  • In addition, the second interface is displayed with text indicating the second camera function, prompting the user that the camera is about to run that function, so that the user knows what will happen after the operation and misoperation is avoided.
  • The text of the second camera function is displayed at the same position as the first function label, which guides the user to continue sliding to trigger the second camera function.
  • The user's operation is also more flexible: the camera function is triggered according to where the slide goes, so any camera function selected by the user can be controlled quickly. For example, sliding in different directions can control a variety of different camera functions.
  • Users therefore do not need to deliberately memorize a correspondence between sliding directions and camera functions; they only need to slide toward a function label to trigger the corresponding camera function, which improves the flexibility of camera operation and the user experience.
  • In one design, the second area includes multiple non-overlapping partitions; each partition covers one function label, and the division of the partitions is related to the relative position between the shooting button and each function label.
  • In this design, the area outside the shooting button is divided into multiple non-overlapping partitions, each covering a corresponding function label, which facilitates the user's sliding operation.
  • In this design, determining whether the sliding direction corresponding to the sliding operation points to the first function label includes: on the first interface of the first camera function, when the sliding operation enters the second area from the first area, determining whether the sliding operation enters a target partition of the second area; if so, the function label covered by the target partition is determined to be the first function label pointed to by the sliding direction.
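The partition-based lookup described above can be sketched as follows in Python. The rectangular partition geometry, the screen dimensions, and the label names are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Partition:
    """One non-overlapping region of the second area, covering one function label."""
    label: str    # function label covered by this partition (illustrative name)
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x < self.right and self.top <= y < self.bottom

def target_label(partitions: list[Partition], x: float, y: float) -> Optional[str]:
    """Return the label of the partition the slide has entered, or None if the
    touch point is outside all partitions (e.g. still on the shooting button)."""
    for p in partitions:
        if p.contains(x, y):
            return p.label
    return None

# Illustrative layout: a 1080x2340 screen whose second area is split into a
# left partition (portrait label) and a right partition (video label).
partitions = [
    Partition("PORTRAIT", 0, 0, 540, 2000),
    Partition("VIDEO", 540, 0, 1080, 2000),
]
```

A slide that leaves the shooting button and crosses into the left partition would thus select the portrait label, without the user having to memorize a fixed direction-to-function mapping.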
  • In another design, the second area corresponds to multiple non-overlapping angle ranges; each angle range covers one function label, and the division of the angle ranges is related to the relative position between the shooting button and each function label.
  • In this design, the area outside the shooting button is divided into multiple non-overlapping angle ranges, each covering a function label, which facilitates the user's sliding operation.
  • Because each partition or angle range covers exactly one function label, sliding into it reliably triggers the camera function of that label, which both facilitates the sliding operation and improves its convenience.
  • In this design, determining whether the sliding direction corresponding to the sliding operation points to the first function label includes: on the first interface of the first camera function, when the sliding operation enters the second area from the first area, calculating the included angle between the sliding direction and a preset reference direction; if an angle range contains that included angle, the function label covered by that angle range is determined to be the first function label pointed to by the sliding direction.
  • The angle-range division of the second area matches the characteristics of a sliding operation: even if the user's slide deviates somewhat, it can still accurately point to a function label, which facilitates operation and improves the flexibility of camera control.
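The angle-range matching might look like the following sketch. The reference direction, the specific angle ranges, and the label names are illustrative assumptions; the sketch uses mathematical coordinates (angles counter-clockwise from the positive x-axis), whereas real screen coordinates typically have the y-axis pointing down.

```python
import math

# Illustrative angle ranges in degrees, measured counter-clockwise from the
# positive x-axis (the assumed preset reference direction). Label names are
# assumptions; the VIDEO range wraps through 0 degrees.
ANGLE_RANGES = {
    "PHOTO":    (45.0, 135.0),   # sliding roughly upward
    "PORTRAIT": (135.0, 225.0),  # sliding left
    "VIDEO":    (315.0, 405.0),  # sliding right, wrapping past 0 degrees
}

def slide_angle(start: tuple[float, float], end: tuple[float, float]) -> float:
    """Angle of the slide vector relative to the reference direction, in [0, 360)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def label_for_slide(start, end):
    """Return the function label whose angle range contains the slide angle."""
    angle = slide_angle(start, end)
    for label, (lo, hi) in ANGLE_RANGES.items():
        # also test the +360 alias so ranges may wrap past 0 degrees
        if lo <= angle < hi or lo <= angle + 360.0 < hi:
            return label
    return None
```

Because each range spans many degrees, an imprecise slide (say 10 degrees off a perfectly horizontal swipe) still resolves to the intended label, which is the tolerance property the design above relies on.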
  • In a further design, the camera function control method also includes: when switching to display the second interface of the second camera function identified by the first function label, hiding the related controls displayed on the first interface of the first camera function, where the related controls include all function labels displayed on that first interface.
  • In another design, the first operation also includes a long-press operation.
  • The camera function control method further includes: on the first interface of the first camera function, when a long-press operation occurs in the first area, determining whether its duration reaches a preset duration threshold; if it does, switching to display the second interface of the video recording function; and on that second interface, while the long press continues beyond the threshold, controlling the video recording function to run.
  • Compared with the photo functions, the operation of the video function is relatively simple, so an additional operation method, different from that of the photo functions, is provided for it. This method is easy to perform and remember, and can quickly turn on the video recording function.
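A minimal sketch of the long-press decision described above. The threshold value, the tick-based timing model, and the class interface are illustrative assumptions, not details from the patent.

```python
# Illustrative threshold (seconds) after which a press in the shooting-button
# area is treated as a long press that starts recording.
PRESS_THRESHOLD_S = 0.8

class LongPressRecorder:
    """Switches to the video interface once a press in the first area has
    lasted at least PRESS_THRESHOLD_S, and records while the press is held."""

    def __init__(self):
        self.press_start = None
        self.recording = False

    def on_press(self, t: float) -> None:
        """Finger down in the first area at time t."""
        self.press_start = t

    def on_tick(self, t: float) -> None:
        """Called periodically while the finger is still down."""
        if self.press_start is not None and not self.recording:
            if t - self.press_start >= PRESS_THRESHOLD_S:
                # switch to the video interface and start recording
                self.recording = True

    def on_release(self, t: float) -> bool:
        """Finger lifted: stop recording if it was running. Returns whether a
        video was captured (otherwise the press was an ordinary tap)."""
        captured = self.recording
        self.press_start, self.recording = None, False
        return captured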
  • In this design, the camera function control method also includes: on the second interface of the video recording function, when the long-press operation ends, stopping the video recording function and restoring the first interface of the first camera function.
  • In other words, lifting the finger ends the long press and thus ends the recording. A recording started in this way automatically restores the previous operation interface when it ends, which improves the user experience.
  • In another design, the function labels are slidable, and when the user slides any function label on the first interface of the first camera function, the display positions of all function labels change.
  • Sliding the function labels lets the user operate more camera functions from the same interface. Because a partition or angle range in this application is not fixedly bound to a particular function label, sliding into the same partition or angle range can trigger different camera functions in different application scenarios, so the camera control method of this application adapts to various scenarios and improves convenience.
  • In this design, the camera function control method also includes: obtaining a second function label selected by the user by sliding the function labels on the first interface of the first camera function, and switching to display the first interface of the third camera function identified by the second function label.
  • The operation interfaces corresponding to different function labels can thus be displayed by sliding the labels: no matter which camera function's interface is currently shown, the sliding operation of the present application can be used to control the camera functions, making quick camera operations convenient in various application scenarios.
  • the second camera function includes a continuous shooting function
  • the first function label includes a photo function label
  • In this design, the second interface of the second camera function includes the text of the continuous shooting function.
  • On the second interface of the second camera function, controlling the operation of the second camera function in response to the sliding operation continuing toward the text includes: on the second interface of the continuous shooting function, in response to the sliding operation continuing toward the text of the continuous shooting function, controlling the continuous shooting function to capture photos continuously.
  • The continuous shooting function may specifically be the photo function or the portrait function taking continuous photos. Through the user's sliding operation, the photo or portrait function captures photos continuously, forming a new continuous shooting capability that reduces the user's photo operations and improves the user experience.
  • In this design, the camera function control method also includes: when the sliding operation ends or the number of continuously taken photos reaches a preset threshold, stopping the continuous shooting function and restoring the first interface of the first camera function, where the number of continuously taken photos is related to the duration of the sliding operation, including its sliding time and dwell time on the second interface of the continuous shooting function.
  • The above design provides two conditions for exiting the operation of the camera function: first, the sliding operation ends, for example the user lifts a finger; second, the number of continuously taken photos reaches the maximum. After the operation completes, the interface displayed before the operation started is restored automatically, reducing user operations and making it convenient to continue using the shortcut.
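The two exit conditions above can be sketched as a simple per-frame loop. The maximum burst size and the frame-based model of the slide are illustrative assumptions.

```python
# Illustrative cap on the number of photos one burst may take.
MAX_BURST = 100

def burst_capture(finger_down_frames):
    """Simulate one continuous-shooting burst: take one photo per frame while
    the finger stays down, stopping when the finger lifts (slide ends) or when
    MAX_BURST photos have been taken. `finger_down_frames` is an iterable of
    booleans, one per frame. Returns the number of photos taken."""
    photos = 0
    for finger_down in finger_down_frames:
        if not finger_down:       # exit condition 1: sliding operation ended
            break
        if photos >= MAX_BURST:   # exit condition 2: preset maximum reached
            break
        photos += 1
    return photos
```

Either condition ends the burst, after which (per the design above) the interface shown before the slide started would be restored.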
  • the second camera function includes a portrait function
  • the first function label includes a portrait function label
  • On the second interface of the second camera function, controlling the operation of the second camera function in response to the sliding operation continuing toward the text includes: on the second interface of the portrait function, in response to the sliding operation continuing toward the text of the portrait function, controlling the portrait function to take pictures, calling an image processing program suitable for the portrait function to process the generated photos, and saving the processed images.
  • sliding operations can not only quickly trigger the camera function, but also enable continuous photography, thus reducing user operations.
  • The above design can further apply image processing to the generated photos. For example, a picture can be processed with the algorithm corresponding to the night-scene function and then saved; or processed with the algorithm corresponding to the portrait function, such as background blurring and portrait beautification (skin smoothing, face slimming), before saving; or filters can be added in movie mode during video recording. This saves user operations and improves the user experience.
  • the second camera function includes a video recording function
  • the first function label includes a video recording function label
  • On the second interface of the second camera function, controlling the operation of the second camera function in response to the sliding operation continuing toward the text includes: on the second interface of the video recording function, in response to the sliding operation continuing toward the text of the video recording function, controlling the video recording function to record; and when the sliding operation ends, switching to display a third interface of the video recording function while keeping the recording running.
  • the user can control the operation of the video function by sliding from the shooting button toward the location of the video function label, which improves the convenience of the user using the video function.
  • Because the video recording function runs longer than the photo functions and also supports pausing, when the sliding operation ends (for example, the user lifts a finger) the previous interface is not restored; instead, the display switches to the complete recording operation interface and the recording keeps running, so the user can control the recording process at any time.
  • In this design, the camera function control method also includes: on the third interface of the video recording function, when a user-triggered stop-recording instruction is received, stopping the video recording function and restoring the first interface of the first camera function.
  • In this design, the third interface of the recording function displays recording-process controls, such as a recording control button and a pause button. When the user manually clicks the recording control button to trigger the stop-recording instruction, the user's true intention is judged to be to exit recording; therefore, while the recording function is stopped, the interface displayed before the operation started is restored, which facilitates the user's subsequent operations and improves the user experience.
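The recording lifecycle described in the video-function designs above (a slide toward the video label starts recording, lifting the finger only switches interfaces, and only an explicit stop instruction ends recording and restores the original interface) can be sketched as a small state holder. The state and interface names are illustrative assumptions.

```python
class VideoController:
    """Sketch of the video-recording flow: unlike the photo functions,
    lifting the finger does NOT stop recording; only an explicit stop
    instruction does."""

    def __init__(self):
        self.recording = False
        self.interface = "first"  # first interface of the first camera function

    def on_slide_to_video_label(self):
        # slide from the shooting button toward the video label: start recording
        self.recording = True
        self.interface = "video_second"  # second interface of the video function

    def on_finger_lift(self):
        # slide ended: keep recording, switch to the full recording controls
        if self.recording:
            self.interface = "video_third"  # third interface with stop/pause

    def on_stop_instruction(self):
        # user clicked the recording control button: stop and restore
        self.recording = False
        self.interface = "first"
```

The asymmetry between `on_finger_lift` (interface change only) and `on_stop_instruction` (stop plus restore) mirrors the design rationale above: only the explicit stop expresses the user's intention to exit recording.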
  • In a second aspect, the present application also provides an electronic device, including a processor and a memory.
  • The processor is configured to call a computer program in the memory to perform the camera function control method provided in the first aspect or any design of the first aspect.
  • In a third aspect, the present application also provides a computer-readable storage medium that stores computer instructions.
  • When the computer instructions are run on an electronic device, they cause the electronic device to execute the camera function control method provided in the first aspect or any design of the first aspect.
  • In a fourth aspect, this application also provides a computer program product. The computer program product includes computer instructions that, when executed on an electronic device, cause the electronic device to perform the method described above.
  • Figure 1 is a schematic diagram of the camera function interface of a mobile phone provided by an embodiment of the present application;
  • Figure 2 is a schematic diagram of a user operation flow of an existing camera function control method;
  • Figure 3 is a schematic diagram of another user operation flow of an existing camera function control method;
  • Figure 4 is a schematic diagram of the user operation flow of the camera function control method provided by an embodiment of the present application;
  • Figure 5 is a schematic diagram of the camera function interface division provided by an embodiment of the present application;
  • Figure 6 is a schematic diagram of a user performing a sliding operation in the first area and the second area according to an embodiment of the present application;
  • Figure 7 is a schematic diagram of a user performing a sliding operation from the first area into any partition of the second area provided by an embodiment of the present application;
  • Figure 8 is a schematic diagram of a user performing a sliding operation from the first area into any angle range of the second area provided by an embodiment of the present application;
  • Figure 9 is a schematic flowchart of a camera function control method provided by an embodiment of the present application;
  • Figure 10 is a schematic diagram of an interface for a user to slide to take continuous photos provided by an embodiment of the present application;
  • Figure 11 is a schematic diagram of another interface for a user to slide to take continuous photos provided by an embodiment of the present application;
  • Figure 12 is a schematic diagram of an interface for a user to slide for continuous portrait shooting provided by an embodiment of the present application;
  • Figure 13 is a schematic diagram of another interface for a user to slide for continuous portrait shooting provided by an embodiment of the present application;
  • Figure 14 is a schematic diagram of an interface for a user to slide to take portrait photos provided by an embodiment of the present application;
  • Figure 15 is a schematic diagram of the video recording interface of a mobile phone provided by an embodiment of the present application;
  • Figure 16 is a schematic diagram of an interface for a user to slide to record video provided by an embodiment of the present application;
  • Figure 17 is a schematic diagram of an interface for a user to long-press to record provided by an embodiment of the present application;
  • Figure 18 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • In this application, "at least one (item)" means one or more, and "plural" means two or more.
  • "At least two (items)" means two or more.
  • "And/or" describes the relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" can mean: only A exists, only B exists, or both A and B exist, where A and B can each be singular or plural.
  • The character "/" generally indicates an "or" relationship between the related objects.
  • "At least one of the following" or similar expressions refers to any combination of the listed items; for example, at least one of a, b, or c can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c".
  • The electronic device has one or more cameras and is installed with a camera application that can realize functions such as taking photos and recording videos.
  • The electronic device can be a mobile phone, a wearable device, a tablet, a computer with wireless transceiver functions, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, and so on.
  • The electronic device is described in detail below, taking a mobile phone as an example.
  • FIG. 1 is a schematic diagram of a camera function interface displayed on a mobile phone according to an embodiment of the present application.
  • The mobile phone supports a variety of camera functions, such as a photo function, a portrait function, a night-scene function, and a video recording function, and each camera function corresponds to its own interface. For example, if the user selects the photo function, the phone displays the photo function interface, and if the user selects the video recording function, the phone displays the video recording function interface.
  • After the camera application is opened, one functional interface is usually displayed by default, such as the photo function interface. If the user wants to use another camera function, such as the portrait function, the user must first manually select the portrait function, after which the phone automatically switches to display the corresponding functional interface.
  • the layout of the camera function interface mainly includes: viewfinder frame 101, function labels 102, and shooting button 103.
  • the viewfinder frame 101 is used to display images collected by the camera in real time.
  • Function labels 102 indicate the different camera functions available for the user to choose. Each function label 102 corresponds to one camera function, for example a photo function label, a portrait function label, a night-scene function label, and a video recording function label; some or all of the function labels can be displayed on the same camera function interface.
  • the function labels 102 may be displayed on either side of the viewfinder frame 101 , and multiple function labels 102 may be arranged horizontally, vertically, or in a ring around the shooting button 103 .
  • When a function label is triggered, the mobile phone automatically displays the interface of the corresponding camera function. For example, when the photo function label is triggered, the photo function interface is displayed.
  • The shooting button 103 executes the corresponding camera function, which is determined by the camera function interface currently displayed on the mobile phone. For example, if the phone currently displays the photo function interface and the user touches the shooting button 103, the phone automatically photographs the image in the current viewfinder frame 101 and saves the photo.
  • The layout of the camera function interface can also include: a gallery thumbnail 104, a front/rear camera switching button 105, a camera focus adjustment control 106, a smart vision control 107, an AI photography control 108, a flash control 109, a filter shooting mode control 110, a setting button 111, and so on.
  • The camera focus adjustment control 106 adjusts the camera focus; clicking the smart vision control 107 opens preset application functions, such as object recognition and text recognition; opening the AI photography control 108 automatically identifies shooting environments such as portraits and night scenes and adjusts photography parameters accordingly; clicking the flash control 109 turns the flash on or off; clicking the filter shooting mode control 110 selects different shooting modes and adds different filters to the captured pictures, such as an original-image mode, a green filter mode, and an impression filter mode; and the setting button 111 opens the settings menu for camera parameter configuration.
  • The camera function interfaces corresponding to different camera functions differ; for example, there are differences among the photo function interface, the portrait function interface, and the video recording function interface, which are reflected in the layout of each interface. These layouts are designed according to the actual needs of the camera application and are not described in detail here.
  • FIG. 2 is a schematic diagram of a user operation flow of an existing camera function control method.
  • the user first clicks the icon of the camera application on the mobile phone desktop to start the camera application (assuming it takes 1 second); as shown in 2b in Figure 2, after the camera application is started, the phone displays the default camera function interface (such as the photo function interface).
  • the user finds the portrait function label from the multiple function labels arranged horizontally and clicks it.
  • the phone switches from the default camera function interface to display the portrait function interface (assuming it takes 2 seconds), as shown in 2c in Figure 2; as shown in 2d in Figure 2, the user first aims at the photo subject, then clicks the shooting button, and the phone automatically runs the camera function to generate the corresponding photo (assuming it takes 1 second).
  • the existing camera function control method requires three steps: opening the camera application -> selecting a camera function -> operating the camera function to take pictures or videos, which takes about 4 seconds in total. If the user needs to capture a fleeting moment, the user must still go through the above steps in order. Because there are many operation steps, the time consumed increases, which slows down taking photos or videos and causes wonderful moments to be missed.
  • FIG. 3 is a schematic diagram of another user operation flow of the existing camera function control method.
  • the user first clicks the icon of the camera application on the mobile phone desktop to start the camera application; as shown in 3b in Figure 3, after the camera application is started, the phone displays the default camera function interface (such as the photo function interface);
  • the user first aims at the photo subject and then uses the shooting button as the sliding starting point. If the user slides to the left, the photo function is triggered to take photos in quick succession and the corresponding photos are saved; if the user slides to the right, the video recording function is triggered to record and generate the corresponding video.
  • this camera function control method uses a fixed shortcut operation method to achieve fast photography and video recording, which reduces the operation steps of taking pictures or video recording, thus speeding up the user's operation speed.
  • this method improves, to a certain extent, the user's speed of using the camera.
  • however, the user needs to remember the camera function corresponding to each sliding direction before use. If the user forgets or misremembers the correspondence, operational errors may occur, affecting the user experience.
  • embodiments of the present application provide a camera function control method.
  • the new interaction method combines the camera function selection operation and the camera function control operation, thereby reducing the operation steps of taking pictures or recording videos and speeding up the user's operation. The user does not need to deliberately remember a sliding direction, and the shortcut operation of this application supports more camera functions, thereby improving the user experience.
  • the camera function selection operation in this application specifically refers to the operation of selecting a camera function, such as clicking "Portrait" on the interface corresponding to the "Photo" function, or clicking "Video" on the interface corresponding to the "Photo" function.
  • the camera function control operation refers to the operation of clicking the shooting button.
  • the electronic device uses the camera to take pictures or videos.
  • the method of this application combines the above two operations, allowing users to quickly use different shooting modes with less to remember, improving the user experience.
  • FIG. 4 is a schematic diagram of the user operation flow of the camera function control method provided by the embodiment of the present application.
  • the following takes the portrait function control as an example to illustrate the operation process of the user using the camera application of the mobile phone to take portrait photos. The details are as follows:
  • Operation step one: the user opens the camera application.
  • the user opens the mobile phone desktop, finds the camera icon, and then clicks the camera icon to open the camera application.
  • opening the camera application in this way is the most common approach: the user opens the mobile phone desktop, finds the camera icon, and opens the camera application.
  • a shortcut key or shortcut operation may be used to trigger opening of the camera application. For example, when the phone is locked, the user finds the camera icon on the lock screen, clicks and slides up to open the camera app, thus saving the user time to open the camera app.
  • Operation step two: the user selects the portrait function and runs it to take a picture.
  • the mobile phone displays the default camera function operation interface; as shown in 4c in Figure 4, the user aims at the photo subject and, on the current camera function operation interface, starts a sliding operation from the shooting button position toward the portrait function label (the black arrow in 4c of Figure 4 indicates the sliding direction); as shown in 4d in Figure 4, after the mobile phone detects the sliding operation, it determines that the camera function triggered by the current sliding operation is the portrait function, captures an image in response to the sliding operation, processes the image with a portrait processing algorithm, and saves it.
  • compared with the operation mode of the existing camera function control method in Figure 2, this application combines the last two operation steps of the existing flow into one, making it easy for the user to quickly switch between different camera modes and improving the user experience. When the camera application has already been started, the user can select a camera function and control its operation in one step with a single sliding operation, which greatly improves the user's operation speed.
  • a fixed shortcut operation method is used to control the camera function. For example, sliding to the left can only trigger the camera function, and sliding to the right can only trigger the video function.
  • This application redefines a new camera function control operation method, which associates the sliding direction with the position of the function label.
  • the user only needs to slide toward the position of the target function label to trigger the camera function corresponding to that label. The user's operation is therefore more flexible: the camera function is triggered according to where the slide points, so any camera function selected by the user can be controlled quickly. For example, sliding in different directions can control a variety of different camera functions.
  • in different application scenarios, sliding in the same direction may trigger different camera functions, and sliding in different directions may trigger the same camera function. The camera function control method of this application is therefore more flexible and adapts to the application scenario in which the user operates. In addition, the user does not need to deliberately remember the correspondence between sliding directions and camera functions; sliding toward a function label is enough to trigger the operation of the corresponding camera function. Because the operation method is more flexible, all camera functions can be controlled quickly.
  • when the finger stays on the screen of the mobile phone, the mobile phone responds to the operation of staying on the screen by continuing to take pictures and save them.
  • the number of continuous photos taken and pictures saved can be related to the length of time the user's finger stays on the screen.
  • when the user's finger is lifted from the screen, the phone responds to the lift operation and can stop taking pictures.
  • the mobile phone sets a maximum number of continuous shots by default, or the user can set the maximum number in advance. In this way, while the phone responds to the user's finger staying on the screen by continuing to take and save pictures, once the number of continuous photos reaches the maximum, the phone can stop taking photos even if it detects that the user's finger still stays on the screen, that is, even if no finger-lift operation is detected.
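The dwell-and-cap behavior described above can be sketched as follows. This is an illustrative model only, not code from the application: the tick granularity, the function name, and the one-photo-per-tick assumption are all hypothetical.

```python
def burst_count(dwell_ticks, lifted_at, max_burst):
    """Number of burst photos taken, per the dwell rules described above.

    dwell_ticks: how long the finger stayed on the screen, in ticks
                 (assume one photo per tick for illustration).
    lifted_at:   tick at which the finger lifted, or None if still down.
    max_burst:   preset maximum number of continuous shots.
    """
    # Photos accumulate while the finger is down...
    taken = dwell_ticks if lifted_at is None else min(dwell_ticks, lifted_at)
    # ...but shooting also stops once the maximum burst count is reached,
    # even if no finger-lift operation is detected.
    return min(taken, max_burst)
```

For example, with a cap of 6 shots, a 10-tick dwell with no lift yields 6 photos, while lifting at tick 4 yields 4.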
  • the user can also perform a sliding operation from the shooting button to other labels, such as "movie", "night scene” and other labels.
  • the phone can perform the camera function corresponding to that label. For example, after taking a picture, the algorithm corresponding to the night scene function processes the picture before it is saved, or a filter is added in movie mode during video recording, and so on.
  • although the above embodiment uses a mobile phone as an example to illustrate the photographing method of the embodiment of the present application, the present application is not limited thereto.
  • the execution subject of the photographing method of the embodiment of the present application may also be an electronic device such as a tablet computer or a folding screen device.
  • the camera function interface includes a first area 501 and a second area 502.
  • the first area 501 is a control area for the camera function, preferably a shooting button
  • the second area 502 is an area around the first area 501. The following uses the first area 501 as the shooting button as an example.
  • control conditions associated with the sliding operation are set.
  • the mobile phone can determine the camera function selected by the sliding operation, and then the mobile phone controls the operation of the camera function selected by the sliding operation.
  • the user only needs to perform a sliding operation to select the camera function and trigger the operation of the camera function.
  • the control condition is preferably set such that the sliding operation needs to pass through the first area 501 and the second area 502 respectively.
  • the sliding direction of the sliding operation may be from the first area 501 to the second area 502, or from the second area 502 to the first area 501.
  • the sliding operation passing through the first area 501 (the shooting button) determines that the user wants to take pictures or videos.
  • based on the part of the second area 502 through which the sliding operation passes, the camera function selected by the user is determined. That is, the user can simultaneously realize camera function selection and control of the camera function operation.
  • the first area 501 is used as the starting point area for the sliding operation
  • the second area 502 is used as the end area of the sliding operation; that is, the sliding direction of the sliding operation is preferably from the first area 501 to the second area 502.
  • the embodiment of this application provides control conditions combined with the sliding operation so that the sliding operation can select a camera function, including but not limited to the following two methods:
  • the second area 502 in Figure 6 is divided into multiple partitions with different orientations, each partition does not overlap, and each partition corresponds to a function label position.
  • partition 701 corresponds to the portrait function label
  • partition 702 corresponds to the photo taking function label
  • partition 703 corresponds to the video recording function label.
  • the second area 502 above the first area 501 is preferably divided into a plurality of non-overlapping partitions, each of which covers a camera function label.
  • the number of partitions and the corresponding camera function labels may be pre-set by the mobile phone manufacturer or manually configured by the user.
  • the camera function corresponding to each partition is determined based on the currently displayed camera function. For example, if the phone currently displays the photo function, the left side of the photo function label is the portrait function label and the right side is the video recording function label; then the middle partition 702 corresponds to the continuous shooting function, the left partition 701 corresponds to the portrait function, and the right partition 703 corresponds to the video recording function. If the phone currently displays the portrait function, the left side of the portrait function label is the night scene function label and the right side is the photo function label; then the middle partition 702 corresponds to the portrait function, the left partition 701 corresponds to the night scene function, and the right partition 703 corresponds to the continuous shooting function.
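The neighbor-based mapping just described can be sketched as a small lookup. The label-strip order and names below are illustrative assumptions; the only rules taken from the examples above are that each partition inherits the function of the label it covers and that sliding to the photo label triggers continuous shooting.

```python
# Hypothetical label strip, left to right, as shown on the camera interface.
LABEL_STRIP = ["night scene", "portrait", "photo", "video", "movie"]

def label_to_function(label):
    # Per the examples above: sliding to the "photo" label triggers burst
    # ("continuous shooting"); every other label triggers its own function.
    return "continuous shooting" if label == "photo" else label

def partition_functions(current):
    """Map the left/middle/right partitions to camera functions,
    based on the currently displayed function label."""
    i = LABEL_STRIP.index(current)
    left = LABEL_STRIP[i - 1] if i > 0 else None
    right = LABEL_STRIP[i + 1] if i + 1 < len(LABEL_STRIP) else None
    return {
        "left": label_to_function(left) if left else None,
        "middle": label_to_function(current),
        "right": label_to_function(right) if right else None,
    }
```

With the photo function displayed this reproduces the text's example (left = portrait, middle = continuous shooting, right = video), and with the portrait function displayed it gives left = night scene, middle = portrait, right = continuous shooting.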
  • each partition covers the text range of a function label, and each partition corresponds to the camera function of the label it covers. This ensures that when the user performs a sliding operation, sliding toward a function label is enough to trigger the camera function corresponding to that label. For example, if the user slides toward the portrait function label, the portrait function is determined to be triggered when the sliding operation passes through the first area 501 and enters the second area 502; if the user slides toward the video recording function label, the video recording function is determined to be triggered when the sliding operation passes through the first area 501 and enters the second area 502.
  • each partition intersects one side boundary of the first area 501.
  • the sliding operation can enter a partition of the second area 502 as soon as it exits the first area 501, and the camera function triggered by the sliding operation can be quickly determined.
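Method 1 amounts to a point-in-region test once the touch point leaves the shooting button. The following is a minimal sketch under assumed geometry: three rectangular partitions above the button with made-up screen coordinates (the text fixes no concrete geometry), each labeled with the function from the example above.

```python
# Assumed partition rectangles: (x_min, x_max, y_min, y_max), y growing upward.
# Names follow the Figure 7 example (701 portrait, 702 continuous, 703 video).
PARTITIONS = {
    "portrait (701)": (0, 120, 100, 400),              # left partition
    "continuous shooting (702)": (120, 240, 100, 400), # middle partition
    "video (703)": (240, 360, 100, 400),               # right partition
}

def hit_partition(x, y):
    """Return the partition containing touch point (x, y), or None if the
    point is still outside the second area (e.g. inside the button row)."""
    for name, (x0, x1, y0, y1) in PARTITIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None
```

Because the partitions do not overlap, at most one rectangle matches, so the triggered camera function is determined as soon as the touch point enters the second area.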
  • the second area 502 in Figure 6 is divided into multiple angular ranges with different orientations, each angular range does not overlap, and each angular range corresponds to a function label position.
  • angle range A corresponds to the portrait function
  • angle range B corresponds to the continuous shooting function
  • angle range C corresponds to the video recording function.
  • the range of 0-180° around the first area 501 is divided into multiple angle ranges, and each angle range covers a function label.
  • the number of angle range divisions and the numerical interval of each angle range can be pre-set by the mobile phone manufacturer or manually configured by the user.
  • the camera function corresponding to each angle range is specifically determined based on the currently displayed camera function. For example, if the phone currently displays the photo function, the left side of the photo function label is the portrait function label and the right side is the video function label; then the middle angle range B (60-120°) corresponds to the continuous shooting function, the left angle range A (120-180°) corresponds to the portrait function, and the right angle range C (0-60°) corresponds to the video recording function.
  • if the phone currently displays the portrait function, the left side of the portrait function label is the night scene function label and the right side is the photo function label; then the middle angle range B (60-120°) corresponds to the portrait function, the left angle range A (120-180°) corresponds to the night scene function, and the right angle range C (0-60°) corresponds to the continuous shooting function.
  • each angle range covers the text range of a function label, and each angle range corresponds to the camera function of the label it covers. This ensures that when the user performs a sliding operation, sliding toward a function label is enough to trigger the camera function corresponding to that label. For example, if the user slides toward the portrait function label, the portrait function is determined to be triggered when the sliding operation passes through the first area 501 and enters the second area 502; if the user slides toward the video recording function label, the video recording function is determined to be triggered when the sliding operation passes through the first area 501 and enters the second area 502.
  • either of the above two methods can be pre-configured by the mobile phone manufacturer as the default, or both methods can be pre-configured by the manufacturer and the user can then manually select one of them.
  • the partition division method and the angle division method can be preset by default by the mobile phone manufacturer or manually configured by the user.
  • the division of partitions or angle ranges may be set based on the sliding direction of the sliding operation, or the sliding direction of the sliding operation may be determined based on the partition or angle division. It should be added that the sliding direction of the sliding operation is related to the partition and angle range division, so the sliding direction needs to match the partition division method or the angle division method.
  • partition division and angle range division are specifically related to the relative positions between the shooting button and each function label.
  • both the partition range and the angle range need to cover the text corresponding to each function label.
  • the text corresponding to each function label can be used to indicate the direction of the user's sliding operation, so that the user can accurately trigger the corresponding camera function without deliberately memorizing the sliding direction.
  • in the existing camera function control method, users need to remember the camera function corresponding to each sliding direction before use; if they forget or misremember it, operational errors may result.
  • the existing camera function control method only has two sliding directions: left and right, so users can only use two camera functions.
  • this application associates the sliding direction with the position of the function labels: the user only needs to slide in the direction of a function label's position to trigger the camera function corresponding to that position. At the same time, because the positions of the function labels are variable, sliding in the same direction may trigger different camera functions.
  • the camera function control method of this application does not require deliberately remembering the direction of the sliding operation and supports multi-directional sliding, which is not only convenient for users to operate but also applies to a variety of camera functions, improving the user's experience of capturing wonderful moments.
  • FIG. 9 is a schematic flowchart of a camera function control method provided by an embodiment of the present application. The steps of the camera function control method are exemplified below, including:
  • the mobile phone displays the first interface of the first camera function, which is the interface for the user to operate the camera function;
  • the camera function may refer to the shooting mode of the camera, and the camera of the mobile phone may include multiple different shooting modes. That is to say, the camera of the mobile phone may include multiple different camera functions.
  • the camera of a mobile phone can include different camera functions such as photo mode, portrait mode, video mode, night scene mode, and movie mode.
  • the first interface may be an interface corresponding to any shooting mode.
  • the first interface may include a viewfinder, a shooting button, and other controls for setting shooting parameters, where the viewfinder may be used to display images captured in real time by the camera of the mobile phone. , that is to say, the viewfinder is used to preview the images collected by the camera in real time; the shooting button is used to control shooting and saving images or videos.
  • the user can click or slide the shooting button to make the mobile phone shoot and save images or videos; other controls for setting shooting parameters can include the gallery thumbnail mentioned above, the front and rear camera switching button, the camera focus adjustment control, the smart vision control, the AI photography control, the flash control, the filter shooting mode control, the setting button, and so on.
  • the smart vision control can open preset application functions such as object recognition and text recognition; turning on the AI photography control can automatically identify portraits, night scenes and other photographing environments according to the scene and automatically adjust the photographing parameters; the flash control can turn the flash on or off; clicking the filter shooting mode control selects different shooting modes and adds different filters to the captured pictures, such as original image mode, green filter mode, and impression filter mode; the setting button can open the setting menu to set camera parameters.
  • in response to the user's triggering operation on the desktop camera application icon, the mobile phone opens the camera application and displays the first interface.
  • the first interface displayed on the phone for the first time may be the default camera function interface, as shown in 2b in Figure 2.
  • the user can also change the camera function interface currently displayed on the phone by clicking on other function labels on the first interface. For example, if the user clicks on the portrait function label, the phone switches to display the portrait function interface. This is shown as 2c in Figure 2.
  • the first interface displayed by the mobile phone can be the default camera function interface displayed when the camera application is started, or it can be the camera function interface corresponding to the function tag selected by the user, that is, the camera function control method of the present application can be implemented in any camera function interface.
  • the mobile phone detects whether there is a sliding operation by the user on the first interface.
  • the first interface includes a first area and a second area;
  • the first interface includes a first area 501 and a second area 502.
  • the first area 501 is a control area for camera functions, preferably the shooting button, and the second area 502 is an area around the first area 501.
  • the sliding operation can be a sliding operation in any sliding direction and any sliding path.
  • the mobile phone controls the camera function in response to the sliding operation.
  • the mobile phone determines whether the sliding operation enters the second area from the first area based on the movement trajectory of the touch point corresponding to the sliding operation;
  • when the mobile phone detects a user's sliding operation on the first interface, the mobile phone obtains the movement trajectory of the touch point of the sliding operation on the screen and determines whether to trigger camera function control based on the movement trajectory.
  • when the movement trajectory of the touch point during the sliding operation runs from the first area 501 to the second area 502, the mobile phone further determines whether the sliding direction formed by the movement trajectory points to a function label. If so, it determines that the current sliding operation can realize camera function selection and trigger the camera function operation.
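The trajectory check in this step can be sketched as follows, modeling the first area 501 as a circular shooting button. The center, radius, and coordinates are assumed values for illustration; the text itself fixes no geometry.

```python
import math

# Assumed button geometry (screen coordinates, y growing upward).
BUTTON_CENTER = (180.0, 60.0)
BUTTON_RADIUS = 40.0

def in_first_area(pt):
    """True if the touch point lies on the shooting button (first area 501)."""
    return math.dist(pt, BUTTON_CENTER) <= BUTTON_RADIUS

def crosses_into_second_area(trajectory):
    """trajectory: list of (x, y) touch points in time order.
    Returns True when the slide starts on the shooting button and a later
    point leaves it, i.e. the trajectory runs from area 501 into area 502."""
    if not trajectory or not in_first_area(trajectory[0]):
        return False  # the slide must start on the shooting button
    return any(not in_first_area(p) for p in trajectory[1:])
```

Once this check passes, the phone would go on to classify the slide's direction against the partitions or angle ranges described below.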
  • control conditions combined with sliding operations include but are not limited to the following two methods:
  • Method 1: sliding operation + partition.
  • the second area includes multiple non-overlapping partitions. Each partition covers a function label.
  • the specific implementation process steps of this method are as follows:
  • the second area 502 is divided into multiple partitions with different orientations. Each partition does not overlap. Each partition covers a function label.
  • partition 701 corresponds to the portrait function label
  • partition 702 corresponds to the photo taking function label.
  • partition 703 corresponds to the video recording function label.
  • the mobile phone starts the camera function triggered by the sliding operation, and adjusts the layout of the first interface currently displayed on the mobile phone to display the second interface corresponding to the camera function triggered by the sliding operation.
  • the second interface is the corresponding interface when the camera function is running;
  • when the user performs a sliding operation on the first interface, the phone automatically determines the camera function triggered by the sliding operation, then starts that camera function, and adjusts the layout of the currently displayed first interface during startup, for example hiding the function labels, flash icon, setting menu icon, and so on, or displaying interface elements not previously shown on the first interface, such as the recording control button and recording time when the video recording function is triggered.
  • starting the camera function and displaying the second interface corresponding to the camera function can be performed at the same time, or the camera function can be started to take photos or videos after displaying the second interface corresponding to the camera function.
  • the timing for starting the operation of the camera function may be when the user's finger slides out of the shooting button and enters the second area, or when the user's finger slides across the corresponding function label.
  • the mobile phone controls the continuous shooting function to take continuous pictures based on the sliding operation.
  • the mobile phone can further control the operation of the camera function based on the sliding operation. For example, the user slides from the shooting button to the "Photo" label, continues to slide upward for a certain distance, and then keeps a finger on the screen; in response to the operation of staying on the screen, the mobile phone keeps taking pictures continuously and saving them. The number of pictures taken and saved can be related to the length of time the user's finger stays on the screen. When the user's finger is lifted from the screen, the phone responds to the lift operation and can stop taking pictures.
  • the camera function triggered by the sliding operation is the continuous shooting function for taking pictures continuously, which specifically includes the photo function taking pictures continuously or the portrait function taking pictures continuously.
  • that is, continuous photos can be taken with either the photo function or the portrait function, thereby achieving photo continuous shooting or portrait continuous shooting, reducing the operations the user needs to perform to shoot continuously with a given camera function and improving the user experience.
  • the mobile phone sets a maximum number of continuous shots by default, or the user can set it in advance. In this way, while the phone responds to the finger staying on the screen by continuing to take and save pictures, once the number of continuous photos reaches the maximum, the phone can stop taking pictures even if it detects that the user's finger still stays on the screen, that is, even if no finger-lift operation is detected.
  • in this embodiment, the control conditions are combined with the sliding operation, and the multiple function labels are respectively associated with the partitions, so that the user can reach each partition through a sliding operation and thereby control each camera function through its partition. Compared with existing camera function control methods, the camera function control method of this application does not require the user to deliberately remember the direction of the sliding operation and supports multi-directional sliding, which is not only convenient to operate but also applies to a variety of camera functions, improving the user's experience of capturing wonderful moments.
  • Method 2: sliding operation + angle.
  • the second area corresponds to multiple non-overlapping angle ranges.
  • each angle range covers a function label.
  • the specific implementation process steps of this method are as follows:
  • the second area 502 in Figure 6 is divided into multiple angle ranges with different orientations; the angle ranges do not overlap, and each angle range covers a function label.
  • angle range A corresponds to the portrait function label
  • angle range B corresponds to the camera function label
  • angle range C corresponds to the video recording function label.
  • the positive direction of the x-axis is used as the reference direction for angle division.
  • the angle between the sliding direction corresponding to the sliding operation and the preset reference direction is calculated. This angle is used to determine the sliding direction.
  • the mobile phone determines that the camera function corresponding to the angle range including the included angle is the camera function triggered by the sliding operation;
  • the angle range C (0-60°) corresponds to the video function label
  • the angle range B 60-120°
  • the angle range A 120-180°
  • if the calculated included angle is 30°, which falls within the 0-60° range, it is determined that the sliding operation triggers the video recording function; if the calculated included angle is 90°, which falls within the 60-120° range, it is determined that the sliding operation triggers the continuous shooting function; if the calculated included angle is 150°, which falls within the 120-180° range, it is determined that the sliding operation triggers the portrait function.
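The angle computation in these examples can be reproduced with `atan2`. The range bounds and function names follow the example above; the coordinate convention (y increasing upward, slide measured from the shooting-button start point) is an assumption for illustration.

```python
import math

# Angle ranges from the example: C = 0-60° video, B = 60-120° continuous
# shooting, A = 120-180° portrait. Half-open intervals avoid overlap.
ANGLE_RANGES = [
    (0.0, 60.0, "video recording"),        # range C
    (60.0, 120.0, "continuous shooting"),  # range B
    (120.0, 180.0, "portrait"),            # range A
]

def triggered_function(start, current):
    """Classify a slide by the included angle between its direction
    (start -> current touch point) and the positive x-axis."""
    dx = current[0] - start[0]
    dy = current[1] - start[1]  # y grows upward in this sketch
    angle = math.degrees(math.atan2(dy, dx))
    for lo, hi, func in ANGLE_RANGES:
        if lo <= angle < hi or (hi == 180.0 and angle == 180.0):
            return func
    return None  # e.g. a downward slide, outside the 0-180° fan
```

A slide at 30° selects video recording, 90° selects continuous shooting, and 150° selects portrait, matching the worked examples in the text.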
  • the mobile phone starts the camera function triggered by the sliding operation, and adjusts the layout of the first interface currently displayed on the mobile phone to display the second interface corresponding to the camera function triggered by the sliding operation.
  • the second interface is the corresponding interface when the camera function is running;
  • the mobile phone controls the continuous shooting function to take continuous pictures based on the sliding operation.
  • steps S905C and S905D are the same as steps S904B and S904C respectively, so they will not be described again here.
  • FIG. 10 is a schematic diagram of a sliding camera interface.
  • as shown in 10a in Figure 10, after the mobile phone opens the camera application in response to the user's operation, the mobile phone displays the first interface corresponding to the photo function label by default. The user does not select another function label, but directly performs a sliding operation on the first interface corresponding to the current photo function label (the black arrow in 10a of Figure 10 indicates the sliding direction). As shown in 10b in Figure 10, when the sliding operation slides outward from the inner area of the shooting button and crosses the edge of the shooting button, the mobile phone determines, based on the movement trajectory of the touch point on the screen, that the current sliding direction points to the photo function label, and hides interface elements unrelated to the current shooting, such as the function labels, flash icon, and setting menu icon, displaying only the viewfinder, the gallery thumbnail, and the shooting button (i.e., the second interface); the word "Continuous Shooting" is displayed at the original photo function label position, and a number is displayed in the middle of the shooting button. This number represents the current cumulative count of continuously taken photos (the initial value is 0). As shown in 10c in Figure 10, when the sliding operation continues along the direction of the "Continuous Shooting" text into the second area 502 and passes the word "Continuous Shooting", a line between the touch point of the sliding operation and the edge of the shooting button is displayed to indicate the sliding direction of the sliding operation. As shown in 10d in Figure 10, when the sliding operation continues along the current sliding direction, the number in the middle of the shooting button begins to increase, and as the number of continuous photos grows, the thumbnail preview shown in the gallery thumbnail is refreshed accordingly. As shown in 10e in Figure 10, when the sliding operation stops but the touch point does not disappear (the user's finger still stays on the current second interface), the number displayed in the middle of the shooting button continues to increase. It should be further noted that when the number of consecutively taken photos reaches the maximum continuous shooting threshold, the phone ends the continuous shooting and resumes displaying the third interface corresponding to the photo function label; when the touch point corresponding to the sliding operation disappears (the user's finger leaves the current second interface), the phone ends the continuous shooting and resumes displaying the interface corresponding to the photo function label shown before the continuous shooting started.
  • the third interface and the first interface both belong to the operation interface of the camera function. The difference between the two is that the content displayed in the gallery thumbnail is different, and the content displayed in real time in the viewfinder is different. Since the content displayed in the gallery thumbnail is different from that in the viewfinder, The content displayed in real time changes dynamically, so to a certain extent, the third interface can be directly regarded as the first interface at different times.
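The continuous-shooting behaviour walked through above (a counter starting at 0, increasing while the touch point remains, ending when the touch point disappears or the maximum continuous shooting threshold is reached) can be sketched as a small state holder. The class and method names are illustrative assumptions, not from the patent:

```python
class ContinuousShootingSession:
    """Tracks the cumulative photo count shown in the shooting button."""

    def __init__(self, max_burst=100):
        self.count = 0          # number displayed in the shooting button
        self.active = True      # touch point still present
        self.max_burst = max_burst

    def tick(self):
        """Take one more photo while the touch point is held."""
        if self.active and self.count < self.max_burst:
            self.count += 1
        if self.count >= self.max_burst:
            self.active = False  # threshold reached: end continuous shooting

    def touch_up(self):
        """Touch point disappeared: end the session, return to the third interface."""
        self.active = False

session = ContinuousShootingSession(max_burst=3)
session.tick(); session.tick()
session.touch_up()
session.tick()              # ignored once the touch point has disappeared
print(session.count)        # 2
```

The `max_burst` default is a placeholder; the text does not specify the threshold value.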
• Figure 11 is a schematic diagram of another sliding camera interface.
• As shown in 11a in Figure 11, after the phone opens the camera application in response to the user's operation, it displays the first interface corresponding to the camera function label by default.
• As shown in 11b in Figure 11, the user selects the first interface corresponding to another camera function label, for example the portrait function label, and the phone then displays the first interface corresponding to the portrait function label. The user performs a sliding operation on the first interface corresponding to the current portrait function label (the black arrow in 11b in Figure 11 indicates the sliding direction).
• As shown in 11c in Figure 11, when the sliding operation slides outward from the inner area of the shooting button and crosses the edge of the shooting button, the phone determines the current sliding direction according to the movement trajectory of the touch point on the screen. When the sliding direction points to the camera function label, the phone hides interface elements unrelated to the current shot, such as the function labels, flash icon and settings menu icon, and displays only the viewfinder, the gallery thumbnail and the shooting button (i.e. the second interface). The word "Continuous Shooting" is displayed at the original photo function label position, and a number is displayed in the middle of the shooting button; this number represents the current cumulative count of continuously taken photos (the initial value is 0).
• As shown in 11d in Figure 11, when the sliding operation continues toward the text of the camera function ("Continuous Shooting"), slides into the second area 502 and passes the word "Continuous Shooting", a line between the touch point of the sliding operation and the edge of the shooting button is displayed; this line indicates the sliding direction of the sliding operation.
• As shown in 11e in Figure 11, when the sliding operation continues along the current sliding direction, the number in the middle of the shooting button begins to increase, and as the count of continuous photos grows, the thumbnail preview shown in the gallery thumbnail is refreshed accordingly.
• As shown in 11f in Figure 11, when the sliding operation stops but the touch point does not disappear (the user's finger still rests on the current second interface), the number in the middle of the shooting button continues to increase. When the count of continuously taken photos reaches the maximum continuous shooting threshold, the phone ends the continuous shooting and resumes displaying the first interface corresponding to the portrait function label.
• As shown in 11g in Figure 11, when the touch point of the sliding operation disappears (the user's finger leaves the current second interface), the phone ends the continuous shooting, resumes displaying the third interface corresponding to the portrait function label that was active before the continuous shooting started, and refreshes the display content of the gallery thumbnail.
• The third interface and the first interface are both camera function operation interfaces; they differ only in the content shown in the gallery thumbnail and the content shown in real time in the viewfinder.
• FIG. 12 is a schematic diagram of a sliding portrait continuous shooting interface.
• As shown in 12a in Figure 12, the mobile phone displays the first interface corresponding to the camera function label by default. The user does not select another function label, but directly performs a sliding operation on the first interface corresponding to the current camera function label (the black arrow in 12a in Figure 12 indicates the sliding direction).
• As shown in 12b in Figure 12, when the sliding operation slides outward from the inner area of the shooting button and crosses the edge of the shooting button, the phone determines, based on the movement trajectory of the touch point on the screen, that the current sliding direction points to the portrait function label. It hides interface elements unrelated to the current shot, such as the function labels, flash icon and settings menu icon, and displays only the viewfinder, the gallery thumbnail and the shooting button (i.e. the second interface). As the sliding operation continues and the count of continuously taken portrait photos increases, the thumbnail preview shown in the gallery thumbnail is refreshed accordingly.
• As shown in 12e in Figure 12, when the sliding operation stops but the touch point does not disappear (the user's finger still rests on the current second interface), the number in the middle of the shooting button continues to increase. When the count reaches the maximum continuous shooting threshold, the phone ends the portrait continuous shooting and resumes displaying the first interface corresponding to the photo function label.
• As shown in 12f in Figure 12, when the touch point of the sliding operation disappears (the user's finger leaves the current second interface), the phone ends the portrait continuous shooting, resumes displaying the third interface corresponding to the photo function label that was active before the portrait continuous shooting started, and refreshes the display content of the gallery thumbnail.
• The third interface and the first interface are both camera function operation interfaces; they differ only in the content shown in the gallery thumbnail and the content shown in real time in the viewfinder.
• Figure 13 is a schematic diagram of another sliding portrait continuous shooting interface.
• As shown in 13a in Figure 13, after the phone opens the camera application in response to the user's operation, it displays the first interface corresponding to the camera function label by default.
• As shown in 13b in Figure 13, the user selects the first interface corresponding to another camera function label, for example the portrait function label, and the phone then displays the first interface corresponding to the portrait function label. The user performs a sliding operation on the first interface corresponding to the current portrait function label (the black arrow in 13b in Figure 13 indicates the sliding direction).
• As shown in 13c in Figure 13, when the sliding operation slides outward from the inner area of the shooting button and crosses the edge of the shooting button, the phone determines the current sliding direction according to the movement trajectory of the touch point on the screen. When the sliding direction points to the portrait function label, the phone hides interface elements unrelated to the current shot, such as the function labels, flash icon and settings menu icon, and displays only the viewfinder, the gallery thumbnail and the shooting button (i.e. the second interface). The words "Portrait Continuous Shooting" are displayed at the original portrait function label position, and a number is displayed in the middle of the shooting button; this number represents the current cumulative count of continuously taken photos (the initial value is 0).
• As shown in 13d in Figure 13, when the sliding operation continues toward the text of the function ("Portrait Continuous Shooting") and passes it, a line between the touch point of the sliding operation and the edge of the shooting button is displayed; this line indicates the sliding direction of the sliding operation.
• As shown in 13e in Figure 13, when the sliding operation continues along the current sliding direction, the number in the middle of the shooting button begins to increase, and the thumbnail preview shown in the gallery thumbnail is refreshed accordingly.
• As shown in 13f in Figure 13, when the sliding operation stops but the touch point does not disappear (the user's finger still rests on the current second interface), the number in the middle of the shooting button continues to increase. When the count reaches the maximum continuous shooting threshold, the phone ends the portrait continuous shooting and resumes displaying the first interface corresponding to the portrait function label.
• As shown in 13g in Figure 13, when the touch point of the sliding operation disappears (the user's finger leaves the current second interface), the phone ends the portrait continuous shooting, resumes displaying the third interface corresponding to the portrait function label that was active before the portrait continuous shooting started, and refreshes the display content of the gallery thumbnail.
• The third interface and the first interface are both camera function operation interfaces; they differ only in the content shown in the gallery thumbnail and the content shown in real time in the viewfinder.
• FIG. 14 is a schematic diagram of a sliding portrait photography interface.
• As shown in 14a in Figure 14, the phone displays the first interface corresponding to the camera function label by default. The user does not select another function label, but directly performs a sliding operation on the first interface corresponding to the current camera function label (the black arrow in 14a in Figure 14 indicates the sliding direction).
• As shown in 14b in Figure 14, the phone determines, based on the movement trajectory of the touch point on the screen, that the current sliding direction points to the portrait function label, and hides interface elements unrelated to the current shot. A line between the touch point of the sliding operation and the edge of the shooting button is displayed; this line indicates the sliding direction of the sliding operation. At the same time, the phone takes a portrait photo and refreshes the thumbnail preview shown in the gallery thumbnail.
• As shown in 14d in Figure 14, when the touch point of the sliding operation disappears (the user's finger leaves the current second interface), the phone ends the portrait photography, resumes displaying the third interface corresponding to the camera function label that was active before the portrait photography started, and refreshes the display content of the gallery thumbnail.
• The third interface and the first interface are both camera function operation interfaces; they differ only in the content shown in the gallery thumbnail and the content shown in real time in the viewfinder.
• When controlling a camera function to take continuous pictures, the mobile phone calls an image processing program adapted to that camera function to process each generated photo in turn. For example, assuming the phone is currently controlling the portrait function to take continuous photos, each time it generates a portrait photo it automatically calls the beautification program to process the photo, for example with face slimming, skin smoothing and background blur. If the user has pre-set beautification parameters, the parameters set by the user are used; otherwise the beautification program's default parameters are used.
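The parameter-selection rule above (user-set beautification parameters take precedence, otherwise the program's defaults apply) amounts to a simple fallback merge. The parameter names and values here are hypothetical, chosen only to mirror the examples in the text:

```python
# Hypothetical default parameters of the beautification program.
DEFAULT_BEAUTIFY = {"face_slimming": 0.3, "skin_smoothing": 0.5, "background_blur": 0.4}

def effective_beautify_params(user_params=None):
    """Merge user-set beautification parameters over the program defaults.

    Any parameter the user pre-set overrides the default; unset
    parameters fall back to the beautification program's defaults.
    """
    params = dict(DEFAULT_BEAUTIFY)
    if user_params:
        params.update(user_params)  # user settings take precedence
    return params

# User pre-set only skin smoothing; the rest use defaults.
print(effective_beautify_params({"skin_smoothing": 0.8}))
```

Each photo generated during continuous shooting would then be processed with the merged parameters in turn.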
• When the camera function triggered by the user's sliding operation is the video recording function, the mobile phone displays the second interface of the video recording function in response to the sliding operation. As the user continues to slide toward the text of the video recording function, the phone controls the video recording function to record; when the sliding operation ends, the phone switches to display the third interface of the recording function and keeps the recording function running.
• The layout of the video recording interface shown in Figure 15 mainly includes: viewfinder 101, front/rear camera switch button 105, camera focus adjustment control 106, flash control 109, recording control button 112, pause button 113, camera function switch button 114 and recording time 115.
• The viewfinder 101 displays the images collected by the camera in real time; clicking the front/rear camera switch button 105 switches between the front and rear cameras; sliding the camera focus adjustment control 106 adjusts the camera focus; clicking the flash control 109 turns the flash on or off; clicking the recording control button 112 starts or ends the recording function; clicking the pause button 113 pauses the recording during recording; clicking the camera function switch button 114 switches from the recording function to the camera function; the recording time 115 displays the current recording duration.
• The schematic diagram of the sliding recording interface is shown in Figure 16.
• As shown in 16a in Figure 16, after the mobile phone opens the camera application in response to the user's operation, it displays the first interface corresponding to the camera function label by default. The user does not select another function label, but directly performs a sliding operation on the first interface corresponding to the current camera function label (the black arrow in 16a in Figure 16 indicates the sliding direction).
• As shown in 16b in Figure 16, when the sliding operation slides outward from the inner area of the shooting button and crosses the edge of the shooting button, or the user presses and holds the shooting button, the phone either determines, based on the movement trajectory of the touch point on the screen, that the current sliding direction points to the video recording function label, or responds to the long press on the shooting button. It hides interface elements unrelated to the current recording, such as the function labels and settings menu icon, and displays the viewfinder and the shooting button (i.e. the second interface). The word "Recording" is displayed at the original recording function label position, and a connecting line is displayed between the shooting button and the "Recording" label; this line prompts the user to slide to the position of the "Recording" label to lock the recording function. The recording time is displayed at the upper left of the viewfinder; this value represents the cumulative duration of the current recording (the initial value is 00:00).
• As shown in 16c in Figure 16, when the sliding operation continues toward the text of the recording function ("Recording"), slides into the second area 502 and passes the word "Recording", the recording time increases.
• As shown in 16d in Figure 16, when the touch point of the sliding operation disappears (the user's finger leaves the current second interface), the recording function is locked, the recording function continues to run, and the recording time keeps increasing.
• As shown in 16e in Figure 16, when the user clicks the recording control button on the recording interface, the phone ends the recording and the recording time stops increasing.
• As shown in 16f in Figure 16, the phone resumes displaying the fourth interface corresponding to the camera function label that was active before the recording started, and refreshes the display content of the gallery thumbnail.
• The fourth interface and the first interface both belong to the operation interface of the camera function; they differ only in the content shown in the gallery thumbnail and the content shown in real time in the viewfinder. Since both change dynamically, the fourth interface can, to a certain extent, be regarded as the first interface at a different time.
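The locking behaviour in the Figure 16 walkthrough (recording continues after the finger lifts only if the slide reached the "Recording" label; otherwise lifting ends it) can be sketched as a small state machine. The class and method names are illustrative assumptions:

```python
class RecordingSession:
    """Models the slide-to-record gesture with an optional lock."""

    def __init__(self):
        self.recording = False
        self.locked = False

    def slide_start(self):
        """Second interface shown; the recording timer starts at 00:00."""
        self.recording = True

    def reach_record_label(self):
        """Sliding past the "Recording" label locks the recording function."""
        self.locked = True

    def touch_up(self):
        """Touch point disappears: recording survives only if locked."""
        if not self.locked:
            self.recording = False

    def stop_button(self):
        """Clicking the recording control button ends a locked recording."""
        self.recording = False
        self.locked = False

locked = RecordingSession()
locked.slide_start(); locked.reach_record_label(); locked.touch_up()
print(locked.recording)   # True: recording continues after the finger lifts
```

An unlocked session that never reaches the label would instead stop on `touch_up()`, matching the contrast drawn with the long-press flow below.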
• When the mobile phone displays the camera function interface, if there is a long press operation in the control area (that is, the first area) of the current camera function interface, the video recording function is triggered; when the long press operation ends, the phone stops the recording function.
  • the schematic diagram of the long press recording interface is shown in Figure 17.
• The phone displays the first interface corresponding to the camera function label by default.
• As shown in 17b in Figure 17, the user performs a long press operation on the shooting button of the first interface. The phone hides interface elements unrelated to the current recording, such as the function labels and settings menu icon, and displays only the viewfinder and the shooting button (i.e. the second interface). The word "Video" is displayed at the original recording function label position, and a guide line is generated connecting the word "Video" to the edge of the shooting button. The recording time is displayed at the upper left of the viewfinder; this value represents the cumulative duration of the current recording (the initial value is 00:00).
• As shown in 17c in Figure 17, while the user continues the long press operation, the phone keeps the recording function running and the recording time increases.
• As shown in 17d in Figure 17, when the user releases the finger to end the long press operation, the phone resumes displaying the third interface corresponding to the camera function label that was active before the recording started, and refreshes the display content of the gallery thumbnail.
  • the third interface and the first interface are both camera function operation interfaces. The difference between the two is that the content displayed in the gallery thumbnail is different, and the content displayed in real time by the viewfinder is different.
• In the camera function control method of this application, each function label is associated with a respective angle range, so that the user's sliding operation can be mapped to one of the angle ranges and multiple camera functions can be controlled through those angle ranges.
• With the camera function control method of this application, the user does not need to remember the direction of the sliding operation, and multi-directional sliding is supported. This is not only convenient for the user to operate, but also works with a variety of camera functions, improving the user experience of capturing wonderful moments.
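The multi-directional mapping summarized above presumes computing the included angle of the slide from the touch-point trajectory. One common way to do this, shown here as an assumption rather than the patented method, is to apply `atan2` to the displacement from the start of the slide (for example, the shooting button) to the current touch point:

```python
import math

def included_angle(start, end):
    """Included angle (0-180 degrees) of a slide displacement.

    `start` and `end` are (x, y) touch coordinates. The signed angle
    against the positive x-axis is folded into 0-180 so that each
    direction falls into exactly one of the function label ranges.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.degrees(math.atan2(dy, dx))
    return abs(angle)

# A slide at 45 degrees from the start point:
print(round(included_angle((0, 0), (10, 10))))  # 45
```

On a real touchscreen the y-axis points downward, so a production implementation would flip the sign of `dy`; that detail is omitted in this sketch.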
• The present application also provides an electronic device, which includes a memory for storing a computer program and a processor for executing the computer program. When the computer program stored in the memory is executed by the processor, the electronic device is triggered to execute some or all of the steps of the camera function control method in the above embodiments.
  • FIG. 18 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 10 may include a processor 100, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, and an antenna 1 , Antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone interface 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, and subscriber identification module (subscriber identification module, SIM) card interface 195, etc.
• The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 10 .
  • the electronic device 10 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 100 may include one or more processing units.
• The processor 100 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 100 may also be provided with a memory for storing instructions and data.
• The memory in the processor 100 is a cache memory. This memory may hold instructions or data that the processor 100 has recently used or uses repeatedly. If the processor 100 needs those instructions or data again, it can call them directly from this memory. This avoids repeated access and reduces the waiting time of the processor 100, thus improving the efficiency of the system.
  • processor 100 may include one or more interfaces.
• The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
• The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 100 may include multiple sets of I2C buses.
  • the processor 100 can separately couple the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 100 can be coupled to the touch sensor 180K through an I2C interface, so that the processor 100 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 10 .
  • the I2S interface can be used for audio communication.
  • processor 100 may include multiple sets of I2S buses.
  • the processor 100 can be coupled with the audio module 170 through the I2S bus to implement communication between the processor 100 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface to implement the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 100 and the wireless communication module 160 .
  • the processor 100 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface to implement the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 100 with peripheral devices such as the display screen 194 and the camera 193 .
• MIPI interfaces include a camera serial interface (CSI), a display serial interface (DSI), etc.
  • the processor 100 and the camera 193 communicate through the CSI interface to implement the shooting function of the electronic device 10 .
  • the processor 100 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 10 .
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 100 with the camera 193, display screen 194, wireless communication module 160, audio module 170, sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 10, and can also be used to transmit data between the electronic device 10 and peripheral devices. It can also be used to connect headphones to play audio through them. This interface can also be used to connect other electronic devices 10, such as AR devices, etc.
  • the interface connection relationships between the modules illustrated in the embodiment of the present invention are only schematic illustrations and do not constitute a structural limitation on the electronic device 10 .
  • the electronic device 10 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
• The charging management module 140 can receive charging input from a wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 10 . While the charging management module 140 charges the battery 142, it can also provide power to the electronic device 10 through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 100.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 to provide power to the processor 100, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be provided in the processor 100 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 10 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 10 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device 10 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 100 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 100 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 100 and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 10, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 100 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 100, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 10 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 10 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 10 implements display functions through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 100 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
  • Display 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 10 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 10 can implement the shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened and light is transmitted to the camera's photosensitive element through the lens. The light signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image through algorithms, and can also optimize the exposure, color temperature, and other parameters of the shooting scene. In some embodiments, the ISP can be disposed in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to be converted into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format.
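The DSP's conversion into a standard RGB format can be illustrated with the common full-range BT.601 YUV-to-RGB transform. This is only a sketch: actual DSPs typically use fixed-point variants of these coefficients, and the patent does not specify which conversion is used.

```python
def yuv_to_rgb(y, u, v):
    """Convert one YUV pixel (full-range BT.601) to an RGB triple.

    Illustrative floating-point approximation of the kind of format
    conversion a DSP performs; real hardware pipelines use fixed-point
    variants of these coefficients.
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp each channel to the valid 8-bit range.
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

# A neutral gray pixel: U = V = 128 means no chroma offset.
print(yuv_to_rgb(128, 128, 128))  # (128, 128, 128)
```

A full-frame conversion would apply this per pixel (with chroma upsampling for subsampled formats such as YUV 4:2:0).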
  • the electronic device 10 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 10 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 10 may support one or more video codecs. In this way, the electronic device 10 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • Intelligent cognitive applications of the electronic device 10 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
  • Random access memory can include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is generally called DDR5 SDRAM), etc.
  • Non-volatile memory can include disk storage devices and flash memory.
  • Flash memory can be divided according to the operating principle to include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc.
  • According to the potential level of the storage cell, flash memory can include single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), etc.
  • According to the storage specification, flash memory can include universal flash storage (UFS), embedded multimedia card (eMMC), etc.
  • the random access memory can be directly read and written by the processor 100 and can be used to store executable programs (such as machine instructions) of the operating system or other running programs. It can also be used to store user and application program data.
  • the non-volatile memory can also store executable programs and user and application data, etc., and can be loaded into the random access memory in advance for direct reading and writing by the processor 100 .
  • the external memory interface 120 can be used to connect an external non-volatile memory to expand the storage capacity of the electronic device 10 .
  • the external non-volatile memory communicates with the processor 100 through the external memory interface 120 to implement the data storage function. For example, save music, video and other files in external non-volatile memory.
  • the electronic device 10 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 100 , or some functional modules of the audio module 170 may be provided in the processor 100 .
  • Speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 10 can listen to music or listen to a hands-free call through the speaker 170A.
  • Receiver 170B also called “earpiece” is used to convert audio electrical signals into sound signals.
  • the electronic device 10 answers a call or a voice message, the voice can be heard by bringing the receiver 170B close to the human ear.
  • Microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with their mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 10 can be provided with at least one microphone 170C. In other embodiments, the electronic device 10 can be provided with two microphones 170C, which can not only collect sound signals but also realize noise reduction function. In other embodiments, the electronic device 10 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify the sound source, realize directional recording function, etc.
  • the headphone interface 170D is used to connect wired headphones.
  • the headphone interface 170D can be a USB interface 130, or a 3.5mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • OMTP Open Mobile Terminal Platform
  • CTIA Cellular Telecommunications Industry Association of the USA
  • the pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • pressure sensor 180A may be disposed on display screen 194 .
  • There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • a capacitive pressure sensor may include at least two parallel plates of conductive material.
  • the electronic device 10 determines the intensity of the pressure based on the change in capacitance.
  • the electronic device 10 detects the strength of the touch operation according to the pressure sensor 180A.
  • the electronic device 10 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold is applied to the short message application icon, an instruction to create a new short message is executed.
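The pressure-dependent behavior described above can be sketched as a simple dispatch on touch intensity. The threshold value, its normalized scale, and the instruction names are assumptions for illustration only; the embodiment does not fix them.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative threshold, normalized to 0..1

def dispatch_touch_on_sms_icon(intensity: float) -> str:
    """Map a touch intensity on the short-message icon to an instruction.

    Below the first pressure threshold: view the short message.
    At or above the threshold: create a new short message.
    """
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    return "create_new_short_message"

print(dispatch_touch_on_sms_icon(0.2))  # view_short_message
print(dispatch_touch_on_sms_icon(0.8))  # create_new_short_message
```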
  • the gyro sensor 180B may be used to determine the motion posture of the electronic device 10 .
  • the angular velocity of electronic device 10 about three axes may be determined by gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. For example, when the shutter is pressed, the gyro sensor 180B detects the angle at which the electronic device 10 shakes, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to offset the shake of the electronic device 10 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • Air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 10 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • Magnetic sensor 180D includes a Hall sensor.
  • the electronic device 10 may utilize the magnetic sensor 180D to detect the opening and closing of the flip cover.
  • the electronic device 10 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, and then, based on the detected opening or closing state of the leather case or the flip cover, set features such as automatic unlocking when the flip cover opens.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 10 in various directions (generally three axes). When the electronic device 10 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device 10 and be used in horizontal and vertical screen switching, pedometer and other applications.
  • Distance sensor 180F for measuring distance.
  • Electronic device 10 may measure distance via infrared or laser. In some embodiments, when shooting a scene, the electronic device 10 can utilize the distance sensor 180F to measure distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 10 emits infrared light through the light emitting diode.
  • Electronic device 10 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 10 . When insufficient reflected light is detected, the electronic device 10 may determine that there is no object near the electronic device 10 .
  • the electronic device 10 can use the proximity light sensor 180G to detect when the user holds the electronic device 10 close to the ear for talking, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
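The reflected-light decision described above reduces to a threshold test on the photodiode reading. The threshold value and the screen-off rule built on it are illustrative assumptions, not values from the embodiment.

```python
REFLECTION_THRESHOLD = 100  # illustrative photodiode reading (e.g. raw ADC counts)

def object_nearby(reflected_light: int) -> bool:
    """Decide whether an object is near, from reflected IR intensity.

    Sufficient reflected light -> an object is near the device;
    insufficient reflected light -> no object is near.
    """
    return reflected_light >= REFLECTION_THRESHOLD

def should_turn_off_screen(in_call: bool, reflected_light: int) -> bool:
    """Sketch of the in-call screen-off logic built on the sensor."""
    return in_call and object_nearby(reflected_light)

print(should_turn_off_screen(True, 250))   # True  (held close to the ear)
print(should_turn_off_screen(True, 10))    # False (no object nearby)
```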
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 10 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 10 is in the pocket to prevent accidental touching.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 10 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photography, fingerprint call answering, etc.
  • Temperature sensor 180J is used to detect temperature.
  • the electronic device 10 utilizes the temperature detected by the temperature sensor 180J to execute the temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 10 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 10 heats the battery 142 to prevent the low temperature from causing the electronic device 10 to shut down abnormally. In some other embodiments, when the temperature is lower than another threshold, the electronic device 10 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
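The temperature processing strategy described above can be sketched as a threshold policy. The threshold values and action names are invented for illustration; the embodiments only state that some high threshold triggers throttling and some low threshold triggers battery heating or output-voltage boosting.

```python
def thermal_policy(temp_c: float) -> str:
    """Pick an action from the reported temperature, per the strategy above.

    Threshold values are illustrative assumptions only.
    """
    HIGH_THRESHOLD = 45.0  # above this: throttle the nearby processor
    LOW_THRESHOLD = 0.0    # below this: protect against low-temperature shutdown
    if temp_c > HIGH_THRESHOLD:
        return "reduce_processor_performance"
    if temp_c < LOW_THRESHOLD:
        return "heat_battery_or_boost_output_voltage"
    return "normal_operation"

print(thermal_policy(50.0))  # reduce_processor_performance
print(thermal_policy(-5.0))  # heat_battery_or_boost_output_voltage
print(thermal_policy(25.0))  # normal_operation
```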
  • Touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K can be disposed on the display screen 194.
  • The touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen".
  • the touch sensor 180K is used to detect a touch operation on or near the touch sensor 180K.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 10 at a location different from that of the display screen 194 .
  • Bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human body's vocal part.
  • the bone conduction sensor 180M can also contact the human body's pulse and receive blood pressure beating signals.
  • the bone conduction sensor 180M can also be provided in an earphone and combined into a bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibrating bone obtained by the bone conduction sensor 180M to implement the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M to implement the heart rate detection function.
  • the buttons 190 include a power button, a volume button, etc.
  • Key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 10 may receive key input and generate key signal input related to user settings and function control of the electronic device 10 .
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • the motor 191 can also respond to different vibration feedback effects for touch operations in different areas of the display screen 194 .
  • Different application scenarios such as time reminders, receiving information, alarm clocks, games, etc.
  • the touch vibration feedback effect can also be customized.
  • Indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be connected to or separated from the electronic device 10 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 10 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 10 interacts with the network through the SIM card to implement functions such as phone calls and data communication.
  • the electronic device 10 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 10 and cannot be separated from the electronic device 10 .
  • This embodiment also provides a computer storage medium that stores computer instructions.
  • When the computer instructions are run on the electronic device 10, the electronic device 10 is caused to execute the above related method steps to implement the camera function control method in the above embodiments.
  • This embodiment also provides a computer program product.
  • When the computer program product is run on a computer, the computer is caused to perform the above related steps to implement the camera function control method in the above embodiments.
  • Embodiments of the present application also provide an apparatus.
  • This device may be a chip, a component or a module.
  • the device may include a connected processor and a memory.
  • the memory is used to store computer execution instructions.
  • the processor can execute computer execution instructions stored in the memory, so that the chip executes the camera function control method in each of the above method embodiments.
  • The electronic equipment, computer storage media, computer program products, or chips provided in this embodiment are all used to execute the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which will not be repeated here.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division.
  • In actual implementation there may be other division methods; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separate.
  • the component shown as a unit may be one physical unit or multiple physical units, that is, it may be located in one place, or it may be distributed to multiple different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • The technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to cause a device (which can be a microcontroller, a chip, etc.) or a processor to execute all or part of the steps of the methods of the various embodiments of the present application.
  • The aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.


Abstract

The present application provides a camera function control method, an electronic device, and a storage medium. The method includes: displaying a first interface of a first camera function of the electronic device; receiving a first operation by a user on the first interface of the first camera function, the first operation including a sliding operation; on the first interface of the first camera function, when the sliding operation moves from a first region into a second region, determining whether the sliding direction of the sliding operation points toward a first function label; if the sliding direction points toward the first function label, switching to display a second interface of a second camera function identified by the first function label; and on the second interface of the second camera function, in response to the sliding operation continuing to slide toward the position of the text of the second camera function, controlling the second camera function to run. With the camera function control approach of the present application, the user can accurately trigger the corresponding camera function without deliberately memorizing the direction of the sliding operation, improving the user's experience of capturing fleeting moments.

Description

Camera function control method, electronic device, and storage medium
This application claims priority to the Chinese patent application No. 202211157870.0, titled "Camera function control method, electronic device and storage medium", filed with the China National Intellectual Property Administration on September 22, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of photography technology, and in particular to a camera function control method, an electronic device, and a storage medium.
Background
More and more electronic devices are equipped with cameras, allowing users to take photos or record videos anytime and anywhere. To improve the user experience, electronic devices provide multiple camera functions, such as a portrait photo function, a night-scene photo function, a video recording function, and a movie function. To use a camera function, the user typically has to: open the camera application -> select a camera function -> operate the camera function to take a photo or record a video. When the user wants to capture a fleeting moment, the user has to go through all of these steps in sequence, which slows down the use of the camera.
To speed up camera use, the prior art introduces shortcut operations for continuous shooting or recording; for example, after opening the camera application, the user slides left from the shutter button to take rapid continuous photos, or slides right to record video quickly. Although this approach speeds up camera use to some extent, the user must remember which camera function each sliding direction corresponds to before using it; if the user forgets or misremembers, the operation may go wrong and the intended quick-operation effect is not achieved.
Summary
The main purpose of the present application is to provide a camera function control method, an electronic device, and a storage medium, aiming to solve the technical problem that existing camera function control methods are not convenient enough, which affects the user's operating speed and experience.
To achieve the above technical purpose, the present application adopts the following technical solutions:
In a first aspect, the present application provides a camera function control method applied to an electronic device. The electronic device includes multiple camera functions. A first interface corresponding to a camera function includes a first region and a second region; the first region is the camera's shutter button, the second region is the area outside the shutter button, and the second region includes multiple function labels identifying camera functions. The method includes: displaying a first interface of a first camera function of the electronic device; receiving a first operation by a user on the first interface of the first camera function, the first operation including a sliding operation; on the first interface of the first camera function, when the sliding operation moves from the first region into the second region, determining whether the sliding direction of the sliding operation points toward a first function label; if the sliding direction points toward the first function label, switching to display a second interface of a second camera function identified by the first function label, where the second interface includes text representing the second camera function and the text is displayed at the same position as the first function label; and on the second interface of the second camera function, in response to the sliding operation continuing to slide toward the position of the text of the second camera function, controlling the second camera function to run.
In the above method of the present application, a new camera function control operation is defined. This operation associates the sliding direction with the position of a function label: the user only needs to slide toward the target function label to trigger the camera function corresponding to that label. In addition, after the user slides toward the function label, the above second interface is displayed with text representing the second camera function, prompting the user which function the camera is about to run, so that the user knows which function will run after the operation and misoperation is avoided. Moreover, because the text of the second camera function is displayed at the same position as the first function label, the user is guided to continue sliding to trigger the second camera function.
Therefore, the user's operation becomes more flexible: whichever camera function the slide points at is the function that is triggered. Quick control can thus be based on any camera function selected by the user; for example, sliding in different directions can control multiple different camera functions. Furthermore, the user does not need to deliberately memorize the correspondence between sliding directions and camera functions; sliding toward a function label is enough to trigger the corresponding camera function, improving the flexibility and experience of camera operation.
In a possible design of the first aspect, the second region includes multiple non-overlapping partitions, each partition covering one function label, and the division of the partitions is related to the relative positions of the shutter button and the function labels. In this design, based on the relative positions of the shutter button and the function labels, the area outside the shutter button is divided into multiple non-overlapping partitions, each covering one function label, which facilitates the user's sliding operation. When the user slides from the shutter button toward a function label, as long as the slide stays within the same partition, the camera function identified by the label covered by that partition is triggered, which both facilitates the sliding operation and improves convenience.
In a possible design of the first aspect, on the first interface of the first camera function, when the sliding operation moves from the first region into the second region, determining whether the sliding direction of the sliding operation points toward the first function label includes: on the first interface of the first camera function, when the sliding operation moves from the first region into the second region, determining whether the sliding operation enters a target partition of the second region; and if the sliding operation enters the target partition of the second region, determining that the function label covered by the target partition is the first function label pointed to by the sliding direction of the sliding operation. In this design, the partitioning of the second region matches the characteristics of the user's sliding operation: even if the slide deviates somewhat, it still accurately points at a function label, making operation easier and camera control more flexible.
In a possible design of the first aspect, the second region corresponds to multiple non-overlapping angle ranges, each covering one function label, and the division of the angle ranges is related to the relative positions of the shutter button and the function labels. In this design, based on the relative positions of the shutter button and the function labels, the area outside the shutter button is divided into multiple non-overlapping angle ranges, each covering one function label, which facilitates the user's sliding operation. When the user slides from the shutter button toward a function label, as long as the slide stays within the same angle range, the camera function identified by the label covered by that range is triggered, which both facilitates the sliding operation and improves convenience.
In a possible design of the first aspect, on the first interface of the first camera function, when the sliding operation moves from the first region into the second region, determining whether the sliding direction of the sliding operation points toward the first function label includes: on the first interface of the first camera function, when the sliding operation moves from the first region into the second region, calculating the angle between the sliding direction of the sliding operation and a preset reference direction; and if an angle range contains the calculated angle, determining that the function label covered by that angle range is the first function label pointed to by the sliding direction of the sliding operation. In this design, the angle-range division of the second region matches the characteristics of the user's sliding operation: even if the slide deviates somewhat, it still accurately points at a function label, making operation easier and camera control more flexible.
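The angle-based label selection just described can be sketched in a few lines. This is a minimal illustration, not the embodiment's actual implementation: the label names, the reference direction (taken here as the positive x-axis), and the angle ranges are all invented for the example.

```python
import math

# Illustrative non-overlapping angle ranges (degrees, counterclockwise from
# the reference direction), each covering one hypothetical function label.
ANGLE_RANGES = {
    "portrait": (0.0, 60.0),
    "photo_burst": (60.0, 120.0),
    "video": (120.0, 180.0),
}

def swipe_target(start, end):
    """Return the function label whose angle range contains the swipe.

    `start` is where the slide leaves the shutter button (first region)
    and `end` is the current touch point in the second region.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    for label, (lo, hi) in ANGLE_RANGES.items():
        if lo <= angle < hi:
            return label
    return None  # the slide does not point at any label

print(swipe_target((0, 0), (1, 1)))   # portrait (45 degrees)
print(swipe_target((0, 0), (0, 1)))   # photo_burst (90 degrees)
print(swipe_target((0, 0), (-1, 1)))  # video (135 degrees)
```

In a real touch pipeline the angle ranges would be derived from the on-screen positions of the function labels relative to the shutter button, and screen coordinates (y increasing downward) would flip the sign convention.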
In a possible design of the first aspect, the camera function control method further includes: when switching to display the second interface of the second camera function identified by the first function label, hiding the related controls displayed on the first interface of the first camera function, where the related controls include all function labels displayed on the first interface of the first camera function. In this design, because both the first interface and the second interface contain a viewfinder frame and a shutter button, hiding the related controls of the first interface during the switch lets the user intuitively perceive the interface change while the second interface is displayed, and also lets the user know that the sliding operation has triggered the camera's quick-control function, improving the user experience. Furthermore, displaying text corresponding to the triggered camera function at the original function label position lets the user know which camera function the current sliding operation has triggered, further improving the user experience.
In a possible design of the first aspect, the first operation further includes a long-press operation, and the camera function control method further includes: on the first interface of the first camera function, when a long-press operation exists in the first region, determining whether the duration of the long press reaches a preset duration threshold; if the duration of the long press reaches the preset duration threshold, switching to display the second interface of the video recording function; and on the second interface of the video recording function, when the duration of the long press exceeds the duration threshold, controlling the video recording function to run. In this design, compared with the photo function, which has multiple photo modes, the video recording function is relatively simple, so an additional user operation distinct from that of the photo function is provided for it. This operation is simple, easy to remember, and can quickly start the video recording function.
In a possible design of the first aspect, the camera function control method further includes: on the second interface of the video recording function, when the long-press operation ends, stopping the video recording function and restoring the display of the first interface of the first camera function. In this design, when the user lifts their finger, the long press ends and recording stops. To reduce user operations and make it convenient for the user to continue using the shortcut function, the video recording function opened in this way automatically restores, when it ends, the interface that was displayed before it started, improving the user experience.
In a possible design of the first aspect, the function labels are slidable; when the user slides any function label on the first interface of the first camera function, the display positions of all function labels change. In this design, sliding the function labels lets the user operate more camera functions on the same interface. Because the partitions or angle ranges in the present application are not fixedly bound to a particular function label, sliding toward the same partition or angle range in different application scenarios can trigger different camera functions, so the camera control method of the present application can adapt to various application scenarios and improve convenience for the user.
In a possible design of the first aspect, the camera function control method further includes: obtaining a second function label selected by the user by sliding the function labels on the first interface of the first camera function; and switching to display the first interface of a third camera function identified by the second function label. In this design, sliding the function labels can display the operation interfaces corresponding to different function labels; that is, the sliding operation of the present application can be used for camera function control on the operation interface of any camera function, which facilitates quick camera operation in various application scenarios.
In a possible design of the first aspect, the second camera function includes a burst shooting function, the first function label includes a photo function label, and the second interface of the second camera function includes the text of the burst shooting function. On the second interface of the second camera function, in response to the sliding operation continuing to slide toward the position of the text of the second camera function, controlling the second camera function to run includes: on the second interface of the burst shooting function, in response to the sliding operation continuing to slide toward the position of the text of the burst shooting function, controlling the burst shooting function to continuously capture photos. In this design, the burst shooting function specifically includes a photo function that takes continuous photos or a portrait function that takes continuous photos. Through the user's sliding operation, the photo function or portrait function performs continuous shooting, forming a new burst shooting function, which reduces the user's photo-taking operations and improves the user experience.
The camera function control method further includes: when the sliding operation ends or the number of continuously captured photos reaches a preset threshold, stopping the burst shooting function and restoring the display of the first interface of the first camera function, where the number of continuously captured photos is related to the duration of the sliding operation, and the duration includes the sliding time and the dwell time of the sliding operation on the second interface of the burst shooting function. This design specifies the conditions for exiting the camera function: first, the sliding operation ends, for example when the user lifts their finger; second, the number of continuously captured photos reaches the maximum. After the operation ends, the interface displayed before it started is automatically restored, reducing user operations and making it convenient for the user to continue using the shortcut function.
In a possible design of the first aspect, the second camera function includes a portrait function, and the first function label includes a portrait function label. On the second interface of the second camera function, in response to the sliding operation continuing to slide toward the position of the text of the second camera function, controlling the second camera function to run includes: on the second interface of the portrait function, in response to the sliding operation continuing to slide toward the position of the text of the portrait function, controlling the portrait function to take photos; and calling an image processing program suited to the portrait function to process the generated photos and save the processed images. In this design, the sliding operation can not only quickly trigger a camera function but also achieve continuous shooting, reducing user operations. Furthermore, this design can further apply image processing to the generated photos: for example, after a picture is taken, it can be processed with the algorithm corresponding to the night-scene function before saving, or processed with the algorithm corresponding to the portrait function for background blurring and portrait beautification (skin smoothing, face slimming) before saving, or a movie-mode filter can be added while recording video, saving user operations and improving the user experience.
In a possible design of the first aspect, the second camera function includes a video recording function, and the first function label includes a video recording function label. On the second interface of the second camera function, in response to the sliding operation continuing to slide toward the position of the text of the second camera function, controlling the second camera function to run includes: on the second interface of the video recording function, in response to the sliding operation continuing to slide toward the position of the text of the video recording function, controlling the video recording function to record; and when the sliding operation ends, switching to display a third interface of the video recording function and keeping the video recording function running. In this design, sliding from the shutter button toward the position of the video recording function label controls the video recording function, improving the convenience of using it. In addition, considering that video recording runs longer than photo taking and also supports pausing, when the sliding operation ends, for example when the user lifts their finger, the interface is not restored; instead, the full video recording operation interface is displayed and recording continues, so the user can control the recording process at any time.
In a possible design of the first aspect, the camera function control method further includes: on the third interface of the video recording function, when a stop-recording instruction triggered by the user is received, stopping the video recording function and restoring the display of the first interface of the first camera function. In this design, the third interface of the video recording function displays recording-process buttons, such as a recording control button and a pause button. When the user manually taps the recording control button to trigger the stop-recording instruction, it is determined that the user's real intention is to exit recording; therefore, while the video recording function is stopped, the interface displayed before it started is restored, which facilitates the user's subsequent operations and improves the user experience.
In a second aspect, the present application further provides an electronic device, including a processor and a memory, where the processor is configured to call a computer program in the memory to execute the camera function control method provided by the first aspect or any design of the first aspect.
In a third aspect, the present application further provides a computer-readable storage medium storing computer instructions which, when run on an electronic device, cause the electronic device to execute the camera function control method provided by the first aspect or any design of the first aspect.
In a fourth aspect, the present application further provides a computer program product including computer instructions which, when run on an electronic device, cause the electronic device to execute the method described above.
For descriptions of the effects of the second, third, and fourth aspects, refer to the description of the effects of the first aspect, which will not be repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a camera function interface of a mobile phone provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a user operation flow of an existing camera function control method;
FIG. 3 is a schematic diagram of another user operation flow of an existing camera function control method;
FIG. 4 is a schematic diagram of the user operation flow of the camera function control method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the division of the camera function interface provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a user performing a sliding operation in the first region and the second region, provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a user performing a sliding operation in the first region and in any partition of the second region, provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a user performing a sliding operation in the first region and within any angle range of the second region, provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of the camera function control method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an interface in which a user slides to take continuous photos, provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of another interface in which a user slides to take continuous photos, provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of an interface in which a user slides to take portrait burst shots, provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of another interface in which a user slides to take portrait burst shots, provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of an interface in which a user slides to take a portrait photo, provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of a video recording interface of a mobile phone provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of an interface in which a user slides to record video, provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of an interface in which a user long-presses to record video, provided by an embodiment of the present application;
FIG. 18 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
具体实施方式
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地描述。
本申请的说明书、权利要求书及附图中的术语“第一”和“第二”等仅用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备等,没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元等,或可选地还包括对于这些过程、方法、产品或设备等固有的其它步骤或单元。
在本申请中提及的“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本申请的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员可以显式地和隐式地理解的是,本申请描述的实施例可以与其它实施例相结合。
在本申请中,“至少一个(项)”是指一个或者多个,“多个”是指两个或两个以上,“至少两个(项)”是指两个或三个及三个以上,“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系,例如,“A和/或B”可以表示:只存在A,只存在B以及同时存在A和B三种情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指这些项中的任意组合。例如,a,b或c中的至少一项(个),可以表示:a,b,c,“a和b”,“a和c”,“b和c”,或“a和b和c”。
本申请实施例中,电子设备具有一个或多个摄像头,并安装有相机应用程序,可以实现拍照、录像等功能。可以理解的是,该电子设备可以是手机、穿戴式设备、平板电脑、带无线收发功能的电脑、虚拟现实(virtual reality,VR)终端设备、增强现实(augmented reality,AR)终端设备等等。下面具体以手机为例对电子设备进行示例性说明。
图1为本申请实施例提供的手机显示的一种相机功能界面示意图。手机支持多种相机功能,比如拍照功能、人像功能、夜景功能、录像功能等,并且每一种相机功能分别对应一种相机功能界面。例如,如果用户选择的是拍照功能,则手机对应显示的是拍照的功能界面,而如果用户选择的是录像功能,则手机对应显示的是录像的功能界面。需要说明的是,当用户打开手机的相机应用时,通常会默认显示一种功能界面,比如默认显示拍照的功能界面,如果用户想使用相机的其他功能,比如人像功能,则用户需要先手动选择人像功能,然后手机再自动切换显示人像功能对应的功能界面。
如图1所示,相机功能界面的布局主要包括:取景框101、功能标签102、拍摄按钮103。
其中,取景框101用于显示摄像头实时采集的图像。
功能标签102用于指示不同的相机功能以供用户选择,每一个功能标签102对应一种相机功能,比如拍照功能标签、人像功能标签、夜景功能标签、录像功能标签,在同一相机功能界面可以显示部分或全部功能标签。功能标签102可以是显示在取景框101的任一侧,可以是多个功能标签102横向排列,也可以是纵向排列或者围绕拍摄按钮103环状排列。当功能标签102被触发时,手机自动显示被触发标签对应的相机功能的界面,比如拍照功能标签被触发,则显示拍照的功能界面。
拍摄按钮103用于执行对应的相机功能,具体根据手机当前显示的拍照功能界面确定。比如手机当前显示的是拍照功能界面,则当用户触控该拍摄按钮103时,手机自动对当前取景框101中图像进行拍照并保存照片。
需要补充说明的是,基于相机应用的实际设计要求,相机功能界面的布局还可以包括:图库缩略图104、前后摄像头切换按钮105、摄像头焦距调节控件106、智慧视觉控件107、AI摄影控件108、闪光灯控件109、滤镜拍摄模式控件110、设置按钮111等。其中,点击图库缩略图104可以显示相册中最近一次保存的照片或视频,向左或者向右滑动,还可以查看相册中的其他图片或视频;点击前后摄像头切换按钮105可以进行前后摄像头切换;滑动摄像头焦距调节控件106可以调节摄像头焦距;点击智慧视觉控件107可以打开预置应用功能,比如物品识别、文字识别等;打开AI摄影控件108可以根据不同的场景自动识别人像、夜景等拍照环境,并且自动调节拍照参数;点击闪光灯控件109可以控制闪光灯的开启或关闭;点击滤镜拍摄模式控件110可以选择不同拍摄模式,为拍摄的图片添加不同的滤镜,比如原图模式、青涩滤镜模式、 印象滤镜模式等;设置按钮111可以打开设置菜单进行相机参数设置。一般情况下,不同的相机功能对应的相机功能界面会存在差异,比如拍照功能的功能界面与人像功能的功能界面、录像功能的功能界面等存在差异,具体表现在相机功能界面的布局上,具体根据相机应用实际需要进行设计,在此不做过多赘述。
图2为现有相机功能控制方法的一种用户操作流程示意图。以人像功能为例,如图2中的2a所示,用户先从手机桌面点击相机应用的图标启动相机应用(假设耗时1s);如图2中的2b所示,相机应用启动后,手机显示默认的相机功能界面(比如拍照功能界面),在图2中的2b所示的界面中,用户从横向排列的多个功能标签中找到人像功能标签并点击,手机从拍照功能界面切换显示人像功能界面(假设耗时2s),如图2中的2c所示;如图2中的2d所示,用户先对准拍照对象,然后点击拍摄按钮,手机自动运行拍照功能,生成对应照片(假设耗时1s)。
由图2可知,现有相机功能控制方法需要依次经过打开相机应用—>选择某个相机功能—>操作相机功能进行拍照或录像三个操作环节,大约耗时4s,如果用户需要抓拍某些精彩瞬间的画面,则用户需要依次经历以上操作环节才能实现拍照或录像,由于操作环节多,耗时增加,从而降低了拍照或录像的速度,进而错失精彩瞬间。
图3为现有相机功能控制方法的另一种用户操作流程示意图。如图3中的3a所示,用户先从手机桌面点击相机应用的图标启动相机应用;如图3中的3b所示,相机应用启动后,手机显示默认的相机功能界面(比如拍照功能界面);如图3中的3c、3d所示,用户先对准拍照对象,然后以拍摄按钮为滑动起点,若向左滑动则触发相机拍照功能进行快速连续拍照,并保存对应照片;若向右滑动则触发相机录像功能进行录像,生成对应视频。由图3可知,相比图2,该相机功能控制方法采用固定的快捷操作方式实现快速拍照和录像,减少了拍照或录像的操作环节,从而加快了用户操作速度,该类方式虽然一定程度上提升了用户使用相机的速度,但是,在使用前用户需要记住每个滑动方向对应的相机功能,如果未记住或记错,则可能导致操作错误,影响用户体验。
针对上述现有相机功能控制方法存在的问题,本申请实施例提供了一种相机功能控制方法,新的交互方式结合了相机功能选择操作与相机功能控制操作,进而减少了拍照或录像的操作环节,加快了用户操作速度,用户无需刻意记忆滑动方向,并且,本申请的快捷操作支持更多相机功能,进而提高了用户体验。其中,本申请的相机功能选择操作具体指,选择相机功能的操作,比如,在“拍照”功能对应的界面下点击“人像”的操作、在“拍照”功能对应的界面下点击“录像”的操作等,相机功能控制操作指点击拍摄按钮的操作,响应于该操作,电子设备使用相机进行拍照或录像。本申请的方法结合了上述过程,使得用户可以快速使用不同的拍摄模式,且便于记忆,提高用户体验。
下面以具体实施例对本申请的技术方案以及本申请的技术方案如何解决上述技术问题进行详细说明。下面各具体的实施例可以独立实现,也可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。
图4为本申请实施例提供的相机功能控制方法的用户操作流程示意图。下面以人像功能控制为例,说明用户使用手机的相机应用进行人像拍照的操作环节,具体如下:
操作环节一:用户打开相机应用
示例性地,如图4中的4a所示,用户打开手机桌面,找到相机图标,然后点击相机图标进而打开相机应用,这是较为常见的打开相机应用的方式。
在一可选实施例中,可以采用快捷键或快捷操作的方式触发打开相机应用。例如,在手机锁屏状态下,用户在锁屏界面找到相机图标,点击并向上滑动,打开相机应用,从而节省用户打开相机应用的时间。
操作环节二:用户选择人像功能并运行该功能进行拍照
示例性地,如图4中的4b所示,相机应用打开后,手机显示默认的拍照功能操作界面;如图4中的4c所示,用户对准拍照对象,并在当前拍照功能操作界面上,从拍摄按钮位置开始朝向人像功能标签进行滑动操作(图4中的4c的黑色箭头指向滑动方向);如图4中的4d所示,手机检测到滑动操作后,确定当前滑动操作触发的相机功能为人像功能,响应于滑动操作捕获图像并采用人像处理算法对图像进行处理后保存。
相比图2中现有相机功能控制方法的操作方式,本申请将现有操作环节中的后两个操作环节合并为一个操作环节,从而方便用户快速切换不同的拍照模式,提高了用户体验。如果手机是在相机应用已经启动的状态下进行相机功能控制,则对于用户来说,只需一个滑动操作即可一步同时实现相机功能选择与控制相机功能运行,进而大幅提升了用户操作速度。而相比图3中的现有相机功能控制方法的操作方式中采用固定的快捷操作方式进行相机功能控制,比如向左滑动只能固定触发拍照功能,而向右滑动只能固定触发录像功能,本申请则是重新定义了一种新的相机功能控制操作方式,该操作方式将滑动方向与功能标签的位置关联,用户只需朝目标功能标签所在位置滑动即可触发该功能标签对应的相机功能。因此,用户操作方式上会更加灵活,也即滑动指向什么相机功能就触发什么相机功能,因而能够基于用户选择的任一种相机功能进行快捷控制,比如向不同方向滑动可以对应控制多种不同的相机功能;在不同应用情景下,即使朝同一个方向滑动也能触发不同的相机功能,或者同样的相机功能在不同应用情景下,即使朝不同方向滑动也能触发。因此本申请的相机功能控制方式更加灵活,能够基于用户使用时的应用场景进行调整。此外,用户无需刻意记忆滑动方向与相机功能的对应关系,只需朝向某个功能标签滑动即可触发对应相机功能运行,由于操作方式更加灵活,因此可以实现所有相机功能的快捷控制。
在本申请的另一实施例中,如图4所示,在图4中的4b所示的界面中,用户执行从拍摄按钮向“拍照”标签的滑动操作,手机响应于用户的滑动操作,执行连续拍照,并保存多张图片。
在一种可能的实现方式中,用户从拍摄按钮滑动到“拍照”标签后,继续向上滑动一定距离后,手指停留在手机的屏幕上,手机响应于用户停留在手机的屏幕上的操作,继续执行连续拍照,并保存图片,连续拍照并保存图片的数量可以与用户的手指在屏幕上停留的时间长度相关,当用户的手指从屏幕上抬起后,手机响应于抬起操作,可以停止拍照。
在一种可能的实现方式中,手机默认设置了连拍的最大次数,或者用户可以预先设置连拍的最大次数。这样,如果手机响应于用户停留在手机的屏幕上的操作,继续执行连续拍照,并保存图片,当执行连续拍照的次数达到最大次数时,即使手机检测到用户的手指仍然停留在手机的屏幕上,也就是未检测到用户的手指从屏幕上的抬起操作,手机也可以停止执行拍照。
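上述连拍停止条件(手指抬起,或连拍数量达到最大次数)可以用如下示意性的Python片段概括。该片段仅是对本段逻辑的一个假设性草图,并非本申请的实际实现,其中以离散事件序列模拟手指的停留与抬起:

```python
# 示意性草图:连拍的两个停止条件——手指抬起("lift")或达到最大连拍数量
def run_burst(events, max_count=20):
    """events 为按时间顺序的触控事件序列,"hold" 表示手指停留在屏幕上,
    "lift" 表示手指抬起;返回实际连拍并保存的照片数量。"""
    count = 0
    for e in events:
        if e == "lift":             # 滑动操作失效:用户抬起手指,停止拍照
            break
        if e == "hold":
            if count >= max_count:  # 达到最大连拍数量,即使手指未抬起也停止
                break
            count += 1              # 停留期间每个节拍拍摄并保存一张照片
    return count
```

其中事件节拍、`max_count` 默认值等均为说明用的假设。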
在本申请的另一实施例中,如图4所示,在图4中的4b所示的界面中,用户执行从拍摄按钮向“录像”标签的滑动操作,手机响应于用户的滑动操作,执行录像功能,并保存录制的视频。具体的过程将在下文中详细说明。
需要说明的是,用户也可以执行从拍摄按钮到其他标签的滑动操作,比如说,“电影”、“夜景”等标签。手机响应于用户的滑动操作,可以执行与标签对应的相机功能,比如,拍摄图片后,采用夜景功能对应的算法对图片进行处理后保存,或者在录制视频的过程中,添加电影模式下的滤镜,等等。另外,虽然上述实施例以手机为例对本申请实施例的拍摄方法进行说明,但本申请不限于此,本申请实施例的拍摄方法的执行主体还可以是平板电脑、折叠屏设备等电子设备。
下面进一步对用户使用滑动操作实现相机功能选择和相机功能运行进行说明。
示例性地,如图5所示,相机功能界面包括第一区域501与第二区域502,第一区域501为相机功能的控制区域,优选为拍摄按钮,第二区域502为第一区域501周围的一个区域。下面以第一区域501为拍摄按钮进行举例说明。
本申请实施例中,基于第一区域501以及从第一区域501向第二区域502滑动并经过第一区域501边缘时的交叉角度,设置与滑动操作关联的控制条件,手机根据该控制条件可以确定滑动操作选择的相机功能,然后手机再控制滑动操作选择的相机功能运行,对于用户来说,用户只需进行一次滑动操作即可实现相机功能选择以及触发相机功能运行。
本申请实施例中,控制条件优选设置为滑动操作需要分别经过第一区域501与第二区域502,滑动操作的滑动方向可以是从第一区域501到第二区域502,也可以是从第二区域502到第一区域501。其中,滑动操作经过第一区域501(拍摄按钮)确定用户想拍照或录像,滑动操作经过第一区域501进入第二区域502时确定用户选择的相机功能,也即用户通过一个滑动操作即可同时实现相机功能选择与控制相机功能运行。基于现有相机应用中拍摄按钮的界面布局位置(按钮位于界面底部中间位置)以及按钮区域大小(按钮区域较小),优选将第一区域501作为滑动操作的起点区域,将第二区域502作为滑动操作的终点区域,也即滑动操作的滑动方向优选从第一区域501到第二区域502。
示例性地,如图6所示,当用户的滑动操作(黑色箭头表示滑动方向)从第一区域501(拍摄按钮区域)向第二区域502(拍摄按钮区域以外的区域)滑动,并在经过第一区域501的边缘时,确定当前滑动操作实现相机功能选择以及触发相机功能运行。
本申请实施例提供两种方式的控制条件并结合滑动操作,实现滑动操作选择相机功能,包括但不限于以下两种方式:
方式一:滑动操作+分区
示例性地,如图7所示,将图6中的第二区域502划分为不同朝向的多个分区,每个分区不重合,每个分区对应一个功能标签位置。例如,图7中,分区701对应人像功能标签,分区702对应拍照功能标签,分区703对应录像功能标签。当滑动操作(黑色箭头表示滑动方向)从拍摄按钮区域进入到分区701时,确定滑动操作触发人像功能;当滑动操作从拍摄按钮区域进入到分区702时,确定滑动操作触发连拍功能;当滑动操作从拍摄按钮区域进入到分区703时,确定滑动操作触发录像功能。本实施例对于分区的数量及划分方式不限。连拍功能可以看成是可实现连续拍照的拍照功能。
示例性地,为便于用户快捷操作,优选将第一区域501上方部分的第二区域502划分为多个不重合的分区,每个分区覆盖一个相机功能标签。分区的划分数量以及对应的相机功能标签可以是由手机厂商预先默认设置,也可以由用户手动配置。
各分区对应的相机功能基于当前显示的相机功能确定。例如,若手机当前显示的是拍照功能,拍照功能标签的左侧是人像功能标签,右侧是录像功能标签,则中间分区702对应连拍功能,左侧分区701对应人像功能,右侧分区703对应录像功能。若手机当前显示的是人像功能,人像功能标签的左侧是夜景功能标签,右侧是拍照功能标签,则中间分区702对应人像功能,左侧分区701对应夜景功能,右侧分区703对应连拍功能。
在一实施例中,优选各分区覆盖功能标签的文字范围,并且各分区与其覆盖的功能标签对应的相机功能对应,从而保证用户进行滑动操作时,只需朝对应功能标签滑动即可保证触发该功能标签对应的相机功能,例如,若用户朝人像功能标签滑动,则在滑动操作经过第一区域501并进入第二区域502时,确定触发人像功能;若用户朝录像功能标签滑动,则在滑动操作经过第一区域501并进入第二区域502时,确定触发录像功能。
此外,为加快控制条件判断的响应速度,优选各分区的一侧边界均与第一区域501的一侧边界相交,当用户的滑动操作从第一区域501出来并进入第二区域502时,确保滑动操作在出第一区域501的一瞬间即可进入到第二区域502的一个分区内,进而可快速确定出滑动操作触发的相机功能。
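方式一的分区命中判断可以概括为:判断滑动操作进入第二区域时的触控点落在哪个分区内,即触发该分区覆盖的功能标签对应的相机功能。以下是一个示意性的Python草图(并非本申请的实际实现),其中分区以矩形近似,坐标与分区数值均为假设:

```python
# 示意性草图:判断触控点进入第二区域的哪个分区,从而确定触发的相机功能
# 分区为互不重合的矩形,以 (left, top, right, bottom) 表示,屏幕坐标 y 向下增大
def hit_partition(point, partitions):
    """point 为触控点坐标 (x, y);partitions 为 {功能名: 矩形} 字典。
    返回触控点所在分区对应的功能名,不在任何分区内时返回 None。"""
    x, y = point
    for func, (left, top, right, bottom) in partitions.items():
        if left <= x < right and top <= y < bottom:
            return func
    return None

# 例:拍摄按钮上方的三个分区,分别覆盖人像、连拍、录像三个功能的文字(坐标为假设)
partitions = {
    "人像": (0, 300, 120, 400),
    "连拍": (120, 300, 240, 400),
    "录像": (240, 300, 360, 400),
}
```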
方式二:滑动操作+角度范围
示例性地,如图8所示,将图6中的第二区域502划分为不同朝向的多个角度范围,每个角度范围不重合,每个角度范围对应一个功能标签位置。例如,图8中,角度范围A对应人像功能,角度范围B对应连拍功能,角度范围C对应录像功能。当滑动操作从拍摄按钮区域进入到第二区域502,且滑动方向对应角度范围A时,确定滑动操作触发人像功能;当滑动操作(黑色箭头表示)从拍摄按钮区域进入到第二区域502,且滑动方向对应角度范围B时,确定滑动操作触发连拍功能;当滑动操作从拍摄按钮区域进入到第二区域502,且滑动方向对应角度范围C时,确定滑动操作触发录像功能。本实施例对于角度范围的数量及划分方式不限。
示例性地,为便于用户快捷操作,假设以x轴正方向为角度划分基准方向,将0-180°对应的第一区域501划分为多个角度范围,每个角度范围覆盖一个功能标签。角度范围的划分数量和各角度范围数值区间可以是由手机厂商预先默认设置,也可以由用户手动配置。
各角度范围对应的相机功能具体基于当前显示的相机功能确定。例如,如果手机当前显示的是拍照功能,拍照功能标签的左侧是人像功能标签,右侧是录像功能标签,则中间的角度范围B(60-120°)对应连拍功能,左侧的角度范围A(120-180°)对应人像功能,右侧的角度范围C(0-60°)对应录像功能。如果手机当前显示的是人像功能,人像功能标签的左侧是夜景功能标签,右侧是拍照功能标签,则中间的角度范围B(60-120°)对应人像功能,左侧的角度范围A(120-180°)对应夜景功能,右侧的角度范围C(0-60°)对应连拍功能。
在一实施例中,各角度范围覆盖功能标签的文字范围,并且各角度范围与其覆盖的功能标签对应的相机功能相对应,从而保证用户进行滑动操作时,只需朝对应功能标签滑动即可保证触发该功能标签对应的相机功能,例如,若用户朝人像功能标签滑动,则在滑动操作经过第一区域501并进入第二区域502时,确定触发人像功能;若用户朝录像功能标签滑动,则在滑动操作经过第一区域501并进入第二区域502时,确定触发录像功能。
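上文所述“各分区或各角度范围对应的相机功能基于当前显示的相机功能确定”可以概括为:中间区域对应当前功能(拍照功能下体现为连拍),左右区域分别对应当前功能标签左右相邻的标签。以下Python草图仅为示意,标签顺序与命名均为假设,并非本申请的实际实现:

```python
# 示意性草图:左/中/右区域对应的相机功能随当前显示的功能标签动态确定
labels = ["夜景", "人像", "拍照", "录像"]   # 界面上横向排列的功能标签顺序(假设)

def zone_functions(labels, current):
    """返回 {'left': 左区域功能, 'center': 中区域功能, 'right': 右区域功能}:
    中间区域在拍照功能下体现为连拍,其余情况对应当前功能;
    左右区域分别对应当前标签左右相邻的功能标签(无相邻标签时为 None)。"""
    i = labels.index(current)
    return {
        "left": labels[i - 1] if i > 0 else None,
        "center": "连拍" if current == "拍照" else current,
        "right": labels[i + 1] if i + 1 < len(labels) else None,
    }
```

该草图与上文示例一致:当前为拍照功能时,左侧区域对应人像、右侧对应录像、中间对应连拍。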
可以理解的是,上述两种方式既可以是由手机厂商预先默认配置其中任一种方式,也可以是由手机厂商预先配置上述两种方式,然后再由用户手动选择其中任一种方式。在上述两种方式中,分区划分方式、角度划分方式既可以是由手机厂商预先默认设置,也可以是由用户手动配置。另外,既可以是基于滑动操作的滑动方向来设置分区或角度范围的划分方式,也可以是基于分区划分方式或角度划分方式来确定滑动操作的滑动方向。需要补充说明的是,滑动操作的滑动方向与分区划分、角度范围划分相关,因此,滑动操作的滑动方向需要与分区划分方式、角度划分方式相互匹配。
本申请中,分区划分、角度范围划分具体与拍摄按钮和各功能标签之间的相对位置相关。为便于用户通过滑动操作精准触发对应的相机功能,因此,分区范围和角度范围都需要覆盖标识各功能标签对应的文字,各功能标签对应的文字可以用于指示用户滑动操作的方向,从而用户无需刻意记忆滑动操作的方向也能精准触发对应的相机功能。
现有相机功能控制方式中,用户在使用前需要先记住每个滑动方向对应的相机功能,如果未记住或记错,则可能导致操作错误。另外,现有相机功能控制方式只有向左和向右两个滑动方向,因此用户仅能使用两种相机功能。而本申请将滑动方向与功能标签位置关联,用户只需向各功能标签位置所在方向滑动即可触发该位置对应的相机功能,同时由于功能标签位置可变,因此,即使是在同一个方向滑动也可能触发不同的相机功能。相比现有相机功能控制方式,本申请相机功能控制方式无需刻意记住滑动操作的方向,同时支持多方向滑动,不仅方便用户操作,而且可以使用多种相机功能,提升了用户抓拍精彩瞬间画面的使用体验。
下面以电子设备为手机,相机功能为拍照功能,对上述图7、图8对应的相机控制方法的实现流程进行说明。
图9为本申请实施例提供的相机功能控制方法的流程示意图。下面示例性地说明相机功能控制方法的步骤,包括:
S901,手机显示第一相机功能的第一界面,该第一界面为用户进行相机功能操作的界面;
其中,相机功能可以是指相机的拍摄模式,手机的相机可以包括多个不同的拍摄模式,也就是说,手机的相机可以包括多个不同的相机功能。举例来说,如上文所述,手机的相机可以包括拍照模式、人像模式、录像模式、夜景模式、电影模式等不同的相机功能。
第一界面可以是任一拍摄模式对应的界面,第一界面可以包括取景框、拍摄按钮以及其他用于设置拍摄参数的控件,其中,取景框可以用于显示手机的相机的摄像头实时捕获的图像,也就是说,取景框用于实时预览摄像头采集的图像;拍摄按钮用于控制拍摄、保存图像或视频,用户可以点击或滑动拍摄按钮,手机响应于用户对拍摄按钮的操作,可以执行拍摄、保存图像或视频;其他用于设置拍摄参数的控件可以包括上文所述的图库缩略图、前后摄像头切换按钮、摄像头焦距调节控件、智慧视觉控件、AI摄影控件、闪光灯控件、滤镜拍摄模式控件、设置按钮等。其中,点击图库缩略图可以显示相册中最近一次保存的照片或视频;点击前后摄像头切换按钮可以进行前后摄像头切换;滑动摄像头焦距调节控件可以调节摄像头焦距;点击智慧视觉控件可以打开预置应用功能,比如物品识别、文字识别等;打开AI摄影控件可以根据不同的场景自动识别人像、夜景等拍照环境,并且自动调节拍照参数;闪光灯控件可以控制闪光灯的开启或关闭;点击滤镜拍摄模式控件可以选择不同拍摄模式,为拍摄的图片添加不同的滤镜,比如原图模式、青涩滤镜模式、印象滤镜模式等;设置按钮可以打开设置菜单进行相机参数设置。
示例性地,手机响应于用户对桌面相机应用图标的触发操作,打开相机应用而显示第一界面。可以理解的是,相机应用打开后,手机首次显示的第一界面可以是默认的拍照功能的界面,如图2中的2b所示。进一步可以理解的是,相机应用打开后,用户还可以通过点击第一界面上的其他功能标签而更改当前手机显示的相机功能的界面,比如用户点击人像功能标签,则手机切换显示人像功能界面,如图2中的2c所示。
本实施例中,手机显示的第一界面可以是相机应用启动时显示的默认相机功能的界面,也可以是用户选择的功能标签对应的相机功能的界面,也即在任一相机功能的界面都可以实现本申请的相机功能控制方法。
S902,手机检测是否存在用户在第一界面的滑动操作,第一界面包括第一区域与第二区域;
示例性地,如图5所示,第一界面包括第一区域501与第二区域502,第一区域501为相机功能的控制区域,优选为拍摄按钮,第二区域502为第一区域501周围的一个区域。
手机在启动相机应用并显示第一界面后,会继续检测在该第一界面上,是否存在用户在手机屏幕上的滑动操作,该滑动操作可以是任意滑动方向、任意滑动路径的滑动操作,当该滑动操作满足预设控制条件时,手机响应于滑动操作进行相机功能控制。
S903,若检测到存在用户在第一界面的滑动操作,则手机基于滑动操作对应触控点的移动轨迹,判断滑动操作是否从第一区域进入第二区域;
本实施例中,当手机检测到存在用户在第一界面的滑动操作时,手机获取该滑动操作在屏幕上的触控点的移动轨迹,并基于该移动轨迹确定是否触发相机功能控制。
示例性地,如图6所示,当滑动操作在屏幕上的触控点的移动轨迹为从第一区域501进入第二区域502时,手机进一步判断由该触控点的移动轨迹形成的滑动方向是否指向某个功能标签,若是,则确定当前滑动操作可以实现相机功能选择以及触发相机功能运行。
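判断滑动操作是否从第一区域进入第二区域,可通过逐点检查触控点移动轨迹是否穿出拍摄按钮边缘来实现,并在穿出的瞬间得到滑动方向。以下为一个示意性草图(拍摄按钮近似为圆形,并非本申请的实际实现):

```python
import math

# 示意性草图:根据触控点移动轨迹,判断滑动是否从第一区域(拍摄按钮,近似为圆)
# 进入第二区域,并在穿出按钮边缘时返回相对按钮中心的滑动方向(单位向量)
def detect_crossing(track, center, radius):
    """track 为触控点序列 [(x, y), ...];滑动起点须在按钮内。
    返回第一个落在按钮外的点相对按钮中心的方向 (dx, dy),未穿出时返回 None。"""
    cx, cy = center
    inside = lambda p: math.hypot(p[0] - cx, p[1] - cy) <= radius
    if not track or not inside(track[0]):
        return None                  # 滑动并非从拍摄按钮区域开始,不触发快捷控制
    for p in track[1:]:
        if not inside(p):            # 穿出按钮边缘,进入第二区域
            d = math.hypot(p[0] - cx, p[1] - cy)
            return ((p[0] - cx) / d, (p[1] - cy) / d)
    return None                      # 轨迹始终在按钮内,视为普通点击
```

得到的方向向量可进一步交给分区命中或角度范围匹配逻辑,确定指向的功能标签。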
下面具体对控制条件结合滑动操作,实现选择相机功能控制进行说明,控制条件结合滑动操作包括但不限于以下两种方式:
方式一:滑动操作+分区,第二区域包括多个不重合的分区,每一分区对应覆盖一个功能标签,该方式的具体实现流程步骤如下:
S904A,若滑动操作从第一区域进入第二区域,则在滑动操作进入第二区域的目标分区时,手机确定目标分区对应的相机功能为滑动操作触发的相机功能;
如图7所示,第二区域502划分为不同朝向的多个分区,每个分区不重合,每个分区对应覆盖一个功能标签,例如,图7中,分区701对应人像功能标签,分区702对应拍照功能标签,分区703对应录像功能标签。当滑动操作从拍摄按钮区域进入到分区701时,确定滑动操作触发人像功能;当滑动操作从拍摄按钮区域进入到分区702时,确定滑动操作触发连拍功能;当滑动操作从拍摄按钮区域进入到分区703时,确定滑动操作触发录像功能。
S904B,手机启动滑动操作触发的相机功能,并调整手机当前显示的第一界面的布局,以显示滑动操作触发的相机功能对应的第二界面,该第二界面为相机功能运行时对应的界面;
用户在第一界面进行滑动操作过程中,手机自动确定滑动操作触发的相机功能,然后启动该相机功能,并在启动过程中调整手机当前显示的第一界面的布局,比如隐藏功能标签、闪光灯图标、设置菜单图标等,或者同时还显示之前第一界面未显示的界面元素,比如触发录像功能时显示录像控制按钮和录像时间。
需要进一步说明的是,启动相机功能运行与显示相机功能对应的第二界面既可以是同时进行,也可以是在显示相机功能对应的第二界面后再启动相机功能运行进行拍照或录像。另外,启动相机功能运行的时机既可以是在用户手指滑出拍摄按钮后进入第二区域之时,也可以是在用户手指划过对应功能标签之时。
S904C,若滑动操作触发的相机功能为连拍功能,则在连拍功能对应的第二界面,手机基于滑动操作,控制连拍功能进行连续拍照。
本实施例中,在触发相机功能后,手机还可以进一步基于滑动操作控制相机功能运行,比如,用户从拍摄按钮滑动到“拍照”标签后,继续向上滑动一定距离后,手指停留在手机的屏幕上,手机响应于用户停留在手机的屏幕上的操作,继续执行连续拍照,并保存图片,连续拍照并保存图片的数量可以与用户的手指在屏幕上停留的时间长度相关,当用户的手指从屏幕上抬起后,手机响应于抬起操作,可以停止拍照。
滑动操作触发的相机功能为实现连续拍照的连拍功能,具体包括实现连续拍照的拍照功能或者实现连续拍照的人像功能。通过用户滑动操作,即可采用拍照功能或人像功能进行连续拍照,从而实现拍照连拍或人像连拍,减少了用户采用某一相机功能进行连拍的操作,提升了用户使用体验。
在一种可能的实现方式中,手机默认设置了连拍的最大次数,或者用户可以预先设置连拍的最大次数。这样,如果手机响应于用户停留在手机的屏幕上的操作,继续执行连续拍照,并保存图片,当执行连续拍照的次数达到最大次数时,即使手机检测到用户的手指仍然停留在手机的屏幕上,也就是未检测到用户的手指从屏幕上的抬起操作,手机也可以停止执行拍照。
本实施例的控制条件结合滑动操作的实施方式,通过对第二区域进行分区,从而将多个功能标签分别与各分区关联,便于用户通过滑动操作对应各分区,进而通过各分区进行多种相机功能控制,相比现有相机功能控制方式,本申请相机功能控制方式无需刻意记住滑动操作的方向,同时支持多方向滑动,不仅方便用户操作,而且可以使用多种相机功能,提升了用户抓拍精彩瞬间画面的使用体验。
方式二:滑动操作+角度,第二区域对应多个不重合的角度范围,每一角度范围对应覆盖一个功能标签,该方式的具体实现流程步骤如下:
S905A,若滑动操作从第一区域进入第二区域,则在滑动操作进入第二区域时,手机计算滑动操作对应的滑动方向与预设基准方向之间的夹角;
如图8所示,将图6中的第二区域502划分为不同朝向的多个角度范围,每个角度范围不重合,每个角度范围对应覆盖一个功能标签。例如,图8中,角度范围A对应人像功能标签,角度范围B对应拍照功能标签,角度范围C对应录像功能标签。当滑动操作从拍摄按钮区域进入到第二区域502,且滑动方向对应角度范围A时,确定滑动操作触发人像功能;当滑动操作从拍摄按钮区域进入到第二区域502,且滑动方向对应角度范围B时,确定滑动操作触发连拍功能;当滑动操作从拍摄按钮区域进入到第二区域502,且滑动方向对应角度范围C时,确定滑动操作触发录像功能。
假设以x轴正方向为角度划分基准方向,当滑动操作从第一区域进入第二区域时,计算滑动操作对应的滑动方向与预设基准方向之间的夹角,该夹角用于确定滑动操作触发的相机功能。
S905B,手机确定包含夹角的角度范围对应的相机功能为滑动操作触发的相机功能;
如图8所示,假设角度范围C(0-60°)对应录像功能标签,角度范围B(60-120°)对应拍照功能标签,角度范围A(120-180°)对应人像功能标签;若计算出的夹角为30°,该夹角在0-60°角度范围内,确定滑动操作触发录像功能;若计算出的夹角为90°,该夹角在60-120°角度范围内,确定滑动操作触发连拍功能;若计算出的夹角为150°,该夹角在120-180°角度范围内,确定滑动操作触发人像功能。
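步骤S905A、S905B中夹角的计算与角度范围匹配,可以用如下示意性Python片段表达(并非本申请的实际实现)。屏幕坐标系y轴向下,故计算数学角度时取 -dy;角度范围取值与上文示例一致:

```python
import math

# 示意性草图:以 x 轴正方向为预设基准方向,计算滑动方向的夹角并映射到相机功能
RANGES = [((0, 60), "录像"), ((60, 120), "连拍"), ((120, 180), "人像")]

def function_for_direction(dx, dy):
    """dx, dy 为滑动方向向量(屏幕坐标,向上滑动时 dy 为负)。
    返回夹角所落入的角度范围对应的相机功能,无匹配时返回 None。"""
    angle = math.degrees(math.atan2(-dy, dx)) % 360   # 屏幕 y 轴向下,故取 -dy
    for (lo, hi), func in RANGES:
        if lo <= angle < hi:
            return func
    return None
```

例如滑动方向约为30°时触发录像、90°时触发连拍、150°时触发人像,与上文示例一致。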
S905C,手机启动滑动操作触发的相机功能,并调整手机当前显示的第一界面的布局,以显示滑动操作触发的相机功能对应的第二界面,该第二界面为相机功能运行时对应的界面;
S905D,若滑动操作触发的相机功能为连拍功能,则在连拍功能对应的第二界面,手机基于滑动操作,控制连拍功能进行连续拍照。
上述步骤S905C、S905D分别与步骤S904B、S904C相同,因此在此不再赘述。
示例性的,如图10所示的一种滑动拍照界面的示意图,如图10中的10a所示,手机响应于用户的操作打开相机应用后,手机默认显示拍照功能标签对应的第一界面,用户未选择其他功能标签,而是直接在当前拍照功能标签对应的第一界面进行滑动操作(如图10中的10a所示的黑色箭头表示滑动方向);如图10中的10b所示,当滑动操作从拍摄按钮内部区域向外滑动并经过拍摄按钮边缘时,手机根据滑动操作在屏幕上的触控点的移动轨迹,确定当前滑动方向指向拍照功能标签,隐藏与当前拍照无关的界面元素,比如各种功能标签、闪光灯图标、设置菜单图标等,显示取景框、图 库缩略图以及拍摄按钮(也即第二界面),并在原拍照功能标签位置显示“连拍”字样,同时在拍摄按钮中间位置显示数字,该数字表示当前连续累计拍摄的照片数量(初始值为0);如图10中的10c所示,当滑动操作继续沿拍照功能的文字(“连拍”)所在方位滑动进入第二区域502并经过“连拍”字样时,显示滑动操作的触控点与拍摄按钮边缘之间的连线,该连线用于指示滑动操作对应的滑动方向;如图10中的10d所示,当滑动操作沿当前滑动方向继续滑动时,拍摄按钮中间位置显示的数字开始增加,同时随着连拍照片数量的增加,图库缩略图显示的缩略预览照片也相应刷新显示;如图10中的10e所示,当滑动操作停止但触控点未消失时(用户手指仍然停留在当前第二界面),拍摄按钮中间位置显示的数字继续增加,需要进一步说明的是,当连续累计拍摄的照片数量达到最大连拍数量阈值时,手机结束连拍并恢复显示拍照功能标签对应的第三界面;如图10中的10f所示,当滑动操作对应的触控点消失(用户手指离开当前第二界面),手机结束连拍并恢复显示连拍开始前的拍照功能标签对应的第三界面,并刷新图库缩略图的显示内容。其中,第三界面和第一界面都属于相机功能的操作界面,二者的区别在于,图库缩略图显示的内容不同,以及取景框实时显示的内容不同,由于图库缩略图显示的内容与取景框实时显示的内容都是动态变化的,因此在一定程度上可以直接将第三界面看成是在不同时刻的第一界面。
示例性的,如图11所示的另一种滑动拍照界面的示意图。如图11中的11a所示,手机响应于用户的操作打开相机应用后,手机默认显示拍照功能标签对应的第一界面;如图11中的11b所示,用户选择其他相机功能标签对应的第一界面,比如选择人像功能标签,进而手机对应显示人像功能标签对应的第一界面,用户在当前人像功能标签对应的第一界面进行滑动操作(如图11中的11b所示的黑色箭头表示滑动方向);如图11中的11c所示,当滑动操作从拍摄按钮内部区域向外滑动并经过拍摄按钮边缘时,手机根据滑动操作在屏幕上的触控点的移动轨迹,确定当前滑动方向指向拍照功能标签,隐藏与当前拍照无关的界面元素,比如各种功能标签、闪光灯图标、设置菜单图标等,仅显示取景框、图库缩略图以及拍摄按钮(也即第二界面),并在原拍照功能标签位置显示“连拍”字样,同时在拍摄按钮中间位置显示数字,该数字表示当前连续累计拍摄的照片数量(初始值为0);如图11中的11d所示,当滑动操作继续沿拍照功能的文字(“连拍”)所在方位滑动进入第二区域502并经过“连拍”字样时,显示滑动操作的触控点与拍摄按钮边缘之间的连线,该连线用于指示滑动操作对应的滑动方向;如图11中的11e所示,当滑动操作沿当前滑动方向继续滑动时,拍摄按钮中间位置显示的数字开始增加,同时随着连拍照片数量的增加,图库缩略图显示的缩略预览照片也相应刷新显示;如图11中的11f所示,当滑动操作停止但触控点未消失时(用户手指仍然停留在当前第二界面),拍摄按钮中间位置显示的数字继续增加,需要进一步说明的是,当连续累计拍摄的照片数量达到最大连拍数量阈值时,手机结束连拍并恢复显示人像功能标签对应的第一界面;如图11中的11g所示,当滑动操作对应的触控点消失(用户手指离开当前第二界面),手机结束连拍并恢复显示连拍开始前的人像功能标签对应的第三界面,并刷新图库缩略图的显示内容。其中,第三界面和第一界面都属于相机功能的操作界面,二者的区别在于,图库缩略图显示的内容不同,以及取景框实时显示的内容不同。
示例性的,如图12所示的一种滑动人像连拍界面的示意图,如图12中的12a所示,手机响应于用户的操作打开相机应用后,手机默认显示拍照功能标签对应的第一界面,用户未选择其他功能标签,而是直接在当前拍照功能标签对应的第一界面进行滑动操作(如图12中的12a所示的黑色箭头表示滑动方向);如图12中的12b所示,当滑动操作从拍摄按钮内部区域向外滑动并经过拍摄按钮边缘时,手机根据滑动操作在屏幕上的触控点的移动轨迹,确定当前滑动方向指向人像功能标签,隐藏与当前拍照无关的界面元素,比如各种功能标签、闪光灯图标、设置菜单图标等,仅显示取景框、图库缩略图以及拍摄按钮(也即第二界面),并在原人像功能标签位置显示“人像连拍”字样,同时在拍摄按钮中间位置显示数字,该数字表示当前连续累计拍摄的照片数量(初始值为0);如图12中的12c所示,当滑动操作继续沿人像功能的文字(“人像连拍”)所在方位滑动进入第二区域502并经过“人像连拍”字样时,显示滑动操作的触控点与拍摄按钮边缘之间的连线,该连线用于指示滑动操作对应的滑动方向;如图12中的12d所示,当滑动操作沿当前滑动方向继续滑动时,拍摄按钮中间位置显示的数字开始增加,同时随着连拍照片数量的增加,图库缩略图显示的缩略预览照片也相应刷新显示;如图12中的12e所示,当滑动操作停止但触控点未消失时(用户手指仍然停留在当前第二界面),拍摄按钮中间位置显示的数字继续增加,需要进一步说明的是,当连续累计拍摄的照片数量达到最大连拍数量阈值时,手机结束人像连拍并恢复显示拍照功能标签对应的第一界面;如图12中的12f所示,当滑动操作对应的触控点消失(用户手指离开当前第二界面),手机结束人像连拍并恢复显示人像连拍开始前的拍照功能标签对应的第三界面并刷新图库缩略图的显示内容。其中,第三界面和第一界面都属于相机功能的操作界面,二者的区别在于,图库缩略图显示的内容不同,以及取景框实时显示的内容不同。
示例性的,如图13所示的另一种滑动人像连拍界面的示意图。如图13中的13a所示,手机响应于用户的操作打开相机应用后,手机默认显示拍照功能标签对应的第一界面;如图13中的13b所示,用户选择其他相机功能标签对应的第一界面,比如选择人像功能标签,进而手机对应显示人像功能标签对应的第一界面,用户在当前人像功能标签对应的第一界面进行滑动操作(如图13中的13b所示的黑色箭头表示滑动方向);如图13中的13c所示,当滑动操作从拍摄按钮内部区域向外滑动并经过拍摄按钮边缘时,手机根据滑动操作在屏幕上的触控点的移动轨迹,确定当前滑动方向指向人像功能标签,隐藏与当前拍照无关的界面元素,比如各种功能标签、闪光灯图标、设置菜单图标等,仅显示取景框、图库缩略图以及拍摄按钮(也即第二界面),并在原人像功能标签位置显示“人像连拍”字样,同时在拍摄按钮中间位置显示数字,该数字表示当前连续累计拍摄的照片数量(初始值为0);如图13中的13d所示,当滑动操作继续沿人像功能的文字(“人像连拍”)所在方位滑动进入第二区域502并经过“人像连拍”字样时,显示滑动操作的触控点与拍摄按钮边缘之间的连线,该连线用于指示滑动操作对应的滑动方向;如图13中的13e所示,当滑动操作沿当前滑动方向继续滑动时,拍摄按钮中间位置显示的数字开始增加,同时随着连拍照片数量的增加,图库缩略图显示的缩略预览照片也相应刷新显示;如图13中的13f所示,当滑动操作停止但触控点未消失时(用户手指仍然停留在当前第二界面),拍摄按钮中间位置显示的 数字继续增加,需要进一步说明的是,当连续累计拍摄的照片数量达到最大连拍数量阈值时,手机结束人像连拍并恢复显示人像功能标签对应的第一界面;如图13中的13g所示,当滑动操作对应的触控点消失(用户手指离开当前第二界面),手机结束人像连拍并恢复显示人像连拍开始前的人像功能标签对应的第三界面并刷新图库缩略图的显示内容。其中,第三界面和第一界面都属于相机功能的操作界面,二者的区别在于,图库缩略图显示的内容不同,以及取景框实时显示的内容不同。
示例性的,如图14所示的一种滑动人像拍照界面的示意图,如图14中的14a所示,手机响应于用户的操作打开相机应用后,手机默认显示拍照功能标签对应的第一界面,用户未选择其他功能标签,而是直接在当前拍照功能标签对应的第一界面进行滑动操作(如图14中的14a所示的黑色箭头表示滑动方向);如图14中的14b所示,当滑动操作从拍摄按钮内部区域向外滑动并经过拍摄按钮边缘时,手机根据滑动操作在屏幕上的触控点的移动轨迹,确定当前滑动方向指向人像功能标签,隐藏与当前拍照无关的界面元素,比如各种功能标签、闪光灯图标、设置菜单图标等,显示取景框、图库缩略图以及拍摄按钮(也即第二界面),并在原人像功能标签位置显示“人像”字样;如图14中的14c所示,当滑动操作继续沿人像功能的文字(“人像”)所在方位滑动进入第二区域502并经过“人像”字样时,显示滑动操作的触控点与拍摄按钮边缘之间的连线,该连线用于指示滑动操作对应的滑动方向,同时拍摄一张人像照片,并刷新图库缩略图显示的缩略预览照片;如图14中的14d所示,当滑动操作对应的触控点消失(用户手指离开当前第二界面),手机结束人像拍照并恢复显示人像拍照开始前的拍照功能标签对应的第三界面,并刷新图库缩略图的显示内容。其中,第三界面和第一界面都属于相机功能的操作界面,二者的区别在于,图库缩略图显示的内容不同,以及取景框实时显示的内容不同。
在一可选实施例中,当控制相机功能进行连续拍照时,手机调用与该相机功能相适应的图像处理程序依次对生成的每一张照片进行图像处理。例如,假设当前手机控制人像功能进行连续拍照,则手机每生成一张人像照片时,自动调用美颜程序对人像照片进行处理,比如瘦脸、磨皮、背景虚化等,如果用户预先设置了美颜参数,则使用用户设置的美颜参数,否则使用美颜程序的默认美颜参数。
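本段中“有用户预设参数则使用之,否则使用默认参数”的处理流程可以概括为如下示意性草图。其中参数名称、取值以及 `process_burst` 函数均为说明用的假设,仅示意对每张生成的照片附加所用美颜参数后保存:

```python
# 示意性草图:连拍过程中对每张生成的照片应用美颜处理参数
DEFAULT_PARAMS = {"磨皮": 3, "瘦脸": 2, "背景虚化": 1}   # 默认美颜参数(假设)

def process_burst(photos, user_params=None):
    """photos 为连拍生成的照片序列;user_params 为用户预设的美颜参数。
    优先使用用户预设参数,否则回退到默认参数;返回 (照片, 所用参数) 列表。"""
    params = user_params if user_params else DEFAULT_PARAMS
    return [(photo, dict(params)) for photo in photos]   # 逐张处理后保存
```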
在一可选实施例中,若用户滑动操作触发的相机功能为录像功能,则在录像功能的第二界面,响应于滑动操作继续沿录像功能的文字所在方位滑动,控制录像功能进行录像;当滑动操作失效时,切换显示录像功能的第三界面,并继续保持录像功能运行。
示例性的,如图15所示的录像界面的布局主要包括:取景框101、前后摄像头切换按钮105、摄像头焦距调节控件106、闪光灯控件109、录像控制按钮112、暂停按钮113、拍照功能切换按钮114以及录像时间115。
其中,取景框101用于显示摄像头实时采集的图像;点击前后摄像头切换按钮105可以进行前后摄像头切换;滑动摄像头焦距调节控件106可以调节摄像头焦距;点击闪光灯控件109可以控制闪光灯的开启或关闭;点击录像控制按钮112可启动或结束录像功能;点击暂停按钮113可在录像过程中暂停录像;点击拍照功能切换按钮114可以从录像功能切换为拍照功能;录像时间115用于显示当前录像时长。
示例性的,如图16所示的滑动录像界面的示意图。如图16中的16a所示,手机响应于用户的操作,打开相机应用,手机默认显示拍照功能标签对应的第一界面,用户未选择其他功能标签,而是直接在当前拍照功能标签对应的第一界面进行滑动操作(如图16中的16a所示的黑色箭头表示滑动方向),当滑动操作从拍摄按钮内部区域向外滑动并经过拍摄按钮边缘时,或者,用户长按拍摄按钮,按住拍摄按钮不放开,如图16中的16b所示,手机根据滑动操作在屏幕上的触控点的移动轨迹,确定当前滑动方向指向录像功能标签,或者响应于长按拍摄按钮的操作,隐藏与当前录像无关的界面元素,比如各种功能标签、设置菜单图标等,显示取景框、拍摄按钮(也即第二界面),并在原录像功能标签位置显示“录像”字样,在拍摄按钮和“录像”标签之间显示连接线,连接线可以用于提示用户滑动到“录像”标签的位置,锁定录像功能,也就是不需要用户的手指持续停留在屏幕上,可以持续录制视频,同时在取景框顶部左侧位置显示录像时间,该数字表示当前录像的累计时长(初始值为00:00);如图16中的16c所示,当滑动操作继续沿录像功能的文字(“录像”)所在方位滑动进入第二区域502并经过“录像”字样时,录像时间增加;如图16中的16d所示,滑动操作对应的触控点消失(用户手指离开当前第二界面),显示录像功能的第三界面,锁定录像功能,录像功能继续运行并且录像时间增加;如图16中的16e所示,用户点击录像界面的录像控制按钮,手机结束录像,录像时间停止增加;如图16中的16f所示,当录像结束时,手机恢复显示录像开始前的拍照功能标签对应的第四界面,并刷新图库缩略图的显示内容。其中,第四界面和第一界面都属于相机功能的操作界面,二者的区别在于,图库缩略图显示的内容不同,以及取景框实时显示的内容不同,由于图库缩略图显示的内容与取景框实时显示的内容都是动态变化的,因此在一定程度上可以直接将第四界面看成是在不同时刻的第一界面。
在一可选实施例中,当手机显示相机功能界面时,若当前相机功能界面的控制区域(也即第一区域)存在长按操作,则触发录像功能运行;当长按操作失效时,手机停止录像功能运行。
示例性的,如图17所示的长按录像界面的示意图。如图17中的17a所示,手机打开相机应用后,手机默认显示拍照功能标签对应的第一界面;如图17中的17b所示,用户在该第一界面的拍摄按钮进行长按操作,当长按时长超过预置时长阈值时,手机隐藏与当前录像无关的界面元素,比如各种功能标签、设置菜单图标等,仅显示取景框、拍摄按钮(也即第二界面),并在原录像功能标签位置显示“录像”字样,并且生成引导线,该引导线连接“录像”字样与拍摄按钮边缘,同时在取景框顶部左侧位置显示录像时间,该数字表示当前录像的累计时长(初始值为00:00);如图17中的17c所示,用户继续进行长按操作,手机继续控制录像功能运行,录像时间增加;如图17中的17d所示,用户松手结束长按操作,手机恢复显示录像开始前的拍照功能标签对应的第三界面,并刷新图库缩略图的显示内容。其中,第三界面和第一界面都属于相机功能的操作界面,二者的区别在于,图库缩略图显示的内容不同,以及取景框实时显示的内容不同。
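长按触发录像的判断逻辑(按住时长达到预置时长阈值才开始录像,松手即长按失效、停止录像)可概括为如下示意性片段,其中阈值数值、返回形式均为说明用的假设,并非本申请的实际实现:

```python
# 示意性草图:长按拍摄按钮触发录像——按住时长达到阈值后开始录像,松手即停止
def long_press_record(press_duration, threshold=0.5):
    """press_duration 为用户按住拍摄按钮的总时长(秒),threshold 为预置时长阈值。
    返回 (是否触发录像, 录像时长):未达阈值视为普通点击,不触发录像;
    达到阈值后,超过阈值的部分记为持续录像的时长(假设)。"""
    if press_duration < threshold:
        return (False, 0.0)
    return (True, press_duration - threshold)
```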
本实施例的控制条件结合滑动操作的实施方式,通过对第二区域进行角度范围划分,从而将各功能标签分别与各角度范围关联,便于用户通过滑动操作对应各角度范围,进而通过角度范围进行多种相机功能控制,相比现有相机功能控制方式,本申请相机功能控制方式无需刻意记住滑动操作的方向,同时支持多方向滑动,不仅方便用户操作,而且可以使用多种相机功能,提升了用户抓拍精彩瞬间画面的使用体验。
与上述实施例相对应,本申请还提供一种电子设备,该电子设备包括用于存储计算机程序的存储器和用于执行计算机程序的处理器,其中,当存储器中存储的计算机程序被该处理器执行时,触发电子设备执行上述实施例中相机功能控制方法的部分或全部步骤。
图18为本申请实施例提供的一种电子设备的结构示意图。参考图18,电子设备10可以包括处理器100,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备10的具体限定。在本申请另一些实施例中,电子设备10可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器100可以包括一个或多个处理单元,例如:处理器100可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器100中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器100中的存储器为高速缓冲存储器。该存储器可以保存处理器100刚用过或循环使用的指令或数据。如果处理器100需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器100的等待时间,因而提高了系统的效率。
在一些实施例中,处理器100可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。在一些实施例中,处理器100可以包含多组I2C总线。处理器100可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器100可以通过I2C接口耦合触摸传感器180K,使处理器100与触摸传感器180K通过I2C总线接口通信,实现电子设备10的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器100可以包含多组I2S总线。处理器100可以通过I2S总线与音频模块170耦合,实现处理器100与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器100与无线通信模块160。例如:处理器100通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器100与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器100和摄像头193通过CSI接口通信,实现电子设备10的拍摄功能。处理器100和显示屏194通过DSI接口通信,实现电子设备10的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器100与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备10充电,也可以用于电子设备10与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备10,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备10的结构限定。在本申请另一些实施例中,电子设备10也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备10的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备10供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器100。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器100,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器100中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备10的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备10中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备10上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器100中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器100的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器100,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备10上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。
无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器100。无线通信模块160还可以从处理器100接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备10的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备10可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备10通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器100可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备10可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备10可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备10可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备10在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备10可以支持一种或多种视频编解码器。这样,电子设备10可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备10的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
内部存储器121可以包括一个或多个随机存取存储器(random access memory,RAM)和一个或多个非易失性存储器(non-volatile memory,NVM)。
随机存取存储器可以包括静态随机存储器(static random-access memory,SRAM)、动态随机存储器(dynamic random access memory,DRAM)、同步动态随机存储器(synchronous dynamic random access memory,SDRAM)、双倍资料率同步动态随机存取存储器(double data rate synchronous dynamic random access memory,DDR SDRAM,例如第五代DDR SDRAM一般称为DDR5 SDRAM)等;
非易失性存储器可以包括磁盘存储器件、快闪存储器(flash memory)。
快闪存储器按照运作原理划分可以包括NOR FLASH、NAND FLASH、3D NAND FLASH等,按照存储单元电位阶数划分可以包括单阶存储单元(single-level cell,SLC)、多阶存储单元(multi-level cell,MLC)、三阶储存单元(triple-level cell,TLC)、四阶储存单元(quad-level cell,QLC)等,按照存储规范划分可以包括通用闪存存储(英文:universal flash storage,UFS)、嵌入式多媒体存储卡(embedded multi media Card,eMMC)等。
随机存取存储器可以由处理器100直接进行读写,可以用于存储操作系统或其他正在运行中的程序的可执行程序(例如机器指令),还可以用于存储用户及应用程序的数据等。
非易失性存储器也可以存储可执行程序和存储用户及应用程序的数据等,可以提前加载到随机存取存储器中,用于处理器100直接进行读写。
外部存储器接口120可以用于连接外部的非易失性存储器,实现扩展电子设备10的存储能力。外部的非易失性存储器通过外部存储器接口120与处理器100通信,实现数据存储功能。例如将音乐,视频等文件保存在外部的非易失性存储器中。
电子设备10可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器100中,或将音频模块170的部分功能模块设置于处理器100中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备10可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备10接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备10可以设置至少一个麦克风170C。在另一些实施例中,电子设备10可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备10还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动终端平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备10根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备10根据压力传感器180A检测所述触摸操作强度。电子设备10也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备10的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备10围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备10抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备10的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备10通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备10可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备10是翻盖机时,电子设备10可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测电子设备10在各个方向上(一般为三轴)加速度的大小。当电子设备10静止时可检测出重力的大小及方向。还可以用于识别电子设备10姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备10可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备10可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备10通过发光二极管向外发射红外光。电子设备10使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备10附近有物体。当检测到不充分的反射光时,电子设备10可以确定电子设备10附近没有物体。电子设备10可以利用接近光传感器180G检测用户手持电子设备10贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备10可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备10是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备10可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备10利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备10执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备10对电池142加热,以避免低温导致电子设备10异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备10对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备10的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备10可以接收按键输入,产生与电子设备10的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备10的接触和分离。电子设备10可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备10通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备10采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备10中,不能和电子设备10分离。
本实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在电子设备10上运行时,使得电子设备10执行上述相关方法步骤实现上述实施例中的相机功能控制方法。
本实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的相机功能控制方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中的相机功能控制方法。
其中,本实施例提供的电子设备、计算机存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,该模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
该作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
该集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是,以上实施例仅用以说明本申请的技术方案而非限制,尽管参照较佳实施例对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换,而不脱离本申请技术方案的精神和范围。

Claims (16)

  1. 一种相机功能控制方法,应用于电子设备,其特征在于,所述电子设备包括多种相机功能,所述相机功能对应的第一界面包括第一区域与第二区域,所述第一区域为相机的拍摄按钮,所述第二区域为相机的拍摄按钮以外的区域,所述第二区域包括多个用于标识相机功能的功能标签,所述相机功能控制方法包括:
    显示所述电子设备的第一相机功能的第一界面;
    接收用户在所述第一相机功能的第一界面的第一操作,所述第一操作包括滑动操作;
    在所述第一相机功能的第一界面,当所述滑动操作从所述第一区域进入所述第二区域时,判断所述滑动操作对应的滑动方向是否指向第一功能标签;
    若所述滑动方向指向所述第一功能标签,则切换显示所述第一功能标签标识的第二相机功能的第二界面,所述第二界面包括表示所述第二相机功能的文字,所述文字的显示位置与所述第一功能标签的显示位置相同;
    在所述第二相机功能的第二界面,响应于所述滑动操作继续沿所述第二相机功能的文字所在方位滑动,控制所述第二相机功能运行。
  2. 根据权利要求1所述的相机功能控制方法,其特征在于,所述第二区域包括多个不重合的分区,每一分区覆盖一个功能标签,各分区的划分与拍摄按钮和各功能标签之间的相对位置相关。
  3. 根据权利要求2所述的相机功能控制方法,其特征在于,所述在所述第一相机功能的第一界面,当所述滑动操作从所述第一区域进入所述第二区域时,判断所述滑动操作对应的滑动方向是否指向第一功能标签包括:
    在所述第一相机功能的第一界面,当所述滑动操作从所述第一区域进入所述第二区域时,判断所述滑动操作是否进入所述第二区域的目标分区;
    若所述滑动操作进入所述第二区域的目标分区,则确定所述目标分区覆盖的功能标签为所述滑动操作对应的滑动方向指向的第一功能标签。
  4. 根据权利要求1-3任一项所述的相机功能控制方法,其特征在于,所述第二区域对应多个不重合的角度范围,每一角度范围覆盖一个功能标签,各角度范围的划分与拍摄按钮和各功能标签之间的相对位置相关。
  5. 根据权利要求4所述的相机功能控制方法,其特征在于,所述在所述第一相机功能的第一界面,当所述滑动操作从所述第一区域进入所述第二区域时,判断所述滑动操作对应的滑动方向是否指向第一功能标签包括:
    在所述第一相机功能的第一界面,当所述滑动操作从所述第一区域进入所述第二区域时,计算所述滑动操作对应的滑动方向与预设基准方向之间的夹角;
    若所述角度范围包含所述夹角,则确定包含所述夹角的角度范围覆盖的功能标签为所述滑动操作对应的滑动方向指向的第一功能标签。
  6. 根据权利要求1-5任一项所述的相机功能控制方法,其特征在于,所述相机功能控制方法还包括:
    在切换显示所述第一功能标签标识的第二相机功能的第二界面时,隐藏所述第一相机功能的第一界面显示的相关控件,其中,所述相关控件包括所述第一相机功能的第一界面显示的所有功能标签。
  7. 根据权利要求1-6任一项所述的相机功能控制方法,其特征在于,所述第一操作还包括长按操作,所述相机功能控制方法还包括:
    在所述第一相机功能的第一界面,当所述第一区域存在长按操作时,判断所述长按操作的长按时长是否达到预置时长阈值;
    若所述长按操作的长按时长达到预置时长阈值,则切换显示录像功能的第二界面;
    在所述录像功能的第二界面,当所述长按操作的长按时长超过所述时长阈值时,控制所述录像功能运行。
  8. 根据权利要求7所述的相机功能控制方法,其特征在于,所述相机功能控制方法还包括:
    在所述录像功能的第二界面,当所述长按操作失效时,停止运行所述录像功能,并恢复显示所述第一相机功能的第一界面。
  9. The camera function control method according to any one of claims 1 to 8, wherein the function labels are slidable, and when the user slides any function label on the first interface of the first camera function, the display positions of all function labels change.
  10. The camera function control method according to claim 9, wherein the camera function control method further comprises:
    obtaining a second function label selected by the user on the first interface of the first camera function by sliding the function labels; and
    switching to display a first interface of a third camera function identified by the second function label.
  11. The camera function control method according to any one of claims 1 to 10, wherein the second camera function comprises a burst shooting function, the first function label comprises a photo function label, and the second interface of the second camera function comprises text of the burst shooting function;
    the on the second interface of the second camera function, in response to the sliding operation continuing to slide toward the position of the text of the second camera function, controlling the second camera function to run comprises:
    on the second interface of the burst shooting function, in response to the sliding operation continuing to slide toward the position of the text of the burst shooting function, controlling the burst shooting function to capture photos continuously; and
    the camera function control method further comprises:
    when the sliding operation ends or the number of continuously captured photos reaches a preset threshold, stopping the burst shooting function and restoring display of the first interface of the first camera function, wherein the number of continuously captured photos is related to a duration of the sliding operation, the duration including a sliding time and a dwell time of the sliding operation on the second interface of the burst shooting function.
  12. The camera function control method according to any one of claims 1 to 11, wherein the second camera function comprises a portrait function, the first function label comprises a portrait function label, and the on the second interface of the second camera function, in response to the sliding operation continuing to slide toward the position of the text of the second camera function, controlling the second camera function to run comprises:
    on the second interface of the portrait function, in response to the sliding operation continuing to slide toward the position of the text of the portrait function, controlling the portrait function to take a photo; and
    invoking an image processing program adapted to the portrait function to perform image processing on the generated photo, and saving the processed picture.
  13. The camera function control method according to any one of claims 1 to 12, wherein the second camera function comprises a video recording function, the first function label comprises a video recording function label, and the on the second interface of the second camera function, in response to the sliding operation continuing to slide toward the position of the text of the second camera function, controlling the second camera function to run comprises:
    on the second interface of the video recording function, in response to the sliding operation continuing to slide toward the position of the text of the video recording function, controlling the video recording function to record; and
    when the sliding operation ends, switching to display a third interface of the video recording function and keeping the video recording function running.
  14. The camera function control method according to claim 13, wherein the camera function control method further comprises:
    on the third interface of the video recording function, when a stop-recording instruction triggered by the user is received, stopping the video recording function and restoring display of the first interface of the first camera function.
  15. An electronic device, wherein the electronic device comprises a processor and a memory, the processor being configured to invoke a computer program in the memory to perform the camera function control method according to any one of claims 1 to 14.
  16. A computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions that, when run on an electronic device, cause the electronic device to perform the camera function control method according to any one of claims 1 to 14.
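The angle-range test recited in claims 4 and 5 can be sketched as follows. This is an illustrative Python sketch only, not part of the disclosure: the label names, the specific angle ranges, and the choice of "straight up" as the preset reference direction are all assumptions made for the example.

```python
import math

# Hypothetical angle ranges in degrees, measured from a straight-up
# reference direction; each range covers one function label laid out
# around the shooting button (claim 4).
ANGLE_RANGES = {
    "PORTRAIT": (-90.0, -30.0),
    "PHOTO": (-30.0, 30.0),
    "VIDEO": (30.0, 90.0),
}

def label_for_swipe(start, current):
    """Return the function label the sliding operation points at, or None.

    `start` is the touch-down point inside the shooting button (first
    region); `current` is the point where the slide entered the second
    region. The included angle between the sliding direction and the
    preset reference direction decides which label's angle range the
    slide falls into (claim 5).
    """
    dx = current[0] - start[0]
    dy = start[1] - current[1]  # screen y grows downward; flip so "up" is positive
    angle = math.degrees(math.atan2(dx, dy))  # 0 degrees = straight up
    for label, (lo, hi) in ANGLE_RANGES.items():
        if lo <= angle < hi:
            return label
    return None  # slide direction does not point at any label
```

A slide straight up from the button would land in the hypothetical "PHOTO" range, while a slide up and to the right would land in "VIDEO".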
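The long-press flow of claims 7 and 8 can likewise be sketched as a small state machine. Again this is only an illustrative sketch: the class name, the state names, and the 0.5-second threshold are assumptions; the claims specify only a "preset duration threshold".

```python
HOLD_THRESHOLD_S = 0.5  # hypothetical value for the preset duration threshold

class ShutterLongPress:
    """Toy state machine for a long press on the shooting button (claims 7-8)."""

    def __init__(self):
        self.state = "PHOTO_UI"   # first interface of the first camera function
        self.press_start = None

    def on_press(self, now):
        # Long press begins inside the first region (the shooting button).
        self.press_start = now

    def on_tick(self, now):
        if self.press_start is None:
            return
        held = now - self.press_start
        if self.state == "PHOTO_UI" and held >= HOLD_THRESHOLD_S:
            self.state = "VIDEO_UI"    # duration reached threshold: show video interface
        if self.state == "VIDEO_UI" and held > HOLD_THRESHOLD_S:
            self.state = "RECORDING"   # duration exceeded threshold: run recording

    def on_release(self):
        # Releasing ends the long press: stop recording and restore
        # the first camera function's interface (claim 8).
        self.press_start = None
        self.state = "PHOTO_UI"
```

Feeding the machine timestamps shows the claimed progression: below the threshold nothing changes; at the threshold the video interface appears; beyond it recording runs; releasing restores the photo interface.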
PCT/CN2023/114101 2022-09-22 2023-08-21 Camera function control method, electronic device, and storage medium WO2024060903A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211157870.0A CN117750186A (zh) 2022-09-22 2022-09-22 Camera function control method, electronic device, and storage medium
CN202211157870.0 2022-09-22

Publications (1)

Publication Number Publication Date
WO2024060903A1 true WO2024060903A1 (zh) 2024-03-28

Family

ID=90251393

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/114101 WO2024060903A1 (zh) 2022-09-22 2023-08-21 Camera function control method, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN117750186A (zh)
WO (1) WO2024060903A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180097027A (ko) * 2017-02-22 2018-08-30 Seerslab Inc. Method and apparatus for switching image capturing mode using a user interface
CN112153283A (zh) * 2020-09-22 2020-12-29 Vivo Mobile Communication Co., Ltd. Photographing method and apparatus, and electronic device
CN112770058A (zh) * 2021-01-22 2021-05-07 Vivo Mobile Communication (Hangzhou) Co., Ltd. Photographing method and apparatus, electronic device, and readable storage medium
CN113840089A (zh) * 2021-09-26 2021-12-24 Wingtech Communication Co., Ltd. Camera function switching method and apparatus, terminal device, and storage medium

Also Published As

Publication number Publication date
CN117750186A (zh) 2024-03-22

Similar Documents

Publication Publication Date Title
EP4084450B1 (en) Display method for foldable screen, and related apparatus
EP4024845A1 (en) Time-lapse photography method and device
WO2020168965A1 (zh) Control method for electronic device having foldable screen, and electronic device
WO2020224449A1 (zh) Operation method for split-screen display, and electronic device
US20220206682A1 (en) Gesture Interaction Method and Apparatus, and Terminal Device
CN113475057B (zh) Video recording frame rate control method and related apparatus
CN110658975B (zh) Mobile terminal control method and apparatus
CN114514498A (zh) Operation method for electronic device, and electronic device
WO2023273323A9 (zh) Focusing method and electronic device
CN113891009B (zh) Exposure adjustment method and related device
CN114887323B (zh) Electronic device control method and electronic device
WO2021238370A1 (zh) Display control method, electronic device, and computer-readable storage medium
WO2023131070A1 (zh) Electronic device management method, electronic device, and readable storage medium
WO2023241209A1 (zh) Desktop wallpaper configuration method and apparatus, electronic device, and readable storage medium
CN116048243B (zh) Display method and electronic device
WO2022042768A1 (zh) Index display method, electronic device, and computer-readable storage medium
CN114089902A (zh) Gesture interaction method and apparatus, and terminal device
CN113923372B (zh) Exposure adjustment method and related device
WO2024060903A1 (zh) Camera function control method, electronic device, and storage medium
WO2021223560A1 (zh) Screen state control method and electronic device
CN114173005B (zh) Application layout control method and apparatus, terminal device, and computer-readable storage medium
WO2023071497A1 (zh) Shooting parameter adjustment method, electronic device, and storage medium
WO2023124178A1 (zh) Method and apparatus for displaying preview image, and readable storage medium
WO2023020420A1 (zh) Volume display method, electronic device, and storage medium
CN114125144B (zh) Anti-accidental-touch method, terminal, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 23867196
    Country of ref document: EP
    Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 2023867196
    Country of ref document: EP
ENP Entry into the national phase
    Ref document number: 2023867196
    Country of ref document: EP
    Effective date: 20240626