CN115484394A - Guiding and using method of air gesture and electronic equipment - Google Patents


Info

Publication number
CN115484394A
CN115484394A (application CN202111679527.8A)
Authority
CN
China
Prior art keywords
gesture
camera
electronic equipment
user
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111679527.8A
Other languages
Chinese (zh)
Other versions
CN115484394B (en)
Inventor
黄雨菲
易婕
牛思月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Publication of CN115484394A
Application granted
Publication of CN115484394B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording
    • H04N 5/91: Television signal processing therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a method for guiding the use of an air gesture, and an electronic device, relating to the field of terminal technologies. The method can display an air gesture identifier when it identifies that a user needs to use an air gesture, reminding the user in time and improving the user's efficiency during shooting. The method includes: the electronic device displays a shooting preview interface. If the time difference between a first time and the current time is greater than a first preset time, then in response to detecting that the electronic device is recording a video and is connected to a selfie stick, the electronic device displays a first popup on the shooting preview interface. The first popup includes a gesture identifier used to remind the user that the shooting mode can be switched with an air gesture; the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture.

Description

Guiding and using method of air gesture and electronic equipment
The present application claims priority to Chinese patent application No. 202110676709.3, entitled "A method for creating a user video based on a story line mode and an electronic device", filed on June 16, 2021, and to Chinese patent application No. 202111436311.9, entitled "A method for guiding the use of air gestures and an electronic device", filed on November 29, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method for guiding the use of an air gesture, and an electronic device.
Background
With the development of Internet technology, application programs offer increasingly rich functions. When an application gains a new function, a novice guide can pop up on the application's interface to show the user how to use the new function, so that the user can learn and master it more quickly.
However, even after viewing the novice guide, users often forget to use the new function.
Disclosure of Invention
Embodiments of the present application provide a method for guiding the use of an air gesture, and an electronic device, which can display an air gesture identifier when it is identified that a user needs to use an air gesture, so as to remind the user to use it.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
In a first aspect, an embodiment of the present application provides a method for guiding the use of an air gesture. The method is applied to an electronic device including a display screen, a first camera, and a second camera, where the first camera and the second camera are located on different sides of the display screen, and the first camera and the display screen are located on the same side of the electronic device. The method includes: the electronic device displays a shooting preview interface, where the shooting preview interface includes at least one of a first image and a second image, the first image being an image captured by the first camera in real time and the second image being an image captured by the second camera in real time. If the time difference between a first time and the current time is greater than a first preset time, then in response to detecting that the electronic device is recording a video and is connected to a selfie stick, the electronic device displays a first popup on the shooting preview interface. The first popup includes a gesture identifier used to remind the user that the shooting mode can be switched with an air gesture; the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture.
With this method, when the electronic device is in the front, rear, front-and-rear, or picture-in-picture shooting mode, it can detect whether the time difference between the last time the user used an air gesture (i.e., the first time) and the current time is greater than the first preset time. If so, and the electronic device detects that it is recording a video while connected to a selfie stick, it considers that the user needs to use an air gesture, and therefore displays the first popup, whose gesture identifier reminds the user to switch the shooting mode with an air gesture. The method effectively reminds the user to use a new function (such as an air gesture) and improves the user's efficiency during shooting.
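The triggering condition described above can be summarized as a simple predicate. The following Python sketch is illustrative only: the function and parameter names are not from the patent, and the patent does not prescribe any particular implementation.

```python
def should_show_gesture_popup(last_gesture_switch_time: float,
                              current_time: float,
                              first_preset_time: float,
                              is_recording: bool,
                              selfie_stick_connected: bool) -> bool:
    """Decide whether to show the first popup with the gesture identifiers.

    last_gesture_switch_time is the "first time": when the device last
    switched shooting modes because the first camera detected an air gesture.
    """
    # The user is assumed to have forgotten the air gesture only if enough
    # time has passed since they last used it successfully.
    long_since_last_use = (current_time - last_gesture_switch_time) > first_preset_time
    # A connected selfie stick while recording suggests touch controls are
    # out of easy reach, so the air gesture reminder is useful.
    return long_since_last_use and is_recording and selfie_stick_connected
```

For example, with a preset time of 50 units, a user who last used a gesture 100 units ago and is recording on a selfie stick would see the popup.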
In one possible implementation, in response to detecting that the electronic device is recording a video and is connected to a selfie stick, displaying the first popup on the shooting preview interface includes: in response to receiving a first operation of the user, the electronic device starts recording a video; in response to detecting the selfie stick connection, the electronic device displays the first popup on the shooting preview interface.
That is, when the time difference between the first time and the current time is greater than the first preset time, the electronic device may first start recording the video and then display the first popup once the selfie stick is connected.
In one possible implementation, in response to detecting that the electronic device is recording a video and is connected to a selfie stick, displaying the first popup on the shooting preview interface includes: the electronic device connects to the selfie stick; in response to receiving the first operation of the user, the electronic device starts recording the video and displays the first popup on the shooting preview interface.
That is, when the time difference between the first time and the current time is greater than the first preset time, the electronic device may first connect to the selfie stick; the user can then control the electronic device to start recording through the selfie stick, an air gesture, or a control on the shooting preview interface, at which point the electronic device also displays the first popup on the shooting preview interface.
In one possible implementation, the shooting preview interface includes a first control, and the method further includes: in response to the user's operation on the first control, the electronic device displays a second popup on the shooting preview interface; the electronic device cyclically plays a plurality of guide videos in the second popup in a preset order, each of the guide videos showing how to use an air gesture.
In this way, the electronic device can guide the user to use air gestures during video recording, and can also demonstrate their usage through the second popup, guiding the user from multiple angles.
In one possible implementation, the method further includes: in response to the user sliding the second popup to the left, the electronic device displays, in the second popup, the previous guide video adjacent to the first video currently playing, and stops the cyclic playback; or, in response to the user sliding the second popup to the right, the electronic device displays, in the second popup, the next guide video adjacent to the first video, and stops the cyclic playback.
That is, the electronic device may automatically play the guide videos in a loop in the second popup, or the user may manually select the guide video they want to watch. After the user manually selects a guide video (i.e., slides the second popup left or right), the electronic device displays the selected guide video instead of continuing the loop.
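The loop-then-manual-selection behavior can be sketched as a small state machine. This is an illustrative Python sketch, not part of the patent; the swipe directions follow the text above (left shows the previous video, right shows the next), and all names are hypothetical.

```python
class GuideCarousel:
    """Cycles through guide videos in a preset order; a manual swipe
    selects a video and stops the automatic loop."""

    def __init__(self, videos):
        self.videos = videos
        self.index = 0
        self.auto_play = True  # loop automatically until the user swipes

    def tick(self):
        """Advance to the next video on the auto-play timer."""
        if self.auto_play:
            self.index = (self.index + 1) % len(self.videos)

    def swipe_left(self):
        """Show the previous guide video and stop the loop."""
        self.auto_play = False
        self.index = (self.index - 1) % len(self.videos)

    def swipe_right(self):
        """Show the next guide video and stop the loop."""
        self.auto_play = False
        self.index = (self.index + 1) % len(self.videos)

    @property
    def current(self):
        return self.videos[self.index]
```

After a swipe, `tick()` becomes a no-op, so the carousel stays on the user's selection.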
In one possible implementation, the second popup includes a first display area and a second display area. The first display area cyclically plays the plurality of guide videos, and the second display area cyclically displays a plurality of prompt messages that correspond one-to-one to the guide videos; each prompt message explains the function and usage of the air gesture shown by its corresponding guide video, and the guide video currently shown in the first display area corresponds to the prompt message currently shown in the second display area.
That is, the second popup can display not only the guide videos but also the prompt messages (which may include text and icons) corresponding to them, which further helps the user understand how to use air gestures.
In one possible implementation, the second popup includes a confirmation option, and the method further includes: in response to the user's operation on the confirmation option, the electronic device closes the second popup. That is, the second popup can be closed manually by the user; if the user does not tap the confirmation option, the electronic device may keep displaying it.
In one possible implementation, the method further includes: if the electronic device detects that the user has neither tapped the first control nor switched the shooting mode in response to the first camera detecting an air gesture, the electronic device displays the second popup on the shooting preview interface.
It can be seen that the electronic device can confirm whether the user has learned how to use air gestures by detecting whether the user has tapped the first control, and can confirm whether the user has used an air gesture by detecting whether the shooting mode has been switched in response to the first camera detecting one. When neither has occurred, the user is considered not to have learned or used air gestures, and the electronic device can automatically pop up the second popup to guide the user. Further, the electronic device may perform this detection when entering the multi-lens recording mode for the second time, and automatically pop up the second popup when the condition is met.
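The auto-popup decision just described reduces to a conjunction of three checks. A minimal Python sketch follows (illustrative only; the names and the second-entry threshold are taken from the text above, not from any prescribed implementation):

```python
def should_auto_show_tutorial(entry_count: int,
                              clicked_first_control: bool,
                              switched_mode_by_gesture: bool) -> bool:
    """On the second (or later) entry into multi-lens recording, auto-show
    the second popup if the user has neither viewed the guide (tapped the
    first control) nor ever switched modes with an air gesture."""
    return (entry_count >= 2
            and not clicked_first_control
            and not switched_mode_by_gesture)
```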
In one possible implementation, the shooting preview interface further includes a guidance prompt, set in a preset area near the first control, which instructs the user to tap the first control to view the guide videos. It can be understood that the electronic device can prompt the user to learn air gestures by displaying the guidance prompt. Further, the electronic device may display the guidance prompt when entering the multi-lens recording mode for the first time.
In one possible implementation, the shooting preview interface further includes a second control, and the method further includes: in response to detecting the user's operation on the second control, the electronic device displays a third popup on the shooting preview interface, where the third popup includes preview interfaces of multiple shooting modes; if the electronic device is recording a video and the time difference between the first time and the current time is greater than the first preset time, the electronic device displays the first popup on the shooting preview interface such that the first popup does not overlap the third popup.
In one possible implementation, the gesture identifier includes a first gesture identifier, a second gesture identifier, and a third gesture identifier: the first gesture identifier indicates a gesture of moving the hand to either side, the second indicates a gesture of flipping the palm, and the third indicates a gesture of clenching the fist.
In a second aspect, an embodiment of the present application provides a method for guiding the use of an air gesture. The method is applied to an electronic device including a display screen, a first camera, and a second camera, where the first camera and the second camera are located on different sides of the display screen, and the first camera and the display screen are located on the same side of the electronic device. The method includes: the electronic device displays a shooting preview interface, where the shooting preview interface includes at least one of a first image and a second image, the first image being an image captured by the first camera in real time and the second image being an image captured by the second camera in real time; in response to receiving a first operation of the user, the electronic device starts recording a video; if the time difference between a first time and the current time is greater than the first preset time, then in response to detecting, within a second preset time after recording starts, that the distance between the user captured by the first camera and the display screen is greater than or equal to a preset distance, the electronic device displays a first popup on the shooting preview interface. The first popup includes a gesture identifier, and the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture.
That is, when the time difference between the first time and the current time is greater than the first preset time, if the electronic device detects, within the second preset time after recording starts, that the distance between the user captured by the first camera and the display screen is greater than or equal to the preset distance, it considers that the user is likely not holding the electronic device in hand, and therefore displays the first popup, whose gesture identifier reminds the user to switch the shooting mode with an air gesture. The method effectively reminds the user to use a new function (such as an air gesture) and improves the user's efficiency during shooting.
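The distance-based trigger of this second aspect can likewise be expressed as a predicate. The sketch below is illustrative Python, not from the patent; all parameter names and the sample values in the usage note are hypothetical.

```python
def should_show_popup_by_distance(seconds_since_record_start: float,
                                  second_preset_time: float,
                                  user_distance_m: float,
                                  preset_distance_m: float,
                                  seconds_since_last_gesture: float,
                                  first_preset_time: float) -> bool:
    """Within the window after recording starts, a user standing at or
    beyond the preset distance from the screen likely cannot reach the
    touch controls, so the gesture-identifier popup is shown."""
    within_window = seconds_since_record_start <= second_preset_time
    far_away = user_distance_m >= preset_distance_m
    long_since_last_use = seconds_since_last_gesture > first_preset_time
    return within_window and far_away and long_since_last_use
```

For instance, 3 s into recording with a 5 s window, a user 2 m away against a 1.5 m preset distance who has not used a gesture for longer than the first preset time would trigger the popup.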
In one possible implementation, the shooting preview interface includes a first control, and the method further includes: in response to the user's operation on the first control, the electronic device displays a second popup on the shooting preview interface; the electronic device cyclically plays a plurality of guide videos in the second popup in a preset order, each of the guide videos showing how to use an air gesture.
In this way, the electronic device can guide the user to use air gestures during video recording, and can also demonstrate their usage through the second popup, guiding the user from multiple angles.
In one possible implementation, the method further includes: in response to the user sliding the second popup to the left, the electronic device displays, in the second popup, the previous guide video adjacent to the first video currently playing, and stops the cyclic playback; or, in response to the user sliding the second popup to the right, the electronic device displays, in the second popup, the next guide video adjacent to the first video, and stops the cyclic playback.
That is, the electronic device may automatically play the guide videos in a loop in the second popup, or the user may manually select the guide video they want to watch. After the user manually selects a guide video (i.e., slides the second popup left or right), the electronic device displays the selected guide video instead of continuing the loop.
In one possible implementation, the second popup includes a first display area and a second display area. The first display area cyclically plays the plurality of guide videos, and the second display area cyclically displays a plurality of prompt messages that correspond one-to-one to the guide videos; each prompt message explains the function and usage of the air gesture shown by its corresponding guide video, and the guide video currently shown in the first display area corresponds to the prompt message currently shown in the second display area.
That is, the second popup can display not only the guide videos but also the prompt messages (which may include text and icons) corresponding to them, which further helps the user understand how to use air gestures.
In one possible implementation, the second popup includes a confirmation option, and the method further includes: in response to the user's operation on the confirmation option, the electronic device closes the second popup. That is, the second popup can be closed manually by the user; if the user does not tap the confirmation option, the electronic device may keep displaying it.
In one possible implementation, the method further includes: if the electronic device detects that the user has neither tapped the first control nor switched the shooting mode in response to the first camera detecting an air gesture, the electronic device displays the second popup on the shooting preview interface.
It can be seen that the electronic device can confirm whether the user has learned how to use air gestures by detecting whether the user has tapped the first control, and can confirm whether the user has used an air gesture by detecting whether the shooting mode has been switched in response to the first camera detecting one. When neither has occurred, the user is considered not to have learned or used air gestures, and the electronic device can automatically pop up the second popup to guide the user. Further, the electronic device may perform this detection when entering the multi-lens recording mode for the second time, and automatically pop up the second popup when the condition is met.
In one possible implementation, the shooting preview interface further includes a guidance prompt, set in a preset area near the first control, which instructs the user to tap the first control to view the guide videos. It can be understood that the electronic device can prompt the user to learn air gestures by displaying the guidance prompt. Further, the electronic device may display the guidance prompt when entering the multi-lens recording mode for the first time.
In one possible implementation, the shooting preview interface further includes a second control, and the method further includes: in response to detecting the user's operation on the second control, the electronic device displays a third popup on the shooting preview interface, where the third popup includes preview interfaces of multiple shooting modes; if the electronic device is recording a video and the time difference between the first time and the current time is greater than the first preset time, the electronic device displays the first popup on the shooting preview interface such that the first popup does not overlap the third popup.
In one possible implementation, the gesture identifier includes a first gesture identifier, a second gesture identifier, and a third gesture identifier: the first gesture identifier indicates a gesture of moving the hand to either side, the second indicates a gesture of flipping the palm, and the third indicates a gesture of clenching the fist.
In a third aspect, an embodiment of the present application provides a method for guiding the use of an air gesture. The method is applied to an electronic device including a display screen, a first camera, and a second camera, where the first camera and the second camera are located on different sides of the display screen, and the first camera and the display screen are located on the same side of the electronic device. The method includes: the electronic device displays a shooting preview interface, where the shooting preview interface includes at least one of a first image and a second image, the first image being an image captured by the first camera in real time and the second image being an image captured by the second camera in real time; in response to receiving a first operation of the user, the electronic device starts recording a video; if the time difference between a first time and the current time is greater than the first preset time, then in response to detecting that the first image proportion decreases within a second preset time after recording starts, the electronic device displays a first popup on the shooting preview interface. The first popup includes a gesture identifier; the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture, and the first image proportion is the area proportion of the face region within the first image.
That is, when the time difference between the first time and the current time is greater than the first preset time, if the electronic device detects that the area proportion of the face region in the first image decreases within the second preset time after recording starts, it considers that the user is likely no longer holding the electronic device in hand, and therefore displays the first popup, whose gesture identifier reminds the user to switch the shooting mode with an air gesture. The method effectively reminds the user to use a new function (such as an air gesture) and improves the user's efficiency during shooting.
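The face-proportion signal of this third aspect can be illustrated with two small helpers. This Python sketch is not from the patent; the function names are hypothetical, and the patent only requires that the proportion has decreased, not any particular threshold.

```python
def face_area_ratio(face_w: int, face_h: int, img_w: int, img_h: int) -> float:
    """Area proportion of the face region within the front-camera image,
    here approximated from a rectangular face bounding box."""
    return (face_w * face_h) / (img_w * img_h)

def proportion_reduced(ratio_at_start: float, ratio_now: float) -> bool:
    """True when the face region's share of the first image has shrunk,
    suggesting the user moved away from the device after recording began."""
    return ratio_now < ratio_at_start
```

A face box shrinking from 200x200 to 100x100 pixels in a 1000x1000 image, for example, drops the ratio from 0.04 to 0.01 and would satisfy the trigger.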
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a display screen, a first camera, a second camera, and a processor, where the processor is coupled to a memory, and the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the electronic device implements the method according to the first aspect, the second aspect, or the third aspect, and any possible design manner thereof.
In a fifth aspect, a computer-readable storage medium includes computer instructions; when the computer instructions are run on the electronic device, the electronic device performs the method according to the first, second, or third aspect, or any possible design thereof.
In a sixth aspect, the present application provides a system-on-chip that includes one or more interface circuits and one or more processors. The interface circuit and the processor are interconnected by wires.
The above chip system may be applied to an electronic device including a communication module and a memory. The interface circuit is configured to receive signals from a memory of the electronic device and to transmit the received signals to the processor, the signals including computer instructions stored in the memory. When executed by a processor, the computer instructions may cause an electronic device to perform the method according to the first, second or third aspect, or any possible design thereof.
In a seventh aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method according to the first, second or third aspect and any possible design thereof.
Drawings
Fig. 1A is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure;
fig. 1B is a software architecture diagram of an electronic device according to an embodiment of the present application;
FIGS. 2A-2B are schematic diagrams of a set of interfaces provided by embodiments of the present application;
FIGS. 3A-3B are schematic diagrams of a set of interfaces provided by embodiments of the present application;
FIG. 4 is a schematic diagram of a set of interfaces provided by an embodiment of the present application;
FIGS. 5A-5C are schematic diagrams of a set of interfaces provided by embodiments of the present application;
FIGS. 6A-6F are a set of schematic diagrams of interfaces provided by embodiments of the present application;
FIGS. 7A-7D are a set of schematic diagrams of interfaces provided by embodiments of the present application;
FIGS. 8A-8C are a set of schematic diagrams of interfaces provided by embodiments of the present application;
FIGS. 9A-9C are a set of schematic diagrams of interfaces provided by embodiments of the present application;
FIG. 10 is a schematic diagram of a set of interfaces provided by an embodiment of the present application;
FIGS. 11A-11B are schematic diagrams of a set of interfaces provided by embodiments of the present application;
FIG. 12 is a schematic diagram of a set of interfaces provided by an embodiment of the present application;
FIGS. 13A-13B are schematic diagrams of a set of interfaces provided by embodiments of the present application;
fig. 14 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
For clarity and conciseness in the following description of the embodiments, and to facilitate understanding by those skilled in the art, related concepts and technologies are briefly introduced first.
The shooting preview interface refers to an interface displayed before or during shooting of the electronic equipment and can be used for displaying images acquired by the camera in real time. In addition, the shooting preview interface can also display a plurality of controls, and the plurality of controls can comprise a flash light control for turning on/off a flash light, a beauty control for turning on/off a beauty function, a shutter control for shooting and the like.
The single-lens shooting mode refers to a mode in which the electronic device shoots through only one camera. In the single-lens shooting mode, the electronic device displays only the image captured by that camera in the shooting preview interface. Single-lens shooting may include a front shooting mode, a rear shooting mode, and the like.
Specifically, the front shooting mode refers to a mode in which the electronic device shoots through the front camera. When the electronic device is in the front shooting mode, the image captured by the front camera in real time can be displayed on the shooting preview interface.
The rear shooting mode refers to a mode in which the electronic device shoots through a rear camera. When the electronic device is in the rear shooting mode, the image captured by the rear camera in real time can be displayed on the shooting preview interface.
The multi-lens shooting mode refers to a mode in which the electronic device can shoot through multiple cameras. In the multi-lens shooting mode, the display screen simultaneously shows, in the shooting preview interface, the images captured by the cameras; images from different cameras can be displayed spliced together or picture-in-picture. Depending on which cameras are used and how their images are displayed, the multi-lens shooting mode may include a front-and-rear shooting mode, a rear-and-rear shooting mode, a picture-in-picture shooting mode, and the like. In the embodiments of the present application, multi-lens shooting may also be referred to as multi-lens video recording.
The front-rear shooting mode refers to a mode in which the electronic device can shoot through the front camera and the rear camera simultaneously. When the electronic device is in the front-rear shooting mode, the images shot by the front camera and the rear camera (for example, a first image and a second image) can be displayed simultaneously in the shooting preview interface, spliced together. When the electronic device is held vertically, the first image and the second image can be spliced top and bottom; when the electronic device is held horizontally, they can be spliced left and right. By default, the display area of the first image matches that of the second image.
The rear-rear shooting mode refers to a mode in which the electronic device can shoot simultaneously through two rear cameras (if a plurality of rear cameras exist). When the electronic device is in the rear-rear shooting mode, the images shot by the two rear cameras (for example, a first image and a second image) can be displayed simultaneously in the shooting preview interface, spliced together. When the electronic device is held vertically, the first image and the second image can be spliced top and bottom; when the electronic device is held horizontally, they can be spliced left and right.
The picture-in-picture shooting mode refers to a mode in which the electronic device can shoot through two cameras simultaneously. When the electronic device is in the picture-in-picture shooting mode, the images shot by the two cameras (for example, a first image and a second image) can be displayed simultaneously in the shooting preview interface. The second image is displayed across the whole area of the shooting preview interface, the first image is superimposed on the second image, and the display area of the first image is smaller than that of the second image. By default, the first image may be located at the lower left of the second image. The two cameras can be combined freely; for example, they can be two front cameras, two rear cameras, or one front camera and one rear camera.
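The two display manners described above (splicing by device orientation, and picture-in-picture) can be summarized as a layout computation. The following is a minimal sketch, not taken from the patent: the screen dimensions, the overlay scale, and the margin of the default lower-left picture-in-picture position are illustrative assumptions.

```python
def preview_layout(mode, screen_w, screen_h):
    """Return (first_rect, second_rect) as (x, y, w, h) tuples."""
    portrait = screen_h >= screen_w
    if mode == "splice":
        if portrait:  # device held vertically: splice top and bottom
            half = screen_h // 2
            return (0, 0, screen_w, half), (0, half, screen_w, screen_h - half)
        else:         # device held horizontally: splice left and right
            half = screen_w // 2
            return (0, 0, half, screen_h), (half, 0, screen_w - half, screen_h)
    elif mode == "pip":
        # second image fills the whole preview; first image is a smaller
        # overlay at the lower left (the default position described above)
        pip_w, pip_h = screen_w // 3, screen_h // 3   # assumed overlay scale
        margin = 16                                   # assumed margin
        first = (margin, screen_h - pip_h - margin, pip_w, pip_h)
        second = (0, 0, screen_w, screen_h)
        return first, second
    raise ValueError(f"unknown mode: {mode}")
```

For a portrait screen, `preview_layout("splice", ...)` yields two equal top/bottom rectangles, matching the default equal display areas of the front-rear mode; `"pip"` yields a full-screen second rectangle with a smaller first rectangle anchored at the lower left.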
It should be noted that "single-lens shooting", "multi-mirror shooting", "front shooting mode", "rear shooting mode", "front-rear shooting mode", "picture-in-picture shooting mode", and "rear-rear shooting mode" above are merely names used in the embodiments of the present application; their meanings are described in these embodiments, and the names themselves do not limit the embodiments in any way.
The present application provides a method for guiding the use of air gestures, applied to an electronic device comprising a display screen, a first camera, and a second camera, wherein the first camera and the second camera are located on different sides of the display screen, and the first camera and the display screen are located on the same side of the electronic device. The electronic device can display a shooting preview interface comprising at least one of a first image and a second image, the first image being an image acquired by the first camera in real time, and the second image being an image acquired by the second camera in real time. If the time difference between a first time and the current time is greater than a first preset time, the electronic device determines that the user needs to use an air gesture, and pops up a guidance popup window for air gestures (which may be called a first popup window); the guidance popup window includes a gesture identifier showing how to use the air gesture. Here the electronic device is in the recording state of the multi-mirror video recording mode, and the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture (which can also be understood as the time the user last used an air gesture). With this method, the user can be reminded in time that the shooting mode of the electronic device can be switched with an air gesture, which can effectively improve human-computer interaction efficiency and user experience.
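The trigger condition above can be sketched as a small decision function. This is an illustrative assumption, not the patent's implementation: the threshold value is hypothetical, and the "never used" case is treated as overdue.

```python
FIRST_PRESET_TIME = 60.0  # hypothetical threshold, in seconds

def should_show_guidance(now, last_gesture_time, recording, multi_mirror):
    """Return True if the air-gesture guidance popup should be shown.

    `last_gesture_time` is the first time: when the device last switched
    shooting mode in response to a detected air gesture (None if never).
    """
    # The rule applies only while recording in multi-mirror video mode.
    if not (recording and multi_mirror):
        return False
    if last_gesture_time is None:  # gesture never used: treat as overdue
        return True
    # Show the popup when the last use is older than the preset time.
    return (now - last_gesture_time) > FIRST_PRESET_TIME
```

For example, with the hypothetical 60-second threshold, a gesture last used 100 seconds ago triggers the popup, while one used 50 seconds ago does not.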
The electronic device may be a device with a single display screen or a folding-screen device. When the electronic device is a single-screen mobile phone, the first camera and the second camera can be the front camera and the rear camera of the electronic device, respectively. When the electronic device is a folding-screen mobile phone, the display screen can comprise a first screen and a second screen rotatably connected to each other; the first camera and the second camera can be located on the two sides of the first screen, with the first camera and the first screen on the same side of the electronic device. Alternatively, the first camera is located on the first screen and the second camera on the second screen, with each camera on the same side of the electronic device as its respective screen. In the embodiments of the present application, the electronic device is explained taking a device comprising one display screen as an example.
There are a variety of scenarios in which a user may need to use an air gesture, which will be described in detail below in conjunction with the accompanying figures. In addition, "air gesture" is merely a name used in the embodiments of the present application; it may also be referred to as a hover gesture or the like, and specifically refers to a gesture input without touching the electronic device. Its meaning is described in the embodiments of the present application, and the name does not limit the embodiments in any way.
In order to more clearly and specifically describe the shooting method provided by the embodiment of the present application, the following first describes an electronic device according to the embodiment of the present application.
The electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) device, a Virtual Reality (VR) device, an Artificial Intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device, and the specific type of the electronic device is not particularly limited by the embodiments of the present application.
Referring to fig. 1A, fig. 1A illustrates a hardware structure diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 1A, the electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, a button 290, a motor 291, an indicator 292, a plurality of cameras 293, a display screen 294, a Subscriber Identity Module (SIM) card interface 295, and the like.
The sensor module 280 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the electronic apparatus 200. In other embodiments, electronic device 200 may include more or fewer components than illustrated, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be a neural center and a command center of the electronic device 200. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 210 for storing instructions and data. In some embodiments, the memory in processor 210 is a cache memory. The memory may hold instructions or data that have just been used or recycled by processor 210. If the processor 210 needs to use the instruction or data again, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 210, thereby increasing the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc. It should be understood that the connection relationship between the modules illustrated in the present embodiment is only an exemplary illustration, and does not limit the structure of the electronic device 200. In other embodiments, the electronic device 200 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
In the embodiments of the present application, the processor 210 may receive a plurality of consecutive images, captured by the camera 293, corresponding to an air gesture input by the user, such as a "palm" gesture. The processor 210 can then comparatively analyze the consecutive images, determine that the air gesture corresponding to them is "palm", determine the operation corresponding to that gesture (for example, starting recording), and control the camera application to perform the corresponding operation. The corresponding operations may include, for example: calling a plurality of cameras to shoot images simultaneously, synthesizing the images shot by the cameras through the GPU in a spliced or picture-in-picture (partially superimposed) manner, and displaying the synthesized image in the shooting preview interface of the electronic device by calling the display screen 294.
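The recognize-then-dispatch step described above can be sketched as follows. This is an illustrative assumption only: the per-frame classifier is a placeholder, and the gesture names and action mapping are hypothetical.

```python
# Hypothetical mapping from a recognized air gesture to a camera operation.
GESTURE_ACTIONS = {
    "palm": "start_recording",
    "fist": "switch_shooting_mode",
}

def recognize_gesture(frames, classify):
    """Return the gesture only if every consecutive frame agrees.

    `classify` stands in for the per-frame analysis performed by the
    processor; requiring agreement across frames mirrors the comparative
    analysis of consecutive images described above.
    """
    labels = [classify(f) for f in frames]
    if labels and all(label == labels[0] for label in labels):
        return labels[0]
    return None  # ambiguous or empty frame sequence: no gesture recognized

def dispatch(frames, classify):
    """Map the recognized gesture to its operation, or None."""
    return GESTURE_ACTIONS.get(recognize_gesture(frames, classify))
```

A sequence of frames that all classify as "palm" dispatches to `start_recording`; a mixed sequence dispatches to nothing.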
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the electronic device 200. The external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 221 may be used to store computer-executable program code, including instructions. The processor 210 executes various functional applications of the electronic device 200 and data processing by executing instructions stored in the internal memory 221. For example, in the present embodiment, the processor 210 may execute instructions stored in the internal memory 221, and the internal memory 221 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, phone book, etc.) created during use of the electronic device 200, and the like. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
In the embodiment of the present application, the internal memory 221 may store picture files or recorded video files that are taken by the electronic device in different shooting modes.
The charge management module 240 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. The charging management module 240 may also supply power to the terminal device through the power management module 241 while charging the battery 242.
The power management module 241 is used to connect the battery 242, the charging management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charging management module 240, and provides power to the processor 210, the internal memory 221, the external memory, the display 294, the camera 293, and the wireless communication module 260. In some embodiments, the power management module 241 and the charging management module 240 may also be disposed in the same device.
The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like. In some embodiments, antenna 1 of electronic device 200 is coupled to mobile communication module 250 and antenna 2 is coupled to wireless communication module 260, such that electronic device 200 may communicate with networks and other devices via wireless communication techniques.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the electronic device 200. The mobile communication module 250 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 250 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation.
The mobile communication module 250 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the same device as at least some of the modules of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication applied to the electronic device 200, including WLAN (e.g., wireless fidelity, wi-Fi) network, bluetooth (BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like.
The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency-modulate and amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
The electronic device 200 implements display functions via the GPU, the display screen 294, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 294 is used to display images, video, and the like. The display screen 294 includes a display panel.
The electronic device 200 may implement a shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294, and the application processor. The ISP is used to process the data fed back by the camera 293. The camera 293 is used to capture still images or video. In some embodiments, electronic device 200 may include N cameras 293, N being a positive integer greater than 2.
Electronic device 200 may implement audio functions via audio module 270, speaker 270A, receiver 270B, microphone 270C, headphone interface 270D, and an application processor, among others. Such as music playing, recording, etc.
The keys 290 include a power-on key, a volume key, and the like. The keys 290 may be mechanical keys. Or may be touch keys. The motor 291 may generate a vibration cue. The motor 291 can be used for both incoming call vibration prompting and touch vibration feedback. Indicator 292 may be an indicator light that may be used to indicate a state of charge, a change in charge, or may be used to indicate a message, missed call, notification, etc.
The plurality of cameras 293 are used to capture images. In the embodiments of the present application, the number of cameras 293 may be M, where M ≥ 2 and M is a positive integer. The number of cameras enabled for multi-mirror shooting on the electronic device may be N, where 2 ≤ N ≤ M and N is a positive integer.
In the embodiments of the present application, the types of the cameras 293 may be distinguished by hardware configuration and physical location. For example, the plurality of cameras included in the cameras 293 may be disposed on the front and back of the electronic device: a camera disposed on the same side as the display screen 294 may be called a front camera, and a camera disposed on the side of the rear cover may be called a rear camera. As another example, the cameras 293 may differ in focal length and angle of view: a camera with a shorter focal length and a larger angle of view may be called a wide-angle camera, and a camera with a longer focal length and a smaller angle of view may be called a normal camera. The content shot by different cameras differs accordingly: the front camera shoots scenery facing the front of the electronic device, while the rear camera shoots scenery facing the back; the wide-angle camera can capture a larger area of scenery within a shorter shooting distance, and at the same shooting distance a scene occupies a smaller portion of the picture than it would through a normal lens. Focal length and angle of view are relative concepts without fixed parameter limits, so "wide-angle camera" and "normal camera" are also relative, distinguished according to physical parameters such as focal length and angle of view.
Specifically, in this embodiment, the cameras 293 include at least one camera with a time-of-flight (TOF) 3D sensing module or a structured-light 3D sensing module. That camera acquires 3D data of objects in the captured image, so that the processor 210 can identify the operation instruction corresponding to the user's air gesture according to the 3D data.
The camera that acquires the 3D data of the object may be an independent low-power camera, or an ordinary front or rear camera that supports a low-power mode. When the low-power camera works, or when an ordinary front or rear camera works in low-power mode, the camera's frame rate is lower than that of an ordinary camera working in non-low-power mode, and the output images are in black-and-white format. An ordinary camera can output 30, 60, 90, or 240 frames of images per second; by contrast, when the low-power camera (or an ordinary front or rear camera running in low-power mode) is monitoring, it may output, for example, 2.5 frames per second, and once it captures a first image representing an air gesture, it can switch to outputting 10 frames per second, so as to accurately identify the operation instruction corresponding to the air gesture through recognition over consecutive images. Furthermore, the pixels of images captured by the low-power camera are lower than those captured by an ordinary camera. Working in low-power mode thus reduces power consumption compared with an ordinary camera.
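The frame-rate policy described above can be sketched as a small state machine. The 2.5 and 10 fps values come from the example above; the per-frame gesture pre-detector is a placeholder assumption.

```python
IDLE_FPS = 2.5        # low-power monitoring rate, from the example above
RECOGNITION_FPS = 10  # rate used while confirming the gesture

class LowPowerCamera:
    """Sketch of a camera that raises its frame rate on a candidate gesture."""

    def __init__(self, looks_like_gesture):
        # `looks_like_gesture` stands in for the cheap per-frame check
        # that spots a first image representing an air gesture.
        self.fps = IDLE_FPS
        self._looks_like_gesture = looks_like_gesture

    def on_frame(self, frame):
        # Switch up as soon as one frame resembles a gesture, so the
        # gesture can then be confirmed from several consecutive frames.
        if self.fps == IDLE_FPS and self._looks_like_gesture(frame):
            self.fps = RECOGNITION_FPS
```

The camera stays at 2.5 fps while frames show no candidate gesture and jumps to 10 fps on the first frame that does, matching the monitoring/recognition split described above.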
The display screen 294 is used to display images, video, and the like. In some embodiments, the electronic device may include 1 or N display screens 294, N being a positive integer greater than 1. In the present embodiment, the display screen 294 can be used to display images from any one or more of the cameras 293, such as displaying a plurality of frames from one camera 293 in a preview interface, or displaying a plurality of frames from one camera 293 in a saved video file, or displaying a photo from one camera 293 in a saved picture file.
The SIM card interface 295 is used to connect a SIM card. The SIM card can be attached to and detached from the electronic apparatus 200 by being inserted into the SIM card interface 295 or being pulled out from the SIM card interface 295. The electronic device 200 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 295 may support a Nano SIM card, a Micro SIM card, a SIM card, etc.
Fig. 1B is a block diagram of a software structure of an electronic device according to an embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 1B, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 1B, the application framework layers may include a windows manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The telephone manager is used for providing a communication function of the electronic equipment. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is a function which needs to be called by java language, and the other part is a core library of android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide a fusion of the 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes an exemplary workflow of software and hardware of the electronic device during shooting in conjunction with capturing a shooting scene.
When the touch sensor receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). Raw input events are stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a tap whose corresponding control is the camera application icon as an example: the camera application calls an interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer and captures a still image or video through the camera 293. In this embodiment, the touch operation received by the touch sensor may be replaced by the camera 293 capturing an air gesture input by the user. Specifically, when the camera 293 captures an air-gesture operation, a corresponding hardware interrupt is sent to the kernel layer, which processes the air-gesture operation into a raw input event (including information such as the image of the air gesture and the timestamp of the operation). Raw input events are stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the operation corresponding to it. Taking the air-gesture operation as a switch of the shooting mode as an example, the camera application calls the interface of the application framework layer and then starts another camera driver by calling the kernel layer, so as to switch to another camera 293 to capture still images or videos.
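The two paths through this workflow (touch and air gesture) share the same shape: the kernel layer packages a hardware report into a raw input event, and the framework layer resolves it to an operation. The following sketch illustrates that shape only; the event fields and operation names are assumptions, not Android's actual APIs.

```python
import time

def make_raw_event(source, payload):
    """Kernel layer: wrap a hardware report into a raw input event."""
    return {"source": source, "payload": payload, "timestamp": time.time()}

def resolve_event(event):
    """Framework layer: map a raw input event to an application operation."""
    if event["source"] == "touch":
        return "launch_camera_app"     # e.g. a tap on the camera icon
    if event["source"] == "air_gesture":
        return "switch_shooting_mode"  # e.g. a recognized air gesture
    return None                        # unhandled event source
```

In this sketch, replacing the touch operation with an air gesture amounts to changing the event's source while the rest of the pipeline stays the same, which is the substitution the paragraph above describes.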
Next, the specific contents of the guided use method of the blank gesture provided in the present application will be described with reference to the accompanying drawings.
The electronic device can pop up a guidance popup window for air gestures when entering multi-mirror video recording for the first time, showing the user how to use air gestures. The electronic device can also pop up the guidance popup window when it detects that the time difference between the user's last use of an air gesture and the current time is greater than the first preset time and that the user needs to use an air gesture. The processes by which the electronic device pops up the guidance popup window in these two scenarios are described below with reference to the drawings.
Referring to fig. 2A-2B, a process of the electronic device entering multi-mirror recording for the first time is shown.
As shown in fig. 2A, the mobile phone may display a home interface 301. The home interface 301 may include an icon 302 of a camera application. The mobile phone may receive an operation of the user clicking the icon 302, and in response, as shown in fig. 2B, may open the camera application and display its shooting preview interface 303. It can be understood that the camera application is an image-shooting application on an electronic device such as a smartphone or tablet computer; it may be a system application or a third-party application, and the present application does not limit its name. That is, the user may click the icon 302 of the camera application to open its shooting preview interface 303. Without being limited thereto, the user may also invoke the camera application to open the shooting preview interface 303 from other applications, for example by clicking a shooting control in a social application. A social application can support the user in sharing shot pictures or videos with others.
It should be noted that the shooting preview interface 303 may be the user interface of the camera application's default shooting mode, for example the interface provided when the camera application is in the front shooting mode. The default shooting mode may also be another mode, such as the rear shooting mode or the front-rear shooting mode. Alternatively, the camera application may have a memory function, and the shooting preview interface 303 may be the user interface of the shooting mode the camera application was in when it last exited.
Fig. 2B illustrates the shooting preview interface 303 corresponding to the front shooting mode. As shown in fig. 2B, the shooting preview interface 303 may include a preview image 304, a shooting mode option 305, a flash control, a shutter control, and the like. The preview image 304 is an image captured by the front camera among the cameras 293. It should be noted that the electronic device may refresh the image displayed on the shooting preview interface 303 (i.e., the preview image 304) in real time, so that the user can preview the image currently captured by the camera 293. The shooting mode option 305 provides multiple shooting modes for the user to select, which may include: photo 305a, video 305b, multi-mirror video 305c, real-time blurring, panorama, and the like. The electronic device may receive an operation of the user sliding the shooting mode option 305 left or right, and in response may turn on the shooting mode selected by the user. Not limited to the illustration in fig. 2B, the shooting mode option 305 may display more or fewer options than shown.
The shooting mode corresponding to photo 305a is ordinary single-lens shooting, which may include the front shooting mode, the rear shooting mode, and the like. That is, when photo 305a is selected, the electronic device can shoot through the front camera or the rear camera. For detailed descriptions of the front and rear shooting modes, refer to the foregoing; details are not repeated here.
The shooting modes corresponding to multi-mirror video 305c may include multi-lens shooting and single-lens shooting. That is, when multi-mirror video 305c is selected, the electronic device may shoot with a single camera or with multiple cameras. For a detailed description of multi-lens shooting, refer to the foregoing; details are not repeated here.
As shown in fig. 2B, photo 305a is in the selected state; that is, the electronic device is currently in the photo mode. If the user wishes to start the multi-mirror video mode, the user may slide the shooting mode option 305 to the left and select multi-mirror video 305c. Upon detecting this operation, the electronic device may turn on the multi-mirror video mode and display the shooting preview interface 401 shown in fig. 3A.
As shown in fig. 3A, after entering the multi-mirror video mode, the electronic device may display a shooting preview interface 401. The shooting preview interface 401 includes an image 401a (also referred to as a second image), an image 401b (also referred to as a first image), an air swap-lens control 402, a tutorial guidance control 403, prompt information 404, a setting control 405, and a shooting mode switching control 406. The image 401a is captured by the rear camera in real time and the image 401b by the front camera in real time; because the electronic device is held vertically, the image 401a and the image 401b are spliced vertically. Specifically, the first image is located in a first area of the shooting preview interface, the second image in a second area, and the two areas do not overlap. For example, the first area may be the area of image 401b in fig. 3A, and the second area the area of image 401a.
It should be noted that in the present application, when the electronic device enters the multi-mirror video mode for the first time, the front camera and the rear camera are turned on by default, and the display mode of the images defaults to the spliced mode. In other embodiments, however, the default cameras may be two rear cameras, one front camera, one rear camera, or the like, and the display mode is not limited to the spliced mode and may be a picture-in-picture mode or the like; no specific limitation is imposed here. Alternatively, the camera application may have a memory function: after entering the multi-mirror video mode, the electronic device may turn on the cameras that were operating when the camera application last exited this mode, and display their images in the last-used display mode.
The air swap-lens control 402 allows the user to quickly turn the air swap-lens function on or off. After this function is turned on, the user can control the electronic device through air gestures. It should be noted that after the electronic device enters the multi-mirror video mode, the air swap-lens function may be turned on by default; accordingly, the air swap-lens control 402 is in an on state (also referred to as a first state) indicating that the function is on. Of course, the function can also be turned off: for example, in response to detecting a touch operation on the air swap-lens control 402, the electronic device may turn the function off, and the control 402 is then in an off state (also referred to as a second state) indicating that the function has been turned off.
The tutorial guidance control 403 (also referred to as a first control) can be used to guide the user in learning the air gestures of the air swap-lens function, such as the gesture for starting recording, the gesture for switching between dual lens and single lens, the gesture for opening/closing picture-in-picture, the gesture for swapping the front and rear lenses, and the gesture for ending recording. It should be noted that the tutorial guidance control 403 is associated with the air swap-lens function: when the function is on (i.e., the air swap-lens control 402 is in the on state), the shooting preview interface 401 may display the tutorial guidance control 403; when the function is off (the control 402 is in the off state), the interface 401 may hide (i.e., not display) the tutorial guidance control 403. It should also be noted that after the electronic device starts recording, the tutorial guidance control 403 may likewise be hidden.
The prompt information 404 (also referred to as a guidance prompt) is placed at a preset position relative to the tutorial guidance control 403 (e.g., to its left and pointing at it) and reminds the user to click the tutorial guidance control 403 to view the tutorial for air swap-lens gestures. For example, the prompt 404 may read "Tap to view air swap-lens gestures".
The setting control 405 can be used to adjust shooting parameters (such as resolution and picture aspect ratio) and to turn certain shooting modes on or off (such as air swap-lens, timed shooting, smile capture, and voice-controlled shooting). Specifically, the electronic device may receive an operation of the user clicking the setting control 405, and in response display the setting interface 405a shown in fig. 3B. The setting interface 405a may include options such as air swap-lens 405b, picture aspect ratio, and voice-controlled shooting. The air swap-lens option 405b can turn the air swap-lens function on or off. Note that the on/off state of the air swap-lens option 405b in the setting interface 405a and that of the air swap-lens control 402 in the shooting preview interface 401 are linked and kept identical: if the option 405b is switched from on to off (or off to on), the control 402 switches accordingly, and vice versa.
The shooting mode switching control 406 may provide multiple shooting modes for the user to select, such as a front-rear shooting mode, a rear-rear shooting mode, a picture-in-picture shooting mode, a front shooting mode, and a rear shooting mode.
It should be noted that the process of the electronic device entering multi-mirror video recording again may be identical to the process shown in fig. 2A-2B, except that after the electronic device has previously entered multi-mirror video recording, the prompt information 404 is no longer displayed on the shooting preview interface.
If the user wishes to view the tutorial guidance for air gestures, the user may click the tutorial guidance control 403. As shown in fig. 4, the electronic device may receive this click, and in response display a guidance popup 407 (also referred to as a second popup) as shown in fig. 5A. The guidance popup 407 shows how to use the air gestures, together with auxiliary explanations. Specifically, the guidance popup 407 may include an animation display area 407a (also referred to as a first display area), a text prompt area 407b (also referred to as a second display area), and a confirmation option 407c. The animation display area 407a plays several guidance videos showing how each air gesture is used and its effect. In one design, the area 407a presents these in animated form, which is vivid, easy to understand, and helps the user quickly learn the air gestures. The content displayed in the text prompt area 407b matches that in the animation display area 407a and explains the function of the air gesture currently being shown and how to use it.
As shown in fig. 5A to 5C, the animation display area 407a may display a guidance animation for starting recording with an air gesture.
Specifically, as shown in fig. 5A, in the animation display area 407a the electronic device is in the front-rear shooting mode and displays an image 409a captured by the rear camera and an image 409b captured by the front camera on the shooting preview interface 408. Upon recognizing the "raise hand" gesture, the electronic device may display the air swap-lens icon 410. As shown in fig. 5B, while the electronic device continuously detects that the "raise hand" gesture is held, the time progress bar of the icon 410 fills gradually, and after a first preset time (e.g., 2 seconds) the shooting preview interface 408 shown in fig. 5C is displayed. Compared with fig. 5A, the interface in fig. 5C is the shooting preview interface after the electronic device has entered the recording state.
Correspondingly, the text prompt area 407b may display "Start recording with an air gesture: raise your hand, and hold for 2 seconds after the air swap-lens icon 410 (shown in the figure) appears", to help the user understand how to start recording with an air gesture.
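The hold-to-trigger behavior described above can be sketched as a small per-frame timer; this is only an illustration with hypothetical names (the patent does not disclose an implementation), but it shows how releasing the gesture resets the progress bar and how recording starts once the bar fills.

```python
HOLD_DURATION_S = 2.0  # example first preset time from the description

class RaiseHandTrigger:
    """Starts recording after the 'raise hand' gesture is held continuously."""

    def __init__(self, hold_duration=HOLD_DURATION_S):
        self.hold_duration = hold_duration
        self.held_for = 0.0
        self.recording = False

    def on_frame(self, raise_hand_detected, dt):
        """Called once per camera frame; dt is seconds since the last frame.
        Returns the fill ratio of the icon's time progress bar (0.0-1.0)."""
        if not raise_hand_detected:
            self.held_for = 0.0          # releasing the gesture resets the bar
            return 0.0
        self.held_for += dt
        if self.held_for >= self.hold_duration and not self.recording:
            self.recording = True        # start recording once the bar fills
        return min(self.held_for / self.hold_duration, 1.0)
```

A usage example: calling `on_frame(True, 1.0)` twice fills the bar and flips `recording` to `True`; any frame without the gesture resets the progress.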
As shown in fig. 6A-6F, the animation display area 407a may also display a guidance animation for switching between dual lens and single lens with an air gesture.
Fig. 6A-6C show the guidance animation of the electronic device switching from dual lens to single lens.
As shown in fig. 6A, in the animation display area 407a the electronic device is in the front-rear shooting mode and displays the image 409a captured by the rear camera and the image 409b captured by the front camera on the shooting preview interface 408, with the image 409a on the left side of the interface and the image 409b on the right. Still as shown in fig. 6A, the electronic device may display the air swap-lens icon 410 upon recognizing the "raise hand" gesture. As shown in fig. 6B, after detecting the "palm sliding from right to left" gesture, the electronic device may switch from the front-rear shooting mode to the front shooting mode and display the shooting preview interface 408 shown in fig. 6C, which contains the image 409b. During the switch, the image 409a follows the gesture, moving from right to left with a gradually shrinking display area until it disappears from the interface, while the image 409b moves from right to left with a gradually growing display area until it fills the entire shooting preview interface 408.
Fig. 6D-6F show the guidance animation of the electronic device switching from single lens to dual lens.
As shown in fig. 6D, in the front shooting mode the electronic device may display the image 409b captured by the front camera in the shooting preview interface 408, and may display the air swap-lens icon 410 upon recognizing the "raise hand" gesture. As shown in fig. 6E, upon detecting the "palm sliding from left to right" gesture, the electronic device may switch from the front shooting mode to the front-rear shooting mode and display the shooting preview interface 408 shown in fig. 6F, where the image 409a captured by the rear camera and the image 409b captured by the front camera are displayed side by side. Following the left-to-right gesture, the image 409a enters with a gradually growing display area and is finally displayed on the left side of the interface, while the image 409b moves from left to right with a gradually shrinking display area and is finally displayed on the right side.
Correspondingly, in fig. 6A-6F the text prompt area 407b may display "Switch between dual lens and single lens with an air gesture: raise your hand, and after the air swap-lens icon 410 appears, slide left to push away the left picture or slide right to push away the right picture (when the device is held vertically, sliding pushes away the corresponding picture)", to help the user understand how to switch between single lens and dual lens with an air gesture.
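The swipe-to-switch behavior above can be summarized as a small mode-transition function. This is a sketch with hypothetical mode names, not the actual implementation; the single-to-dual transition for either swipe direction is an assumption, since the text only shows a left-to-right swipe from the front single-lens mode.

```python
def next_mode(current_mode, swipe):
    """Maps an air swipe to the resulting shooting mode.

    In dual-lens mode the rear image (409a) sits on the left and the front
    image (409b) on the right, so sliding left pushes away the left (rear)
    picture and sliding right pushes away the right (front) picture."""
    if current_mode == "front_rear":
        return {"left": "front", "right": "rear"}.get(swipe, current_mode)
    if current_mode in ("front", "rear"):
        # Assumption: either swipe restores the dual-lens mode.
        return "front_rear"
    return current_mode
```

For example, a right-to-left swipe in the front-rear mode yields the front single-lens mode, matching fig. 6A-6C.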
As shown in fig. 7A-7C, the animation display area 407a may also display a guidance animation for opening/closing picture-in-picture with an air gesture.
As shown in fig. 7A, the electronic device is in the rear shooting mode and displays the image 409a captured by the rear camera in the shooting preview interface 408. Still as shown in fig. 7A, the electronic device may display the air swap-lens icon 410 upon recognizing the "raise hand" gesture. As shown in fig. 7B, upon recognizing the "raise hand and make a fist" gesture, the electronic device may turn on the picture-in-picture mode and display the shooting preview interface 408 shown in fig. 7C, which displays the image 409a and the image 409b simultaneously: the image 409a still occupies the entire display area, and the image 409b is superimposed on the image 409a.
In addition, the animation display area 407a may also display the reverse of the process of fig. 7A-7C. That is, after detecting the "raise hand" gesture in the shooting preview interface 408 shown in fig. 7C, the electronic device may display the interface shown in fig. 7B; then, upon detecting the "raise hand and make a fist" gesture in fig. 7B, it displays the interface shown in fig. 7A. In other words, the user can both open and close picture-in-picture with the same "raise hand and make a fist" gesture.
Correspondingly, in fig. 7A-7C the text prompt area 407b may display "Open/close picture-in-picture with an air gesture: raise your hand, and make a fist after the air swap-lens icon 410 appears", to help the user understand how to open/close picture-in-picture with an air gesture.
In one possible design, before displaying the shooting preview interface 408 shown in fig. 7A, the electronic device may first display the shooting preview interface 408 shown in fig. 7D, i.e., the interface displayed before the electronic device recognizes the "raise hand" gesture. This interface can display prompt information 411, which tells the user that even in the rear shooting mode the electronic device uses the front camera to recognize air gestures, preventing the user from feeling that privacy has been invaded. In this embodiment the prompt information 411 may be an icon; in other embodiments it may be text or a combination of text and icons. Note that the scene shown in fig. 7D may be displayed in the animation display area 407a as a separate guidance animation on its own, rather than combined with the animation for switching between dual lens and single lens.
As shown in fig. 8A to 8C, the animation display area 407a may also display a guidance animation for swapping the front and rear lenses with an air gesture.
As shown in fig. 8A, the electronic device is in the rear shooting mode and displays the image 409a captured by the rear camera in the shooting preview interface 408. Upon recognizing the "raise hand" gesture, it may display the air swap-lens icon 410. As shown in fig. 8B, upon recognizing the "flip from palm to back of hand" gesture, the electronic device may switch from the rear shooting mode to the front shooting mode and display the shooting preview interface 408 shown in fig. 8C, on which the image 409b captured by the front camera is displayed.
Correspondingly, in fig. 8A-8C the text prompt area 407b may display "Swap the front and rear lenses with an air gesture: raise your hand, and after the air swap-lens icon 410 appears, flip from palm to back of hand", to help the user understand how to swap the front and rear lenses with an air gesture.
In this embodiment of the application, before displaying the shooting preview interface 408 shown in fig. 8A, the electronic device may also display the shooting preview interface 408 shown in fig. 7D; for details, refer to the foregoing, which are not repeated here.
As shown in fig. 9A to 9C, the animation display area 407a may also display a guidance animation for ending recording with an air gesture.
As shown in fig. 9A, the electronic device is in the front-rear shooting mode, with the image 409a captured by the rear camera and the image 409b captured by the front camera displayed side by side in the shooting preview interface 408. Upon recognizing the "raise hand" gesture, the electronic device may display the air swap-lens icon 410. As shown in fig. 9B, in the scenario of fig. 9A, the electronic device may display the shooting preview interface 408 shown in fig. 9C upon recognizing the "OK" gesture. Compared with fig. 9A, the interface in fig. 9C is the shooting preview interface after the electronic device has ended the recording state.
Correspondingly, in fig. 9A-9C the text prompt area 407b may display "End recording with an air gesture: raise your hand, and after the air swap-lens icon 410 appears, join your thumb and index finger into a circle with the other fingers naturally bent", to help the user understand how to end recording with an air gesture.
In the guidance popup 407, the animation display area 407a may automatically play, in turn and in a loop, the guidance animations for starting recording, switching between dual lens and single lens, opening/closing picture-in-picture, swapping the front and rear lenses, and ending recording with air gestures. It should be noted that the animations need not be played in this order; other playing orders are possible, and no specific limitation is imposed here.
It should be noted that the air gestures above are only examples; other gestures may also be used, such as a "victory" gesture or a "wave left and right" gesture.
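Taken together, the five guidance animations cover a small gesture-to-action vocabulary, which could be represented as a dispatch table. All identifiers here are hypothetical, and per the note above the gestures themselves are only examples.

```python
# Hypothetical mapping from a recognized air gesture to a recording action.
AIR_GESTURE_ACTIONS = {
    "raise_hand_hold_2s": "start_recording",        # fig. 5A-5C
    "palm_swipe": "switch_dual_single",             # fig. 6A-6F
    "raise_hand_fist": "toggle_picture_in_picture", # fig. 7A-7C
    "palm_to_back_flip": "swap_front_rear_lens",    # fig. 8A-8C
    "ok_sign": "end_recording",                     # fig. 9A-9C
}

def dispatch(gesture):
    """Returns the action for a recognized gesture, or 'ignore' otherwise."""
    return AIR_GESTURE_ACTIONS.get(gesture, "ignore")
```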
In this embodiment, the electronic device may further receive an operation of the user sliding the guidance popup 407, and in response switch the guidance animation being played in the animation display area 407a and the prompt information displayed in the text prompt area 407b. Illustratively, suppose the electronic device plays the guidance animations in the order: start recording, switch between dual lens and single lens, open/close picture-in-picture, swap front and rear lenses, end recording. As shown in fig. 10, the electronic device may be displaying the guidance animation and associated text explanation for swapping the front and rear lenses in the guidance popup 407. The electronic device may receive an operation of the user sliding the guidance popup 407 to the left, and in response, as shown in fig. 9A, switch the animation displayed in the popup to the guidance animation for ending recording.
It should be noted that after the electronic device receives an operation of the user manually switching animations, it no longer automatically carousels the guidance animations. That is, once the user has manually switched an animation, the electronic device switches the animation played in the guidance popup 407 only upon receiving another switching operation from the user.
The confirmation option 407c allows the user to end the tutorial guidance early. For example, the confirmation option 407c may read "Got it". Illustratively, as shown in fig. 11A, the electronic device may receive an operation of the user clicking the confirmation option 407c, and in response close the guidance popup 407 and display the shooting preview interface 401 shown in fig. 11B. This interface is similar to the one shown in fig. 4, except that it no longer displays the prompt information 404. It should be noted that the user may click the confirmation option 407c to close the guidance popup 407 while any of the guidance animations is playing.
Thus, until the electronic device receives an instruction to switch animations or an instruction to end the guidance, it automatically carousels the guidance animations. After receiving an instruction to switch animations, it stops the automatic carousel; after receiving an instruction to end the guidance, it ends the tutorial.
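The carousel behavior just summarized can be sketched as a small state holder; the names and the playing order are hypothetical (the order follows the illustrative example above, and the text notes other orders are possible).

```python
class GuideCarousel:
    """Auto-plays guide animations in a loop until the user swipes manually."""

    ANIMATIONS = ["start_recording", "dual_single_switch",
                  "picture_in_picture", "front_rear_swap", "end_recording"]

    def __init__(self):
        self.index = 0
        self.auto_play = True

    def on_animation_finished(self):
        """Advance to the next animation only while auto-carousel is active."""
        if self.auto_play:
            self.index = (self.index + 1) % len(self.ANIMATIONS)
        return self.ANIMATIONS[self.index]

    def on_user_swipe(self, step):
        """A manual swipe switches the animation and disables auto-carousel."""
        self.auto_play = False
        self.index = (self.index + step) % len(self.ANIMATIONS)
        return self.ANIMATIONS[self.index]
```

After a manual swipe, `on_animation_finished` keeps returning the same animation, matching the "no more automatic carousel" behavior.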
In this embodiment, if the electronic device detects neither a click on the tutorial guidance control 403 nor any use of an air gesture, it may actively display the guidance popup 407 when entering the multi-mirror video mode for the second time, actively guiding the user to learn the air gestures. For a description of the guidance popup 407, refer to the foregoing; details are not repeated here. In an alternative embodiment, the electronic device may check whether it has switched the shooting mode in response to the first camera detecting an air gesture: if so, the user can be considered to have used an air gesture; if not, the user can be considered not to have used one.
That is, the electronic device may prompt the user to view the tutorial guidance for air gestures when it enters the multi-mirror video mode for the first time. If the user does not view it then (which may be understood as the electronic device not detecting a click on the tutorial guidance control 403), the tutorial may be played automatically when the electronic device enters the multi-mirror video mode for the second time. Guiding the user toward the tutorial multiple times increases the probability that the user views it, helps the user learn the air gestures as quickly as possible, and improves human-computer interaction efficiency.
The guidance popup 407 is displayed when the electronic device first enters multi-mirror video recording, and its main purpose is to teach the user how to use the air gestures. After learning them, however, the user may not actually use them often. The electronic device may therefore also guide the user to use air gestures at appropriate moments.
Specifically, when the time difference between the last time the user used an air gesture (which may also be understood as the time the electronic device switched the shooting mode in response to the first camera detecting an air gesture, or the first time) and the current time is greater than a first preset time, and the user is detected to need an air gesture, the electronic device pops up an air gesture guidance popup and shows the user how to use the gestures. Here the electronic device is in the recording state of multi-mirror video recording.
If the time difference between the last use of an air gesture and the current time is greater than the first preset time, the user can be considered to have used an air gesture before but, perhaps having forgotten it or forgotten how to use it, not to have used one again for a long time. Reminding the user only in this case avoids degrading the user experience by reminding too frequently. The air gesture here may be any of: the gesture for starting recording, the gesture for switching between dual lens and single lens, the gesture for opening/closing picture-in-picture, the gesture for swapping the front and rear lenses, or the gesture for ending recording.
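The reminder condition can be written as a single predicate. This is a sketch; the threshold is left as a parameter because the patent does not fix the value of the first preset time.

```python
def should_show_guide_popup(now_s, last_air_gesture_s, need_detected,
                            recording, first_preset_s):
    """True when the air gesture guidance popup should appear: the device is
    recording in multi-mirror video mode, the air gesture has gone unused for
    longer than the first preset time, and a need for it has been detected."""
    long_unused = (now_s - last_air_gesture_s) > first_preset_s
    return recording and long_unused and need_detected
```

All three conditions must hold at once, which is what keeps the reminder from firing too frequently.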
In an optional design, if the electronic device detects, within a second preset time after recording starts, that the distance between the user in the front-camera picture and the display screen is greater than or equal to a preset distance, the electronic device can be considered to have detected that the user needs an air gesture.
Here, the user in the front-camera picture is the user captured by the front camera. The distance between this user and the display screen being greater than or equal to the preset distance means that the shooting preview interface is displaying the image captured by the front camera and the user is at least the preset distance from the display screen. The preset distance may be a distance at which the user cannot touch the display screen without moving closer, for example the distance between the user and the display screen when recording with the electronic device mounted on a handheld selfie stick, e.g., 60 cm.
It can be understood that in this scenario, if the user switches the shooting mode by touching the shooting mode switching control 406 on the display screen, the framing of the front camera is easily disturbed and the quality of the video suffers. Therefore, the user can be considered to need an air gesture and can be guided to use one.
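The distance-based need check can be sketched as follows. The preset distance uses the 60 cm example from the text; the length of the second preset time is an assumption, since the patent leaves it unspecified.

```python
PRESET_DISTANCE_CM = 60  # example preset distance from the description
SECOND_PRESET_S = 10     # assumed window length after recording starts

def need_air_gesture_by_distance(user_distance_cm, since_record_start_s):
    """True if, within the window after recording starts, the user framed by
    the front camera is too far away to touch the display screen."""
    if user_distance_cm is None:   # no user detected in the front picture
        return False
    return (since_record_start_s <= SECOND_PRESET_S
            and user_distance_cm >= PRESET_DISTANCE_CM)
```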
For example, when the electronic device detects, within the second preset time after recording starts, that the distance between the user in the front-camera picture and the display screen is greater than or equal to the preset distance, it may display the shooting preview interface 401 shown in fig. 12. This interface is similar to the one shown in fig. 11B, except that the images 401a and 401b are spliced left-right because the electronic device is held horizontally, and the interface further includes a guidance popup 412 (also referred to as a first popup). The guidance popup 412 presents air gestures to guide the user in using them; for example, it may display the gesture for switching between dual lens and single lens, the gesture for swapping the front and rear lenses, and the gesture for opening/closing picture-in-picture.
In an alternative design, if the electronic device detects, within a second preset time after the recording starts, that the first image proportion decreases, where the first image proportion is the area ratio of the face region in the first image, it is determined that the user has a need to use air gestures, and the electronic device may display the guidance popup 412 shown in fig. 12.
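The first-image-proportion trigger above can be sketched minimally: track the area ratio of the detected face region in the front-camera image and treat a shrinking ratio as a sign that the user has moved away. The detection margin and the function names here are assumptions for illustration.

```python
# Hypothetical sketch of the "first image proportion" trigger: the area
# ratio of the face region in the first (front-camera) image.

def face_area_ratio(face_box, frame_w, frame_h):
    # face_box is (x0, y0, x1, y1) in pixels within the first image.
    x0, y0, x1, y1 = face_box
    return ((x1 - x0) * (y1 - y0)) / (frame_w * frame_h)

def ratio_decreased(prev_ratio, curr_ratio, margin=0.05):
    # True when the face-area ratio shrank by more than the margin,
    # suggesting the user moved farther from the device.
    return curr_ratio < prev_ratio * (1 - margin)
```

A small margin avoids triggering on frame-to-frame jitter of the face detector; the 5% default is an arbitrary illustrative choice.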
In an alternative design, when the electronic device is connected to a selfie stick, the electronic device may be considered to have detected that the user has a need to use air gestures.
The electronic device may connect to the selfie stick through the earphone jack, Bluetooth, Wi-Fi, or other means. For example, after the electronic device enables its Bluetooth function, it may receive a pairing request sent by the selfie stick. The pairing request may carry the device name, media access control (MAC) address, and device type identifier of the selfie stick, where the device type identifier indicates that the sender of the pairing request is a selfie stick. In this way, after the electronic device pairs and connects with the selfie stick, it can determine from the device type identifier that it is connected to a selfie stick.
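The device-type check above could look like the following sketch. The field layout and the identifier value 0x01 are invented for illustration; the patent only states that the pairing request carries a device name, a MAC address, and a device type identifier.

```python
# Hedged sketch: checking the device type identifier carried in a
# pairing request. The concrete values and the dataclass shape are
# assumptions, not a real Bluetooth API.

from dataclasses import dataclass

DEVICE_TYPE_SELFIE_STICK = 0x01  # assumed identifier value

@dataclass
class PairingRequest:
    device_name: str
    mac_address: str
    device_type: int

def is_selfie_stick(req: PairingRequest) -> bool:
    # After pairing and connecting, the electronic device can use the
    # device type identifier to confirm it is connected to a selfie stick.
    return req.device_type == DEVICE_TYPE_SELFIE_STICK
```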
When the electronic device is connected to a selfie stick, the user can be considered to have a need for long-distance shooting. In this case, if the user switches the shooting mode by touching the shooting mode switching control 406 on the display screen, the framing of the front camera may be disturbed, which in turn affects the quality of the video. Therefore, the user can be considered to have a need to use air gestures, and the user can be guided to use them.
When the electronic device is connected to the selfie stick, it may display the shooting preview interface 401 and the guidance popup 412 shown in fig. 12. For a detailed description of the shooting preview interface 401, refer to the foregoing; details are not repeated here.
It should be noted that the foregoing illustrates that, when the electronic device is in the recording state, the electronic device may display the guidance popup 412 shown in fig. 12 if it detects that a selfie stick is connected. Alternatively, when the electronic device is already connected to the selfie stick, the electronic device starts recording a video in response to receiving a first operation of the user, and then displays the guidance popup 412 on the shooting preview interface 401. That is, the electronic device may first connect to the selfie stick and then display the first popup after starting to record the video. In other words, as long as the two conditions that the electronic device is recording a video and is connected to the selfie stick are satisfied, and the time difference between the last time the user used an air gesture (which may also be referred to as the first time) and the current time is greater than the first preset time, the electronic device may display the guidance popup 412.
In an alternative design, when the electronic device detects that the user switches the shooting mode on the display screen, the electronic device may be considered to have detected that the user has a need to use air gestures.
Specifically, as shown in fig. 13A, the electronic device may receive an operation of the user clicking the shooting mode switching control 406 (which may also be referred to as a second control), and in response to the operation, the electronic device may display the shooting preview interface 401 shown in fig. 13B. As shown in fig. 13B, the shooting preview interface 401 may include a guidance popup 412 and a shooting mode selection area 413 (which may also be referred to as a third popup). For the contents of the guidance popup 412, refer to the foregoing description; details are not repeated here. The shooting mode selection area 413 may show preview interfaces of a plurality of shooting modes, such as a preview interface 413a of a front shooting mode, a preview interface 413b of a rear shooting mode, a preview interface 413c of a picture-in-picture shooting mode, a preview interface 413d of a rear shooting mode, and a preview interface 413e of a front-and-rear shooting mode.
In short, when the electronic device is in the recording state of the multi-lens video, if it detects that the time difference between the last time the user used an air gesture and the current time is greater than the first preset time, and detects, within the second preset time after the recording starts, that the distance between the user in the front-camera picture and the display screen is greater than or equal to the preset distance, the electronic device may display the guidance popup 412 shown in fig. 12. When the electronic device is in the recording state of the multi-lens video, if it detects that the time difference between the last time the user used an air gesture and the current time is greater than the first preset time and that a selfie stick is connected, the electronic device may display the guidance popup 412 shown in fig. 12. When the electronic device is in the multi-lens recording state, if it detects that the time difference between the last time the user used an air gesture and the current time is greater than the first preset time and detects that the user switches the shooting mode on the display screen, the electronic device may display the guidance popup 412 shown in fig. 13B.
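The three trigger paths summarized above share the same gating conditions and can be sketched as one decision function. The parameter names and the boolean-cue interface are assumptions for illustration; the patent describes the conditions, not this API.

```python
# Hypothetical sketch combining the three trigger paths for showing the
# guidance popup during multi-lens video recording.

def should_show_guide_popup(recording, now, last_air_gesture_time, first_preset,
                            selfie_stick_connected=False,
                            user_beyond_preset_distance=False,
                            switched_mode_on_screen=False):
    # The popup is considered only while a multi-lens video is recording.
    if not recording:
        return False
    # And only when the user has not used an air gesture for longer than
    # the first preset time.
    if now - last_air_gesture_time <= first_preset:
        return False
    # Any one of the three cues indicates a need for air gestures.
    return (selfie_stick_connected
            or user_beyond_preset_distance
            or switched_mode_on_screen)
```

For example, with a 30-second first preset time, a user who last used an air gesture 100 seconds ago and is holding the device on a selfie stick would see the popup; a user who used one 20 seconds ago would not.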
The above description only takes the front-and-rear shooting mode as an example to illustrate the guiding and using method of the air gesture. In fact, the electronic device may be in any shooting mode under multi-lens video recording, such as the picture-in-picture shooting mode, the front shooting mode, the rear shooting mode, and so on; no limitation is imposed here.
In this way, the guiding and using method of the air gesture can remind the user to learn air gestures when the electronic device enters multi-lens video recording for the first time. In particular, when the time difference between the last time the user used an air gesture and the current time is greater than the first preset time and it is detected that the user has a need to use air gestures, the electronic device pops up the guidance popup of the air gesture and shows the user how to use air gestures, which increases the possibility that the user uses air gestures and helps improve human-computer interaction efficiency.
The present application further provides a chip system 1400, as shown in fig. 14, which includes at least one processor 1401 and at least one interface circuit 1402. The processor 1401 and the interface circuit 1402 may be interconnected through a line. For example, the interface circuit 1402 may be used to receive signals from other devices (for example, a memory of an electronic device). For another example, the interface circuit 1402 may be used to send signals to other devices (for example, the processor 1401).
For example, the interface circuit 1402 may read instructions stored in a memory in the electronic device and send the instructions to the processor 1401. The instructions, when executed by the processor 1401, may cause the electronic device to perform the steps in the embodiments described above.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program code, such as flash memory, removable hard drive, read-only memory, random-access memory, magnetic or optical disk, etc.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered within the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (23)

1. A method for guiding use of an air gesture, the method being applied to an electronic device including a display screen, a first camera and a second camera, the first camera and the second camera being located on different sides of the display screen, the first camera being located on the same side of the electronic device as the display screen, the method comprising:
the electronic equipment displays a shooting preview interface, wherein the shooting preview interface comprises at least one of a first image and a second image, the first image is an image acquired by the first camera in real time, and the second image is an image acquired by the second camera in real time;
if the time difference between a first time and the current time is greater than a first preset time, in response to detecting that the electronic device is recording a video and that the electronic device is connected to a selfie stick, the electronic device displays a first popup on the shooting preview interface, wherein the first popup comprises a gesture identifier, the gesture identifier is used for reminding a user to use an air gesture to switch a shooting mode, and the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture.
2. The method of claim 1, wherein, in response to detecting that the electronic device is recording a video and that the electronic device is connected to a selfie stick, the electronic device displaying a first popup on the shooting preview interface comprises:
in response to receiving a first operation of a user, the electronic device starts to record a video;
in response to detecting connection of a selfie stick, the electronic device displays the first popup on the shooting preview interface.
3. The method of claim 1, wherein, in response to detecting that the electronic device is recording a video and that the electronic device is connected to a selfie stick, the electronic device displaying a first popup on the shooting preview interface comprises:
the electronic device connects to a selfie stick;
in response to receiving a first operation of a user, the electronic device starts to record a video and displays the first popup on the shooting preview interface.
4. The method of any of claims 1-3, wherein the capture preview interface includes a first control, the method further comprising:
in response to an operation of a user on the first control, the electronic device displays a second popup on the shooting preview interface;
the electronic device cyclically plays a plurality of guide videos in the second popup in a preset order, wherein each guide video in the plurality of guide videos is used for showing a usage method of an air gesture.
5. The method of claim 4, further comprising:
in response to an operation of the user sliding left on the second popup, the electronic device displays, in the second popup, the previous guide video adjacent to the first video being played, and stops the cyclic playing;
or, in response to an operation of the user sliding right on the second popup, the electronic device displays, in the second popup, the next guide video adjacent to the first video, and stops the cyclic playing.
6. The method according to claim 4, wherein the second popup comprises a first display area and a second display area, the first display area is used for cyclically playing the plurality of guide videos, the second display area is used for cyclically displaying a plurality of prompt messages corresponding to the plurality of guide videos, the plurality of guide videos correspond to the plurality of prompt messages one to one, each prompt message is used for explaining the function and usage method of the air gesture shown in the corresponding guide video, and the guide video being displayed in the first display area corresponds to the prompt message being displayed in the second display area.
7. The method of claim 4, wherein the second popup comprises a confirmation option, the method further comprising:
and in response to the operation of the user on the confirmation option, the electronic equipment closes the second popup window.
8. The method according to any one of claims 4-7, further comprising:
if the electronic device detects that the user has neither clicked the first control nor switched the shooting mode in response to the first camera detecting an air gesture, the electronic device displays the second popup on the shooting preview interface.
9. The method according to any one of claims 1-8, wherein the shooting preview interface further comprises a guidance prompt, the guidance prompt is disposed in a preset area of the first control, and the guidance prompt is used for instructing a user to click the first control to view a guidance video.
10. The method of any of claims 1-8, wherein the capture preview interface further comprises a second control, the method further comprising:
in response to detecting the operation of the user on the second control, the electronic equipment displays a third popup window on the shooting preview interface, wherein the third popup window comprises preview interfaces of multiple shooting modes;
if the electronic device is recording a video and the time difference between the first time and the current time is greater than the first preset time, the electronic device displays the first popup on the shooting preview interface, wherein the first popup does not overlap the third popup.
11. The method according to any one of claims 1-10, wherein the gesture identifier comprises a first gesture identifier, a second gesture identifier, and a third gesture identifier, wherein the first gesture identifier indicates a gesture of moving toward two sides, the second gesture identifier indicates a gesture of flipping the palm, and the third gesture identifier indicates a gesture of making a fist.
12. A method for guiding use of an air gesture, the method being applied to an electronic device including a display screen, a first camera and a second camera, the first camera and the second camera being located on different sides of the display screen, the first camera being located on the same side of the electronic device as the display screen, the method comprising:
the electronic equipment displays a shooting preview interface, wherein the shooting preview interface comprises at least one of a first image and a second image, the first image is an image acquired by the first camera in real time, and the second image is an image acquired by the second camera in real time;
in response to receiving a first operation of a user, the electronic equipment starts to record a video;
if the time difference between a first time and the current time is greater than a first preset time, in response to detecting, within a second preset time after the video recording starts, that the distance between the user photographed by the first camera and the display screen is greater than or equal to a preset distance, the electronic device displays a first popup on the shooting preview interface, wherein the first popup comprises a gesture identifier, and the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture.
13. The method of claim 12, wherein the capture preview interface comprises a first control, the method further comprising:
in response to an operation of a user on the first control, the electronic device displays a second popup on the shooting preview interface;
the electronic device cyclically plays a plurality of guide videos in the second popup in a preset order, wherein each guide video in the plurality of guide videos is used for showing a usage method of an air gesture.
14. The method of claim 13, further comprising:
in response to an operation of the user sliding left on the second popup, the electronic device displays, in the second popup, the previous guide video adjacent to the first video being played, and stops the cyclic playing;
or, in response to an operation of the user sliding right on the second popup, the electronic device displays, in the second popup, the next guide video adjacent to the first video, and stops the cyclic playing.
15. The method according to claim 13, wherein the second popup comprises a first display area and a second display area, the first display area is used for cyclically playing the plurality of guide videos, the second display area is used for cyclically displaying a plurality of prompt messages corresponding to the plurality of guide videos, the plurality of guide videos correspond to the plurality of prompt messages one to one, each prompt message is used for explaining the function and usage method of the air gesture shown in the corresponding guide video, and the guide video being displayed in the first display area corresponds to the prompt message being displayed in the second display area.
16. The method of claim 13, wherein the second popup comprises a confirmation option, the method further comprising:
and in response to the operation of the user on the confirmation option, the electronic equipment closes the second popup window.
17. The method according to any one of claims 13-16, further comprising:
if the electronic device detects that the user has neither clicked the first control nor switched the shooting mode in response to the first camera detecting an air gesture, the electronic device displays the second popup on the shooting preview interface.
18. The method of any one of claims 12-17, wherein the capture preview interface further comprises a guidance prompt, the guidance prompt is disposed in a preset area of the first control, and the guidance prompt is used to instruct a user to click on the first control to view a guidance video.
19. The method of any of claims 12-17, wherein the capture preview interface further comprises a second control, the method further comprising:
in response to detecting the operation of the user on the second control, the electronic equipment displays a third popup window on the shooting preview interface, wherein the third popup window comprises preview interfaces of multiple shooting modes;
if the electronic device is recording a video and the time difference between the first time and the current time is greater than the first preset time, the electronic device displays the first popup on the shooting preview interface, wherein the first popup does not overlap the third popup.
20. The method according to any one of claims 12-19, wherein the gesture identifier comprises a first gesture identifier, a second gesture identifier, and a third gesture identifier, wherein the first gesture identifier indicates a gesture of moving toward two sides, the second gesture identifier indicates a gesture of flipping the palm, and the third gesture identifier indicates a gesture of making a fist.
21. A method for guiding use of an air gesture is applied to an electronic device comprising a display screen, a first camera and a second camera, wherein the first camera and the second camera are located on different sides of the display screen, and the first camera and the display screen are located on the same side of the electronic device, and the method comprises the following steps:
the electronic equipment displays a shooting preview interface, wherein the shooting preview interface comprises at least one of a first image and a second image, the first image is an image acquired by the first camera in real time, and the second image is an image acquired by the second camera in real time;
in response to receiving a first operation of a user, the electronic equipment starts to record a video;
if the time difference between a first time and the current time is greater than a first preset time, in response to detecting, within a second preset time after the video recording starts, that a first image proportion decreases, the electronic device displays a first popup on the shooting preview interface, wherein the first popup comprises a gesture identifier, the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture, and the first image proportion is the area ratio of the face region in the first image.
22. An electronic device comprising a display screen, a first camera, a second camera, and a processor, the processor coupled with a memory, the memory storing program instructions that, when executed by the processor, cause the electronic device to implement the method of any of claims 1-21.
23. A computer-readable storage medium comprising computer instructions;
the computer instructions, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-21.
CN202111679527.8A 2021-06-16 2021-12-31 Guiding and using method of air gesture and electronic equipment Active CN115484394B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2021106767093 2021-06-16
CN202110676709 2021-06-16
CN202111436311 2021-11-29
CN2021114363119 2021-11-29

Publications (2)

Publication Number Publication Date
CN115484394A true CN115484394A (en) 2022-12-16
CN115484394B CN115484394B (en) 2023-11-14

Family

ID=84420576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111679527.8A Active CN115484394B (en) 2021-06-16 2021-12-31 Guiding and using method of air gesture and electronic equipment

Country Status (1)

Country Link
CN (1) CN115484394B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130050425A1 (en) * 2011-08-24 2013-02-28 Soungmin Im Gesture-based user interface method and apparatus
EP3012732A1 (en) * 2014-10-24 2016-04-27 LG Electronics Inc. Mobile terminal and controlling method thereof
CN106055098A (en) * 2016-05-24 2016-10-26 北京小米移动软件有限公司 Air gesture operation method and apparatus
CN106250021A (en) * 2016-07-29 2016-12-21 维沃移动通信有限公司 A kind of control method taken pictures and mobile terminal
CN107613207A (en) * 2017-09-29 2018-01-19 努比亚技术有限公司 A kind of camera control method, equipment and computer-readable recording medium
CN110045819A (en) * 2019-03-01 2019-07-23 华为技术有限公司 A kind of gesture processing method and equipment
US10551995B1 (en) * 2013-09-26 2020-02-04 Twitter, Inc. Overlay user interface
CN111787223A (en) * 2020-06-30 2020-10-16 维沃移动通信有限公司 Video shooting method and device and electronic equipment
CN112394811A (en) * 2019-08-19 2021-02-23 华为技术有限公司 Interaction method for air-separating gesture and electronic equipment
CN112929558A (en) * 2019-12-06 2021-06-08 荣耀终端有限公司 Image processing method and electronic device
CN112954218A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Zihui; Chen Shuo: "Research progress on the ergonomics of mouse gestures", Chinese Journal of Ergonomics, no. 01 *

Also Published As

Publication number Publication date
CN115484394B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
WO2021000881A1 (en) Screen splitting method and electronic device
CN112887583B (en) Shooting method and electronic equipment
CN114915726A (en) Shooting method and electronic equipment
EP3226537A1 (en) Mobile terminal and method for controlling the same
WO2022095788A1 (en) Panning photography method for target user, electronic device, and storage medium
CN112383664B (en) Device control method, first terminal device, second terminal device and computer readable storage medium
EP4195707A1 (en) Function switching entry determining method and electronic device
CN115484380B (en) Shooting method, graphical user interface and electronic equipment
CN112527174A (en) Information processing method and electronic equipment
CN113010076A (en) Display element display method and electronic equipment
CN113746961A (en) Display control method, electronic device, and computer-readable storage medium
WO2022156473A1 (en) Video playing method and electronic device
CN112637477A (en) Image processing method and electronic equipment
WO2021254113A1 (en) Control method for three-dimensional interface and terminal
CN115529378A (en) Video processing method and related device
CN115484387B (en) Prompting method and electronic equipment
CN115484391B (en) Shooting method and electronic equipment
WO2023045597A1 (en) Cross-device transfer control method and apparatus for large-screen service
CN115484394B (en) Guide use method of air separation gesture and electronic equipment
CN116339568A (en) Screen display method and electronic equipment
CN114915722A (en) Method and device for processing video
CN115484393B (en) Abnormality prompting method and electronic equipment
CN115484390B (en) Video shooting method and electronic equipment
CN115484392B (en) Video shooting method and electronic equipment
CN115811656A (en) Video shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant