CN111787223B - Video shooting method and device and electronic equipment - Google Patents

Info

Publication number
CN111787223B
CN111787223B (application CN202010622234.5A)
Authority
CN
China
Prior art keywords
gesture
video
user
electronic device
identifier
Prior art date
Legal status
Active
Application number
CN202010622234.5A
Other languages
Chinese (zh)
Other versions
CN111787223A (en)
Inventor
韩桂敏
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010622234.5A
Publication of CN111787223A
Application granted
Publication of CN111787223B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video shooting method and apparatus and an electronic device, belongs to the field of communication technology, and can solve the problem of poor convenience of video shooting on electronic devices. The video shooting method includes: while a camera of the electronic device is shooting video, if a first mid-air gesture of a user relative to the electronic device is recognized, executing, in response to the first mid-air gesture, a target operation corresponding to the first mid-air gesture, the target operation including: starting video shooting, interrupting video shooting, or stopping video shooting; and deleting a first video frame sequence from the video captured by the camera to obtain a first video, where the first video frame sequence is the sequence of video frames, in the video captured by the camera, that contains the gesture trajectory of the first mid-air gesture. The method and apparatus can be applied to scenarios in which an electronic device executes operations according to a user's mid-air gestures relative to the device.

Description

Video shooting method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video shooting method and device and electronic equipment.
Background
At present, in some shooting scenarios, when users need to shoot videos of themselves with an electronic device and no one to assist them, they can place the electronic device at a position from which they can be filmed and tap a shooting control on the display screen of the electronic device to start video shooting. When they need to interrupt the shooting, they must walk back to where the electronic device is placed and perform a pause input on the shooting control.
However, because the electronic device keeps shooting while the user walks back to it, the captured video may contain clips the user does not want, and the user has to spend time editing the video to delete those clips.
As a result, video shooting on such electronic devices is inconvenient.
Disclosure of Invention
The embodiments of the present application aim to provide a video shooting method, a video shooting apparatus, and an electronic device that solve the problem of poor convenience of video shooting on electronic devices.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video shooting method, the method including: while a camera of the electronic device is shooting video, if a first mid-air gesture of a user relative to the electronic device is recognized, executing, in response to the first mid-air gesture, a target operation corresponding to the first mid-air gesture, the target operation including: starting video shooting, interrupting video shooting, or stopping video shooting; and deleting a first video frame sequence from the video captured by the camera to obtain a first video, where the first video frame sequence is the sequence of video frames, in the video captured by the camera, that contains the gesture trajectory of the first mid-air gesture.
In a second aspect, an embodiment of the present application provides a video shooting apparatus, including an execution module and a deletion module. The execution module is configured to, while a camera of the video shooting apparatus is shooting video, if a first mid-air gesture of a user relative to the apparatus is recognized, execute, in response to the first mid-air gesture, a target operation corresponding to the first mid-air gesture, where the target operation includes: starting video shooting, interrupting video shooting, or stopping video shooting. The deletion module is configured to delete a first video frame sequence from the video captured by the camera to obtain a first video, where the first video frame sequence is the sequence of video frames, in the captured video, that contains the gesture trajectory of the first mid-air gesture.
In a third aspect, an embodiment of the present application provides an electronic device including a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, during video shooting, if the electronic device recognizes a first mid-air gesture, it may execute a target operation (that is, start video shooting, interrupt video shooting, or stop video shooting) according to the first mid-air gesture, and delete from the captured video the first video frame sequence containing the gesture trajectory of the first mid-air gesture, to obtain a first video. Because the user can perform the first mid-air gesture relative to the electronic device to make it start, interrupt, or stop video shooting and delete the unwanted video frame sequence from the captured video, the user neither needs to walk to where the electronic device is placed to operate it nor needs to edit the captured video afterwards, which improves the convenience of video shooting on the electronic device.
Drawings
Fig. 1 is a schematic diagram of a video shooting method according to an embodiment of the present application;
fig. 2 is a second schematic diagram of a video shooting method according to an embodiment of the present application;
fig. 3 is an example schematic diagram of an interface of a mobile phone according to an embodiment of the present disclosure;
fig. 4 is a second schematic diagram of an example of an interface of a mobile phone according to an embodiment of the present disclosure;
fig. 5 is a third schematic diagram of an example of an interface of a mobile phone according to the embodiment of the present application;
fig. 6 is a fourth schematic view of an example of an interface of a mobile phone according to an embodiment of the present application;
fig. 7 is a fifth schematic view of an example of an interface of a mobile phone according to an embodiment of the present application;
fig. 8 is a third schematic diagram of a video shooting method according to an embodiment of the present application;
fig. 9 is a sixth schematic view of an example of an interface of a mobile phone according to an embodiment of the present application;
fig. 10 is a seventh schematic diagram illustrating an example of an interface of a mobile phone according to an embodiment of the present disclosure;
fig. 11 is a fourth schematic diagram of a video shooting method according to an embodiment of the present application;
fig. 12 is an eighth schematic diagram of an example of an interface of a mobile phone according to an embodiment of the present application;
fig. 13 is a ninth schematic diagram illustrating an example of an interface of a mobile phone according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a video camera according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 16 is a hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The video shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
In the embodiments of the application, when users shoot videos of themselves with an electronic device, they can place the device at a position from which they can be filmed, move to the shooting position, and perform a mid-air gesture relative to the device, so that the device starts video shooting upon recognizing the gesture. If the user needs to interrupt (or stop) video shooting, the user does not have to walk back to the device and perform a pause input on the shooting control as in the traditional approach; instead, the user can perform a mid-air gesture relative to the device, so that, upon recognizing the gesture, the device interrupts (or stops) video shooting and deletes the video clip containing the trajectory of that gesture from the captured video, yielding the video the user actually wants.
With this scheme, the user neither needs to walk to where the electronic device is placed and operate it to interrupt (or stop) video shooting, nor needs to edit the captured video to delete unwanted clips; a mid-air gesture relative to the device suffices to interrupt (or stop) video shooting and remove the unwanted clips, so the convenience of video shooting on the electronic device is improved.
Fig. 1 shows a flowchart of a video shooting method provided in an embodiment of the present application. As shown in fig. 1, a video shooting method provided in an embodiment of the present application may include steps 101 and 102 described below.
Step 101, while a camera of the electronic device is shooting video, if a first mid-air gesture of a user relative to the electronic device is recognized, the electronic device executes, in response to the first mid-air gesture, a target operation corresponding to the first mid-air gesture.
In the embodiment of the present application, the target operation includes: start video capture, interrupt video capture, or stop video capture.
Optionally, in this embodiment of the application, the user may perform an input on an identifier of a first application (for example, an icon of the first application) in the electronic device, so that the electronic device runs the first application in the foreground, controls its camera to shoot video, and displays a shooting preview interface; the electronic device can then perform mid-air gesture recognition on the video captured by the camera and, upon recognizing a first mid-air gesture of the user relative to the electronic device, execute the target operation corresponding to that gesture.
Optionally, in this embodiment of the application, the first application may specifically be an application having a video shooting function.
Optionally, in this embodiment of the application, during video shooting, the electronic device may perform image recognition processing on the video captured by the camera using an image recognition algorithm to obtain a plurality of first images (each containing an image of the user's hand). The electronic device may then binarize each of the first images to obtain a plurality of binarized second images, determine a plurality of hand images from the second images (one hand image per second image), and perform image feature extraction on the hand images to obtain a plurality of gesture feature matrices (one matrix per hand image, where each element of a matrix corresponds to one feature of the hand image, such as the bending angle of a finger joint, the bending angle of a finger, a geometric moment of the image, or a moment of inertia of the image). Based on these gesture feature matrices, the electronic device may determine whether a first mid-air gesture of the user relative to the electronic device is recognized.
It should be noted that, for descriptions of the image recognition algorithm, image binarization processing, and image feature extraction, reference may be made to the prior art; details are not repeated herein in the embodiments of the present application.
Optionally, in this embodiment of the application, if any of the gesture feature matrices matches the first preset gesture feature matrix (that is, the similarity between that matrix and the first preset gesture feature matrix is greater than or equal to a preset threshold), the user may be considered to have performed the first mid-air gesture relative to the electronic device, and the electronic device may therefore execute, in response to the first mid-air gesture, the target operation corresponding to it.
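As an illustration of the matching step described above, the sketch below compares a candidate gesture feature matrix against a preset matrix. The patent does not specify the similarity measure, matrix shape, or threshold; the cosine similarity, the 3×2 matrices, and the 0.9 threshold used here are all assumptions for illustration only.

```python
import numpy as np

def matches_preset(candidate: np.ndarray, preset: np.ndarray,
                   threshold: float = 0.9) -> bool:
    """Return True if the candidate gesture feature matrix is similar
    enough to the preset matrix (cosine similarity; hypothetical metric)."""
    a, b = candidate.ravel(), preset.ravel()
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold

# Hypothetical feature matrices: rows could correspond to finger joints,
# columns to features such as a bend angle and a geometric moment.
preset = np.array([[30.0, 0.5], [45.0, 0.7], [10.0, 0.2]])
candidate = np.array([[31.0, 0.5], [44.0, 0.7], [11.0, 0.2]])

print(matches_preset(candidate, preset))  # near-identical matrices match
```

In practice the preset matrix would be extracted in advance from the gesture the user selected, and the candidate from each frame's hand image.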
It should be noted that a "mid-air gesture" may be understood as a gesture performed by the user relative to the camera of the electronic device without touching the electronic device.
Optionally, in this embodiment of the application, the first mid-air gesture may be a default gesture of the electronic device, a gesture selected in advance by the user from a plurality of mid-air gestures preset in the electronic device, or a gesture customized by the user for triggering the electronic device to control the camera.
It will be appreciated that the user can perform a mid-air gesture input without touching the electronic device (for example, from a distance), so that the electronic device can interrupt (or stop) video shooting upon recognizing the first mid-air gesture without the user having to move within reach of the device.
Optionally, in this embodiment of the application, the first mid-air gesture may include a plurality of sub-gestures.
Optionally, in this embodiment of the application, in the case that the first mid-air gesture includes a plurality of sub-gestures, it may include a first sub-gesture and a second sub-gesture, where the first sub-gesture corresponds to one target operation (for example, starting video shooting) and the second sub-gesture corresponds to another target operation (for example, interrupting or stopping video shooting).
Step 102, the electronic device deletes a first video frame sequence from the video captured by the camera to obtain a first video.
In an embodiment of the present application, the first video frame sequence is the sequence of video frames, in the video captured by the camera, that contains the gesture trajectory of the first mid-air gesture.
It is understood that between the moment the user starts the mid-air gesture input and the moment the electronic device recognizes the first mid-air gesture, the camera is still shooting, so the captured video may contain a video frame sequence the user does not want (the first video frame sequence in the following embodiments); the electronic device can therefore delete that sequence to obtain a video containing only the frames the user wants.
Optionally, in this embodiment of the application, the starting video frame of the first video frame sequence is the first video frame in the video that contains the gesture trajectory of the first mid-air gesture, and the ending video frame of the first video frame sequence is the video frame captured by the camera at the moment the electronic device recognizes the first mid-air gesture.
It should be noted that the above "first video frame containing the gesture trajectory of the first mid-air gesture" may be understood as the video frame containing the starting point of that gesture trajectory.
Optionally, in this embodiment of the application, the electronic device may determine, as the ending video frame of the first video frame sequence, the video frame corresponding to the first gesture feature matrix among the plurality of gesture feature matrices that matches the first preset gesture feature matrix (that is, the video frame from which the corresponding first image was taken).
Optionally, in this embodiment of the application, after determining the ending video frame, the electronic device may acquire the T video frames preceding it (T being a positive integer), perform image recognition processing on them using an image recognition algorithm to obtain a plurality of third images (each containing at least an image of the user's hand), and determine a plurality of hand images from the third images (one hand image per third image). From these hand images the electronic device may determine a first motion parameter of the user's hand and, based on that parameter, determine the first video frame in the video that contains the gesture trajectory of the first mid-air gesture.
Optionally, in this embodiment of the application, the electronic device may determine the video frame whose first motion parameter matches a preset parameter as the starting video frame of the first video frame sequence.
Optionally, in this embodiment of the application, the first motion parameter may include at least one of: a motion acceleration value and a motion direction.
Optionally, in this embodiment of the application, in the case that the first motion parameter is a motion acceleration value, the electronic device may determine the video frame whose motion acceleration value is greater than or equal to a preset acceleration threshold as the starting video frame of the first video frame sequence; in the case that the first motion parameter is a motion direction, the electronic device may determine the video frame whose motion direction is the same as a preset direction as the starting video frame of the first video frame sequence.
It should be noted that the above "motion direction is the same as the preset direction" may be understood as: the motion direction is exactly the same as the preset direction, or the angular difference between the motion direction and the preset direction is less than or equal to a preset angle threshold.
It can be understood that when the user starts a mid-air gesture input, the user moves the hand (for example, raises it) and moves parts of it (for example, the fingers) to form the gesture, so the first motion parameter of the hand differs from the motion parameter before the input started; thus, when the first motion parameter matches the preset parameter, the user may be considered to have started the mid-air gesture input.
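The start-frame detection described above can be sketched as follows. The per-frame motion samples, the acceleration threshold, the preset direction, and the angle tolerance are all hypothetical values; the patent leaves the concrete thresholds and units unspecified.

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    frame_index: int
    acceleration: float   # magnitude of hand acceleration (hypothetical units)
    direction_deg: float  # hand movement direction in degrees

def find_start_frame(samples, accel_threshold=2.0,
                     preset_direction_deg=90.0, angle_tolerance_deg=15.0):
    """Return the index of the first frame whose motion parameters match
    the preset, i.e. the assumed start of the mid-air gesture input."""
    for s in samples:
        accel_match = s.acceleration >= accel_threshold
        # Angular difference on a circle, then compared to the tolerance.
        diff = abs(s.direction_deg - preset_direction_deg) % 360.0
        direction_match = min(diff, 360.0 - diff) <= angle_tolerance_deg
        if accel_match or direction_match:
            return s.frame_index
    return None

# Toy samples for the T frames preceding the ending video frame.
samples = [
    MotionSample(100, 0.3, 10.0),   # hand roughly at rest
    MotionSample(101, 0.4, 20.0),
    MotionSample(102, 2.5, 85.0),   # hand lifts: acceleration spikes upward
    MotionSample(103, 1.8, 88.0),
]
print(find_start_frame(samples))  # frame where the gesture is assumed to begin
```

In a real implementation the motion samples would be derived from the hand images extracted from those T frames, e.g. by tracking the hand centroid across frames.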
In the embodiments of the application, upon recognizing the first mid-air gesture, the electronic device can directly delete the first video frame sequence from the video captured by the camera (that is, the frames from the first video frame containing the gesture trajectory of the first mid-air gesture up to the frame captured at the moment of recognition), obtaining a video containing only the frames the user wants; since no additional user inputs are needed, the efficiency of video shooting on the electronic device is improved.
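The deletion itself amounts to removing an inclusive range of frames from the captured video. A minimal sketch, assuming the start and end indices have already been determined by the detection steps described above:

```python
def delete_gesture_frames(frames, start_index, end_index):
    """Remove the frame sequence [start_index, end_index] (inclusive) that
    contains the gesture trajectory, returning the remaining video frames."""
    if not (0 <= start_index <= end_index < len(frames)):
        raise ValueError("invalid frame range")
    return frames[:start_index] + frames[end_index + 1:]

# Toy "video": each string stands in for one captured frame.
video = ["f0", "f1", "f2_gesture", "f3_gesture", "f4_gesture", "f5"]
first_video = delete_gesture_frames(video, 2, 4)
print(first_video)  # the frames containing the gesture trajectory are gone
```

A production implementation would operate on an encoded video container rather than an in-memory frame list, but the range deletion is the same.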
In the embodiments of the application, when users need to shoot videos of themselves alone, they can place the electronic device at a position from which they can be filmed, move to the shooting position, and perform a mid-air gesture input (that is, the first sub-gesture) relative to the device, so that the device shoots their video at that position. If the user needs to interrupt (or stop) video shooting, the user can perform another mid-air gesture input (that is, the second sub-gesture) relative to the device, so that the device executes the target operation corresponding to the first mid-air gesture upon recognizing it.
The embodiments of the application provide a video shooting method in which, during video shooting, if the electronic device recognizes a first mid-air gesture, it can execute a target operation (that is, start, interrupt, or stop video shooting) according to that gesture and delete from the captured video the first video frame sequence containing the gesture trajectory, obtaining a first video. Because the user can perform the first mid-air gesture relative to the electronic device to make it start, interrupt, or stop video shooting and delete the unwanted frame sequence from the captured video, the user neither needs to walk to where the device is placed and operate it nor needs to edit the captured video, which improves the convenience of video shooting on the electronic device.
Optionally, in this embodiment of the application, the first mid-air gesture is a gesture selected in advance by the user from a plurality of mid-air gestures preset in the electronic device. Specifically, referring to fig. 1, as shown in fig. 2, before step 101, the video shooting method according to the embodiment of the present application may further include steps 201 to 205 described below.
Step 201, the electronic device displays a gesture setting interface.
In this embodiment of the application, the gesture setting interface includes X identifiers, where each of the X identifiers corresponds to one target operation and X is a positive integer.
In the embodiments of the application, while the electronic device runs the first application in the foreground, the user can perform an input on a settings option in the interface of the first application, so that the electronic device updates the interface to a settings interface; the user can then perform an input on the "smart gesture remote control" setting option in that interface, so that the electronic device displays the gesture setting interface.
Optionally, in this embodiment of the application, when the electronic device displays the gesture setting interface, the user may perform an input on the "smart gesture remote control" option, so that the electronic device displays the X identifiers in the gesture setting interface.
The electronic device is taken as a mobile phone for illustration. As shown in fig. 3, the mobile phone displays a gesture setting interface (e.g., interface 10), the interface 10 includes a "smart gesture remote control" option 11, and the user can input the "smart gesture remote control" option 11, so that the mobile phone can display X identifiers (e.g., a "pause gesture" identifier 12, a "start gesture" identifier 13, and an "end gesture" identifier 14) in the interface 10.
Optionally, in this embodiment of the application, for each target operation in the X types of target operations, one type of target operation may be any of the following: interrupting video shooting, starting video shooting, ending video shooting and adjusting video shooting parameters.
Optionally, in this embodiment of the application, each of the X identifiers may be any one of the following: the name of a target operation, an icon of a target operation, or the like.
Step 202, the electronic device receives a fourth input of the first identifier of the X identifiers from the user.
In this embodiment of the application, the user may perform a fourth input on a first identifier among the X identifiers, so that the electronic device displays at least one gesture identifier; the user may then perform an input on a target gesture identifier among the at least one gesture identifier, so that the electronic device establishes a mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier.
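The mapping this settings flow establishes can be sketched as a simple lookup from a gesture identifier to the target operation it triggers. The gesture names and operation strings below are hypothetical placeholders, not identifiers taken from the patent:

```python
from typing import Optional

# Hypothetical mapping from a gesture identifier to its bound target
# operation; the settings interface fills this in when the user picks
# a target gesture identifier for a given operation identifier.
gesture_bindings: dict = {}

def bind_gesture(gesture_id: str, operation: str) -> None:
    """Establish the mapping between a gesture and a target operation."""
    gesture_bindings[gesture_id] = operation

def operation_for(gesture_id: str) -> Optional[str]:
    """Look up the target operation bound to a recognized gesture, if any."""
    return gesture_bindings.get(gesture_id)

# For example, the user binds a fist to "pause" and an open palm to "start".
bind_gesture("fist", "interrupt video shooting")
bind_gesture("open_palm", "start video shooting")
print(operation_for("fist"))
```

At recognition time (step 101), the electronic device would consult this mapping to decide which target operation the recognized first mid-air gesture corresponds to.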
Optionally, in this embodiment of the application, the fourth input may specifically be a click input of the first identifier by the user.
Step 203, the electronic device displays at least one gesture identification in response to the fourth input.
Optionally, in this embodiment of the application, in response to the fourth input, the electronic device may update the gesture setting interface to a gesture selection interface, where the gesture selection interface includes at least one gesture identifier, so as to display the at least one gesture identifier.
Illustratively, in conjunction with FIG. 3, the user may perform the fourth input on a first identifier among the "pause gesture" identifier 12, the "start gesture" identifier 13, and the "end gesture" identifier 14 (for example, the "pause gesture" identifier 12); as shown in fig. 4, after the user performs the fourth input, the cell phone may update the interface 10 to a gesture selection interface (e.g., interface 15) including at least one gesture identifier (e.g., identifier 16, identifier 17, and identifier 18), where each of identifiers 16, 17, and 18 indicates a different gesture.
As a further example, in conjunction with FIG. 3, the user may perform the fourth input on the "start gesture" identifier 13; as shown in fig. 5, after the user performs the fourth input, the cell phone may update the interface 10 to a gesture selection interface (e.g., interface 19) including at least one gesture identifier (e.g., identifier 20, identifier 21, and identifier 22), where each of identifiers 20, 21, and 22 indicates a different gesture.
As a further example, in conjunction with FIG. 3, the user may perform the fourth input on the "end gesture" identifier 14; as shown in fig. 6, after the user performs the fourth input, the cell phone may update the interface 10 to a gesture selection interface (e.g., interface 24) including at least one gesture identifier (e.g., identifier 25, identifier 26, and identifier 27), where each of identifiers 25, 26, and 27 indicates a different gesture.
Optionally, in this embodiment of the application, the at least one gesture identifier may be, for example, at least one gesture icon or at least one gesture name.
In the embodiment of the present application, each gesture identifier among the at least one gesture identifier indicates one gesture.
Step 204, the electronic device receives a fifth input of the user on a target gesture identifier among the at least one gesture identifier.
Optionally, in this embodiment of the application, the target gesture identifier may include one gesture identifier or multiple gesture identifiers.
Optionally, in this embodiment of the application, in a case that the target gesture identifier includes one gesture identifier, the fifth input may specifically be a click input of the user on the target gesture identifier; in a case that the target gesture identifier includes a plurality of gesture identifiers, the fifth input may specifically be click inputs performed by the user on the plurality of gesture identifiers in sequence, or a click input performed by the user on a "select all" control in the gesture selection interface.
Step 205, the electronic device, in response to the fifth input, establishes a mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier.
In this embodiment of the application, the first air gesture includes the gesture indicated by the target gesture identifier, and the target operation corresponding to the first air gesture includes the target operation indicated by the first identifier.
Optionally, in this embodiment of the application, in a case that the target gesture identifier includes a plurality of gesture identifiers, the electronic device may, for each gesture identifier among the plurality of gesture identifiers, establish a mapping relationship between the gesture indicated by that gesture identifier and the target operation indicated by the first identifier, thereby establishing mapping relationships between the plurality of gestures indicated by the plurality of gesture identifiers and the target operation indicated by the first identifier.
It can be understood that, after the electronic device establishes the mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier, when the electronic device recognizes that gesture, the electronic device may execute the target operation indicated by the first identifier.
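As a minimal sketch, the mapping relationship described above can be modeled as a lookup table from gestures to operations; all names below (gesture ids, operation names, function names) are illustrative assumptions, not the embodiment's actual implementation:

```python
# Illustrative sketch only: the gesture-to-operation "mapping relationship"
# as a lookup table. Recognizing a gesture triggers the mapped operation.

gesture_to_operation = {}

def establish_mapping(gesture_id, operation):
    """Record that recognizing `gesture_id` triggers `operation`."""
    gesture_to_operation[gesture_id] = operation

def on_gesture_recognized(gesture_id):
    """Return the operation mapped to a recognized gesture, or None."""
    return gesture_to_operation.get(gesture_id)

# e.g. the fifth input selects the "ok" gesture for "pause video shooting"
establish_mapping("ok", "pause_video_shooting")
```

A gesture with no mapping simply yields no operation, which matches the behavior of the device ignoring unconfigured gestures.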
Optionally, in this embodiment of the application, each mapping relationship corresponds to one operation mode of the target operation indicated by the first identifier, where the operation mode may be any one of the following: a delayed operation mode, a delete-video-frame-sequence operation mode, and a direct operation mode.
It should be noted that the above "delayed operation mode" can be understood as: executing the target operation after a delay of the duration corresponding to the number indicated by the gesture. The above "delete-video-frame-sequence operation mode" can be understood as: deleting, from the video captured by the camera, the video frame sequence that includes the gesture trajectory of the gesture, and then executing the target operation. The above "direct operation mode" can be understood as: executing the target operation directly, without processing the video captured by the camera.
For example, assume the plurality of mapping relationships includes mapping relationship 1 and mapping relationship 2. Mapping relationship 1 maps one gesture indicated by one gesture identifier (for example, a "fist" gesture) to the target operation indicated by the first identifier (for example, stopping video shooting), and corresponds to one operation mode of the target operation (for example, the direct operation mode). Mapping relationship 2 maps another gesture indicated by another gesture identifier (for example, an "ok" gesture) to the target operation indicated by the first identifier, and corresponds to another operation mode (for example, the delete-video-frame-sequence operation mode). When the electronic device recognizes the "fist" gesture, the electronic device may directly stop video shooting without processing the video captured by the camera; when the electronic device recognizes the "ok" gesture, the electronic device may stop video shooting and delete, from the video captured by the camera, the video frame sequence that includes the gesture trajectory of the "ok" gesture.
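The three operation modes above can be expressed as a small dispatcher that decides the execution delay and which frames to keep; the mode names, frame representation, and function signature below are illustrative assumptions:

```python
def apply_operation_mode(mode, frames, gesture_frames=None, delay_seconds=0):
    """Return (delay_before_executing, frames_to_keep) for a recognized gesture.

    mode: "direct"  - execute the target operation immediately, video untouched
          "delayed" - execute after the delay the gesture's number indicates
          "delete"  - execute immediately, first dropping the frames that
                      contain the gesture trajectory
    """
    if mode == "delayed":
        return delay_seconds, list(frames)
    if mode == "delete":
        gesture_frames = gesture_frames or set()
        return 0, [f for f in frames if f not in gesture_frames]
    return 0, list(frames)  # "direct": no delay, no video processing
```

In the example above, mapping relationship 1 would call this with `mode="direct"` and mapping relationship 2 with `mode="delete"`.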
Optionally, in this embodiment of the application, the electronic device may determine a gesture feature matrix corresponding to the gesture indicated by the target gesture identifier as a preset gesture feature matrix, so as to establish a mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier.
It can be understood that the electronic device may recognize whether the air gesture of the user relative to the electronic device is a preset air gesture (e.g., a first air gesture) according to a preset gesture feature matrix (e.g., a first preset gesture feature matrix) corresponding to the gesture indicated by the target gesture identifier.
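The matching step described here — comparing a captured gesture feature matrix against a preset one and accepting matches above a similarity threshold — might be sketched as follows; cosine similarity over flattened matrices and the threshold value are assumptions, since the embodiment does not fix a particular metric:

```python
import math

SIMILARITY_THRESHOLD = 0.9  # the "preset threshold"; the value is assumed

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length, flattened feature matrices."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def matches_preset(feature_matrix, preset_matrix,
                   threshold=SIMILARITY_THRESHOLD):
    """True when the captured gesture matches the preset air gesture."""
    return cosine_similarity(feature_matrix, preset_matrix) >= threshold
```

Establishing a mapping relationship then amounts to storing the selected gesture's feature matrix as the preset matrix against which later captures are compared.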
Optionally, in this embodiment of the application, after the electronic device establishes the mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier, the electronic device may update the gesture selection interface to be the gesture setting interface, and mark the first identifier in a first marking manner.
Optionally, in this embodiment of the application, the first marking manner may include at least one of: a dotted frame marking mode, a highlight marking mode, a color marking mode, a gray marking mode, a prompt information marking mode, a preset transparency marking mode, a flashing marking mode and the like.
For example, referring to fig. 4, as shown in fig. 7, after the user performs the fifth input on the target gesture identifier, the mobile phone may update the interface 15 to the interface 10 and mark the first identifier (i.e., the "pause gesture" identifier 12) in a prompt-information marking manner (e.g., an "enabled" prompt), that is, display the "enabled" prompt information in the display area of the "pause gesture" identifier 12.
In the embodiment of the application, if a user needs to set an air gesture, the user may trigger the electronic device to display the gesture setting interface and perform an input on the first identifier in the gesture setting interface, so that the electronic device displays at least one gesture identifier; the user may then perform an input on a target gesture identifier among the at least one gesture identifier according to the user's needs (or habits), so that the electronic device establishes a mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier.
In the embodiment of the application, because the electronic device can establish the mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier according to the user's inputs (i.e., the fourth input and the fifth input), the user can set a gesture for the target operation indicated by the first identifier according to the user's needs (or habits). When the user needs to trigger the electronic device to execute the target operation, the user can quickly trigger it through an air gesture that suits the user, without having to recall the gesture for a long time (or move the hand for a long time to form the air gesture). This reduces the time the user spends performing the air gesture, and thus improves the efficiency with which the electronic device executes the target operation.
Optionally, in this embodiment of the application, the first air gesture may be a gesture that the user triggers the electronic device to capture through the camera. Specifically, with reference to fig. 1, as shown in fig. 8, before step 101, the video shooting method according to the embodiment of the present application may further include steps 301 to 305 described below.
Step 301, the electronic device displays a gesture setting interface.
In this embodiment of the application, the gesture setting interface includes X identifiers, each of the X identifiers corresponds to one target operation, and X is a positive integer.
It should be noted that, for the description of the electronic device displaying the gesture setting interface, reference may be made to the specific description in step 201, and details are not repeated here in the embodiment of the present application.
Step 302, the electronic device receives a sixth input of the user to the second identifier of the X identifiers.
Optionally, in this embodiment of the application, the sixth input may specifically be a click input of the second identifier by the user.
Step 303, the electronic device displays the first control in response to the sixth input.
Optionally, in this embodiment of the application, in response to the sixth input, the electronic device may update the gesture setting interface to a gesture selection interface, where the gesture selection interface includes at least one gesture identifier and the first control, so as to display the first control.
Optionally, in this embodiment of the application, the electronic device may display the first control in a blank area in the gesture selection interface.
It should be noted that the above "blank area in the gesture selection interface" may be understood as: the gesture selects an area of the interface where no content is displayed.
In this embodiment, the first control is used to control the camera to capture a user-defined gesture.
It should be noted that the "custom gesture" may be understood as: a gesture that the user triggers the electronic device to capture according to the user's needs (or habits).
For example, as shown in fig. 9, after the user makes the sixth input, the mobile phone may update the interface 10 to a gesture selection interface (e.g., the interface 28) including at least one gesture identifier (e.g., the identifier 29, the identifier 30, and the identifier 31) and a first control (e.g., the control 32), where the control 32 is used to control the camera to capture the custom gesture.
Step 304, the electronic device receives a seventh input of the user on the first control.
Optionally, in this embodiment of the application, the seventh input is specifically a click input of the first control by the user.
Step 305, the electronic device, in response to the seventh input, acquires the custom gesture and establishes a mapping relationship between the custom gesture and the target operation indicated by the second identifier.
In this embodiment of the application, the custom gesture is a gesture captured by the camera or a gesture included in a target picture; the first air gesture includes the custom gesture, and the target operation corresponding to the first air gesture includes the target operation indicated by the second identifier.
Optionally, in this embodiment of the application, when the custom gesture is a gesture captured by the camera, the electronic device, in response to the seventh input, may update the gesture selection interface to a gesture capture interface and turn on the camera; the user may then perform an input on a shooting control in the gesture capture interface, so that the electronic device captures the user's custom gesture through the camera and establishes a mapping relationship between the custom gesture and the target operation indicated by the second identifier.
Optionally, in this embodiment of the application, when the custom gesture is a gesture included in a target picture, the electronic device, in response to the seventh input, may update the gesture selection interface to an image selection interface that includes at least one image saved on the electronic device; the user may then perform an input on a target image among the at least one image, so that the electronic device determines the user's custom gesture from the target image and establishes a mapping relationship between the custom gesture and the target operation indicated by the second identifier.
Optionally, in this embodiment of the application, the electronic device may determine the gesture feature matrix corresponding to the user-defined gesture as a preset gesture feature matrix, so as to establish a mapping relationship between the user-defined gesture and the target operation indicated by the second identifier.
Optionally, in this embodiment of the application, after the electronic device establishes the mapping relationship between the custom gesture and the target operation indicated by the second identifier, the electronic device may update the gesture capture interface (or the image selection interface) to the gesture selection interface, where the gesture selection interface includes a first gesture identifier, and the first gesture identifier indicates the custom gesture.
Illustratively, in conjunction with fig. 9, as shown in fig. 10, after the cell phone establishes a mapping relationship between the custom gesture and the target operation indicated by the second identifier, the cell phone may update the gesture capture interface (or the image selection interface) to the interface 28, where the first gesture identifier (e.g., identifier 33) is included in the interface 28.
In the embodiment of the application, if the user needs to set a custom gesture, the user may trigger the electronic device to display the gesture setting interface and perform an input on the first control, so that the electronic device captures the user's custom gesture (or determines the custom gesture from an image selected by the user) and establishes a mapping relationship between the custom gesture and the target operation indicated by the second identifier.
In the embodiment of the application, because the electronic device can establish the mapping relationship between the custom gesture and the target operation indicated by the second identifier according to the user's inputs (i.e., the sixth input and the seventh input), the user can trigger the electronic device to capture a custom gesture through the camera according to the user's needs (or habits) (or trigger the electronic device to determine the custom gesture from a saved image selected by the user). When the user needs to trigger the electronic device to execute the target operation, the user can quickly trigger it through an air gesture that suits the user, without having to recall the gesture for a long time (or move the hand for a long time to form the air gesture). This reduces the time the user spends performing the air gesture, and thus improves the efficiency with which the electronic device executes the target operation.
Optionally, in this embodiment of the application, the gesture setting interface includes a second control. Specifically, referring to fig. 2, as shown in fig. 11, after the step 201, the video shooting method according to the embodiment of the present application may further include the following steps 401 and 402.
It should be noted that the above steps 202 to 205 may be replaced by the following steps 401 and 402.
Step 401, the electronic device receives an eighth input of the user on the second control.
In this embodiment of the present application, the second control is used for setting a target operation.
Optionally, in this embodiment of the application, the electronic device may display the second control in a blank area in the gesture setting interface.
Illustratively, as shown in fig. 12, the mobile phone displays a gesture setting interface (e.g., interface 34), the interface 34 includes a "smart gesture remote control" option 35, and the user can input the "smart gesture remote control" option 35, so that the mobile phone can display X identifiers (e.g., a "pause gesture" identifier 36, a "start gesture" identifier 37, and an "end gesture" identifier 38) and a second control (e.g., control 39) in the interface 34, so that the user can make an eighth input to the control 39.
Optionally, in this embodiment of the application, the eighth input may include one input or multiple inputs.
Optionally, in this embodiment of the application, when the eighth input includes one input, the eighth input may specifically be a click input of the second control by the user.
Optionally, in this embodiment of the application, in a case that the eighth input includes multiple inputs, the eighth input may include a first sub-input, a second sub-input, and a third sub-input. The first sub-input is a click input of the user on the second control, and is used to update the gesture setting interface to a control operation selection interface, where the control operation selection interface includes Y fourth identifiers and each fourth identifier corresponds to one type of first operation. The second sub-input is a click input of the user on a third identifier among the Y fourth identifiers, and is used to update the control operation selection interface to a gesture selection interface, where the gesture selection interface includes at least one gesture identifier. The third sub-input is a click input of the user on a second gesture identifier among the at least one gesture identifier. Y is a positive integer.
Optionally, in this embodiment of the present application, the Y types of first operations include at least one of: decreasing the camera's focal length parameter, increasing the camera's focal length parameter, decreasing the camera's exposure parameter, switching cameras, and the like.
Step 402, in response to the eighth input, the electronic device adds a third identifier in the gesture setting interface, and establishes a mapping relationship between the gesture indicated by the second gesture identifier and the first operation indicated by the third identifier.
In an embodiment of the application, the third identifier is an identifier of the first operation corresponding to the eighth input, and the second gesture identifier is an identifier of the gesture corresponding to the eighth input.
Optionally, in this embodiment of the application, in response to the eighth input, the electronic device may establish a mapping relationship between the gesture indicated by the second gesture identifier and the first operation indicated by the third identifier, and update the gesture selection interface to the gesture setting interface, so as to add the third identifier to the gesture setting interface.
For example, in conjunction with fig. 12, as shown in fig. 13, after the user makes an eighth input to the control 39, the cell phone may establish a mapping relationship between the gesture indicated by the second gesture identifier and the first operation indicated by the third identifier, and update the gesture selection interface to the interface 34 to add the third identifier (e.g., the identifier 40) to the interface 34.
In the embodiment of the application, when a user needs to set a first operation of the camera, the user can perform an input on the second control in the gesture setting interface, so that the electronic device updates the gesture setting interface to the control operation selection interface. The user may then select the identifier of the first operation the user desires (i.e., the third identifier) in the control operation selection interface, so that the electronic device updates the control operation selection interface to the gesture selection interface. The user may then select a gesture identifier the user desires in the gesture selection interface, so that the electronic device establishes a mapping relationship between the gesture indicated by that gesture identifier and the first operation indicated by the third identifier, and updates the gesture selection interface to the gesture setting interface so as to add the third identifier to the gesture setting interface.
In the embodiment of the application, because the electronic device can add the third identifier (i.e., the identifier of the first operation corresponding to the eighth input) to the gesture setting interface according to the user's eighth input on the second control, the user can set a gesture for the first operation according to the user's needs. When the user needs to trigger the electronic device to execute the first operation, the user can quickly trigger it through an air gesture, without having to walk to where the electronic device is placed and operate it directly, so the efficiency with which the electronic device executes the operation can be improved.
Optionally, in this embodiment of the present application, the target operation includes: interrupting or stopping video shooting. Specifically, after the step 102, the video shooting method according to the embodiment of the present application may further include the following step 501.
Step 501, if a second air gesture of the user relative to the electronic device is recognized, the electronic device, in response to the second air gesture, deletes a second video frame sequence corresponding to the second air gesture from the first video to obtain a second video.
Optionally, in this embodiment of the application, after obtaining the first video, the electronic device may control the camera to capture video, display the captured video in the interface of the first application, and perform image recognition processing on the captured video to obtain a plurality of gesture feature matrices, so that the electronic device may determine, according to the plurality of gesture feature matrices, whether a second air gesture of the user relative to the electronic device is recognized.
It should be noted that, for the description of the electronic device performing image recognition processing on the captured video to obtain the plurality of gesture feature matrices, reference may be made to the specific description in the foregoing embodiments, and details are not repeated herein.
Optionally, in this embodiment of the application, if any gesture feature matrix among the plurality of gesture feature matrices matches the second preset gesture feature matrix (that is, the similarity between that gesture feature matrix and the second preset gesture feature matrix is greater than or equal to the preset threshold), it may be considered that the user has performed the second air gesture relative to the electronic device, so that the electronic device may, in response to the second air gesture, delete the second video frame sequence corresponding to the second air gesture from the first video to obtain the second video.
Optionally, in this embodiment of the application, the second air gesture may be a default gesture of the electronic device, for example, a gesture of the user tracing a digit with both hands (or crossing both hands and then making a "V" gesture with one hand).
It can be understood that, if the electronic device recognizes a second air gesture of the user relative to the electronic device, it may be considered that the user needs the electronic device to delete some video frame sequence from the first video. Therefore, the electronic device may determine, from the first video, the video frame sequence corresponding to the second air gesture (i.e., the second video frame sequence) and delete it to obtain the second video the user needs.
Optionally, in this embodiment of the application, if the second air gesture indicates a first number N, and the first number N indicates a first preset duration, the second video frame sequence is: the video frame sequence captured within the first preset duration before the target time, where the target time is the time at which the electronic device recognizes the second air gesture.
Illustratively, assume the second air gesture is a gesture of tracing the number 3 with both hands, and this gesture indicates a first number (i.e., the number 3), where the number 3 indicates 3 seconds. The second video frame sequence is then the video frame sequence captured within the 3 seconds before the time at which the electronic device recognized the gesture; that is, the electronic device may delete the video frames captured in those 3 seconds (i.e., "backspace" 3 seconds).
Optionally, in this embodiment of the application, the electronic device may determine the time at which a gesture feature matrix among the plurality of gesture feature matrices matches the second preset gesture feature matrix (that is, the gesture feature matrix corresponding to the second air gesture) as the time at which the electronic device recognizes the second air gesture (i.e., the target time). The electronic device may then determine the video frame sequence captured within the first preset duration before the target time as the second video frame sequence, and delete it to obtain the second video the user needs.
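The "backspace" deletion just described — dropping the frames captured within the first preset duration before the target time — might be sketched as follows; the timestamped-frame representation and names are assumptions:

```python
def backspace_delete(frames, target_time, backspace_seconds):
    """Drop frames captured within `backspace_seconds` before `target_time`.

    `frames` is a list of (timestamp_in_seconds, frame) pairs; frames
    captured before the cutoff are kept, the "backspaced" span is removed.
    """
    cutoff = target_time - backspace_seconds
    return [(t, f) for (t, f) in frames if t < cutoff]
```

With a "number 3" gesture recognized at `target_time`, `backspace_seconds` would be 3, matching the example above.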
Optionally, in this embodiment of the application, the second air gesture includes a target sub-gesture, and the second video frame sequence is: the video frames in the first video that contain the gesture trajectory of the target sub-gesture.
Illustratively, assume the second air gesture is a gesture of crossing both hands and then making a "victory" gesture with one hand. The second video frame sequence is then the video frames in the first video that contain the gesture trajectory of the "victory" gesture; that is, the electronic device may delete the video frames in the first video that contain the "victory" gesture.
Optionally, in this embodiment of the application, the starting video frame of the second video frame sequence is: the first video frame in the video that contains the target sub-gesture; the ending video frame of the second video frame sequence is: the video frame captured by the camera at the time the electronic device recognizes the target sub-gesture.
It should be noted that, for the description that the electronic device determines the ending video frame of the second video frame sequence, reference may be made to the specific description that the electronic device determines the ending video frame of the first video frame sequence in the foregoing embodiment, and details are not described herein again in this embodiment of the application. For the description that the electronic device determines the starting video frame of the second video frame sequence, reference may be made to the specific description that the electronic device determines the starting video frame of the first video frame sequence in the foregoing embodiment, and details are not repeated herein in this embodiment of the application.
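A minimal sketch of deleting the span bounded by the first frame containing the target sub-gesture and the frame at which it was recognized (here taken as the last frame containing it); `contains_gesture` stands in for the image-recognition step, and all names are illustrative assumptions:

```python
def delete_gesture_span(frames, contains_gesture):
    """Remove frames from the first frame containing the target sub-gesture
    through the last one (the frame at which the gesture was recognized).

    `contains_gesture(frame) -> bool` is a placeholder for per-frame
    gesture detection.
    """
    hits = [i for i, frame in enumerate(frames) if contains_gesture(frame)]
    if not hits:
        return list(frames)
    start, end = hits[0], hits[-1]
    return frames[:start] + frames[end + 1:]
```

Everything between the bounding frames is removed as one contiguous sequence, which matches the start/end-frame definition above.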
It can be understood that, after the electronic device obtains the first video, if the user needs to delete a certain video frame sequence from the first video, the user may directly perform an air gesture (i.e., the target sub-gesture) relative to the electronic device, so that when the electronic device recognizes the target sub-gesture, it deletes the video frame sequence corresponding to the target sub-gesture from the first video.
In the embodiment of the application, because the electronic device can directly delete a video frame sequence from the first video when it recognizes the second air gesture of the user relative to the electronic device, the user does not need to walk to where the electronic device is placed and operate it directly. This reduces the time the user spends deleting the video frame sequence from the first video, and thus improves the efficiency with which the electronic device processes the video.
Optionally, in this embodiment of the present application, the target operation includes: interrupting or stopping video shooting. Specifically, after the step 102, the video shooting method according to the embodiment of the present application may further include the following step 601.
Step 601, if a third air gesture of the user relative to the electronic device is recognized, the electronic device, in response to the third air gesture, controls the camera to continue video shooting.
It should be noted that, for the description of the electronic device recognizing the third air gesture of the user relative to the electronic device, reference may be made to the specific description of the electronic device recognizing the first air gesture of the user relative to the electronic device in the foregoing embodiments, and details are not repeated herein.
Optionally, in this embodiment of the application, the third air gesture may be a default gesture of the electronic device.
Optionally, in this embodiment of the application, when the target operation is to interrupt video shooting, the electronic device may perform editing processing on the first video and the video captured after video shooting continues, where the editing processing may include at least one of: adding transition effects, adding audio material, adding filter effects, and the like.
In this embodiment of the application, because the electronic device can directly control the camera to continue video shooting when the third air gesture of the user relative to the electronic device is recognized, the user does not need to move to where the electronic device is placed and provide input to it, so the video shooting efficiency of the electronic device can be improved.
Optionally, in this embodiment of the application, step 601 may be specifically implemented by the following step 601a.
Step 601a: if a third air gesture of the user relative to the electronic device is recognized, and the third air gesture indicates a second number M, where M indicates a second preset duration, the electronic device controls the camera, in response to the third air gesture, to continue video shooting after the second preset duration.
For example, assuming that the third air gesture is a "number 5" gesture, the "number 5" gesture indicates the number 5, and the number 5 indicates 5 seconds, then if the "number 5" gesture of the user relative to the electronic device is recognized, the electronic device may continue video capture after 5 seconds (i.e., with a 5-second delay).
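The delayed-resume behavior described above can be sketched in a few lines. This is a hypothetical illustration rather than the patent's implementation; the gesture names, the delay table, and the injectable `sleep` parameter are all assumptions.

```python
import time

# Assumed mapping from a recognized numeric air gesture to the delay it
# indicates: a "number M" gesture means "resume capture after M seconds".
GESTURE_DELAYS = {"number_3": 3, "number_5": 5}

def resume_delay(gesture: str) -> int:
    """Return the second preset duration indicated by the gesture (0 if none)."""
    return GESTURE_DELAYS.get(gesture, 0)

def resume_capture(gesture: str, sleep=time.sleep) -> str:
    """Wait out the indicated delay, then signal the camera to continue."""
    sleep(resume_delay(gesture))
    return "capturing"
```

Passing a no-op `sleep` makes the timing logic testable without real delays.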
In this embodiment of the application, when the electronic device recognizes a third air gesture of the user relative to the electronic device, and the third air gesture indicates a number that in turn indicates a duration, the electronic device can control the camera to continue video shooting after that duration. The user does not need to move to where the electronic device is placed or provide input multiple times, so the video shooting efficiency of the electronic device can be improved.
It should be noted that the execution subject of the video shooting method provided in this embodiment of the present application may be a video shooting device, or a control module in the video shooting device for executing the video shooting method. In this embodiment, a video shooting device executing the video shooting method is taken as an example to describe the video shooting method provided herein.
Fig. 14 shows a schematic diagram of a possible structure of the video shooting device according to this embodiment of the present application. As shown in fig. 14, the video shooting device 60 may include: an execution module 61 and a deletion module 62.
The execution module 61 is configured to, when a camera of the video shooting device captures video, if a first air gesture of the user relative to the video shooting device 60 is recognized, execute a target operation corresponding to the first air gesture in response to that gesture, where the target operation includes: starting video shooting, interrupting video shooting, or stopping video shooting. The deletion module 62 is configured to delete a first video frame sequence from the video captured by the camera to obtain a first video, where the first video frame sequence is the sequence of video frames, in the video captured by the camera, that contains the gesture trajectory of the first air gesture.
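As a rough sketch of the execution module's role, a recognized first air gesture can be looked up in a table and dispatched to the corresponding target operation. The gesture names and the table itself are illustrative assumptions, not identifiers from the patent.

```python
def make_dispatcher():
    """Build a closure that maps recognized air gestures to target operations."""
    state = {"status": "idle"}

    def start():
        state["status"] = "capturing"

    def interrupt():
        state["status"] = "interrupted"

    def stop():
        state["status"] = "stopped"

    # Assumed gesture-to-operation table; the patent leaves these configurable.
    table = {"thumb_up": start, "palm": interrupt, "fist": stop}

    def on_gesture(gesture):
        op = table.get(gesture)
        if op is not None:
            op()
        return state["status"]

    return on_gesture
```

An unrecognized gesture leaves the capture state unchanged, matching the idea that only mapped air gestures trigger operations.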
In a possible implementation, the starting video frame of the first video frame sequence is the first video frame in the video containing the gesture trajectory of the first air gesture, and the ending video frame is the video frame captured by the camera at the moment the video shooting device recognizes the first air gesture.
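Under these definitions, deleting the first video frame sequence amounts to cutting out the frames from the first frame containing the gesture trajectory up to the frame captured at recognition time. A minimal sketch, with frames modeled as a list and both indices assumed already known:

```python
def delete_gesture_frames(frames, first_gesture_idx, recognition_idx):
    """Remove frames [first_gesture_idx, recognition_idx] (inclusive), i.e.
    the first video frame sequence containing the air gesture's trajectory,
    and return the remaining frames as the first video."""
    return frames[:first_gesture_idx] + frames[recognition_idx + 1:]
```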
In one possible implementation, the target operation includes: interrupting or stopping video capture. The deletion module 62 is further configured to, after deleting the first video frame sequence from the video captured by the camera to obtain the first video, if a second air gesture of the user relative to the video shooting device is recognized, delete a second video frame sequence corresponding to the second air gesture from the first video in response to that gesture, to obtain a second video.
In one possible implementation, if the second air gesture indicates a first number N, and N indicates a first preset duration, the second video frame sequence is the sequence of video frames acquired within the first preset duration before the target moment, where the target moment is the moment at which the electronic device recognizes the second air gesture.
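This "delete the last N seconds" rule reduces to an index computation over the frame list. A hedged sketch, where the 30 fps frame rate and index arithmetic are assumptions for illustration only:

```python
def delete_last_seconds(frames, target_idx, n_seconds, fps=30):
    """Drop the frames acquired within n_seconds before the target moment,
    where target_idx is the frame index at which the second air gesture
    was recognized (assumed frame rate: fps)."""
    start = max(0, target_idx - n_seconds * fps)
    return frames[:start] + frames[target_idx:]
```

For example, a "number 2" gesture recognized at frame 90 of a 30 fps clip removes frames 30 through 89.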
In a possible implementation, the second air gesture includes a target sub-gesture, and the second video frame sequence is the sequence of video frames, in the first video, that contains the gesture trajectory of the target sub-gesture.
In one possible implementation, the target operation includes: interrupting or stopping video capture. The video shooting device 60 provided in this embodiment may further include a control module. The control module is configured to, after the deletion module 62 deletes the first video frame sequence from the video captured by the camera to obtain the first video, if a third air gesture of the user relative to the video shooting device is recognized, control the camera to continue video shooting in response to the third air gesture.
In a possible implementation, the control module is specifically configured to, if the third air gesture indicates a second number M, where M indicates a second preset duration, control the camera to continue video shooting after the second preset duration, M being a positive integer.
In a possible implementation, the video shooting device 60 provided in this embodiment may further include: a display module, a receiving module, and an establishing module. The display module is configured to, when the camera of the video shooting device captures video, if a first air gesture of the user relative to the video shooting device is recognized, display a gesture setting interface in response to that gesture and before the corresponding target operation is executed, where the gesture setting interface includes X identifiers, one identifier corresponds to one target operation, and X is a positive integer. The receiving module is configured to receive a fourth input from the user on a first identifier among the X identifiers displayed by the display module. The display module is further configured to display at least one gesture identifier in response to the fourth input received by the receiving module. The receiving module is further configured to receive a fifth input from the user on a target gesture identifier among the at least one gesture identifier displayed by the display module. The establishing module is configured to, in response to the fifth input received by the receiving module, establish a mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier, where the first air gesture includes the gesture indicated by the target gesture identifier, and the target operation corresponding to the first air gesture includes the target operation indicated by the first identifier.
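The fourth-/fifth-input flow above boils down to storing a user-chosen mapping from a gesture identifier to a target-operation identifier. A minimal sketch with illustrative identifier strings (none taken from the patent):

```python
class GestureSettings:
    """Holds user-configured mappings from gesture identifiers to the
    target operations they trigger."""

    def __init__(self):
        self._mapping = {}

    def bind(self, gesture_id: str, operation_id: str) -> None:
        """Establish the mapping chosen via the fourth and fifth inputs."""
        self._mapping[gesture_id] = operation_id

    def operation_for(self, gesture_id: str):
        """Look up the target operation a recognized gesture should trigger."""
        return self._mapping.get(gesture_id)
```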
In a possible implementation, the display module is further configured to, when the camera of the video shooting device captures video, if a first air gesture of the user relative to the video shooting device is recognized, display a gesture setting interface in response to that gesture and before the corresponding target operation is executed, where the gesture setting interface includes X identifiers, one identifier corresponds to one target operation, and X is a positive integer. The video shooting device 60 provided in this embodiment may further include a receiving module. The receiving module is configured to receive a sixth input from the user on a second identifier among the X identifiers. The display module is further configured to display a first control in response to the sixth input received by the receiving module. The receiving module is further configured to receive a seventh input from the user on the first control displayed by the display module. The establishing module is further configured to obtain a user-defined gesture in response to the seventh input received by the receiving module, and establish a mapping relationship between the user-defined gesture and the target operation indicated by the second identifier. The user-defined gesture is a gesture collected by the camera or a gesture contained in a target picture, the first air gesture includes the user-defined gesture, and the target operation corresponding to the first air gesture includes the target operation indicated by the second identifier.
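Custom-gesture registration, as described, takes the gesture either from the camera or from a target picture and binds it to the operation the second identifier indicates. A sketch under those assumptions, with all names hypothetical:

```python
def register_custom_gesture(mapping, operation_id,
                            gesture_from_camera=None,
                            gesture_from_picture=None):
    """Bind a user-defined gesture (captured by the camera or taken from a
    target picture) to the target operation indicated by the chosen
    identifier, and return the updated mapping."""
    gesture = gesture_from_camera or gesture_from_picture
    if gesture is None:
        raise ValueError("a custom gesture must be captured or selected")
    mapping[gesture] = operation_id
    return mapping
```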
In one possible implementation, the gesture setting interface includes a second control. The receiving module is further configured to receive an eighth input from the user on the second control after the display module displays the gesture setting interface. The video shooting device 60 provided in this embodiment may further include an adding module. The adding module is configured to add a third identifier to the gesture setting interface in response to the eighth input received by the receiving module. The establishing module is further configured to establish a mapping relationship between the gesture indicated by a second gesture identifier and the first operation indicated by the third identifier, where the third identifier is the identifier of the first operation corresponding to the eighth input, and the second gesture identifier is the identifier of the gesture corresponding to the eighth input.
The video shooting device in this embodiment of the present application may be a device, or a component, integrated circuit, or chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the non-mobile electronic device may be a server, network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine. The embodiments of the present application are not specifically limited in this regard.
The video shooting device in this embodiment of the present application may be a device having an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this regard.
The video shooting device provided in the embodiment of the present application can implement each process implemented by the video shooting device in the method embodiments of fig. 1 to 13, and is not described herein again to avoid repetition.
The embodiment of the present application provides a video shooting device. Because the user can make a first air gesture relative to the video shooting device so that it starts, interrupts, or stops video shooting, and the device deletes the video frame sequence the user does not need from the captured video to obtain the video the user wants, the user does not need to move to where the video shooting device is placed, provide input to it, and then edit the video it shot. The convenience of video shooting with the video shooting device can therefore be improved.
Optionally, as shown in fig. 15, this embodiment of the present application further provides an electronic device 70, including a processor 72, a memory 71, and a program or instructions stored in the memory 71 and executable on the processor 72. When executed by the processor 72, the program or instructions implement each process of the video shooting method embodiment and can achieve the same technical effect; to avoid repetition, the details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 16 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 110 through a power management system, thereby managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 16 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described here again.
The processor 110 is configured to, when a camera of the electronic device captures video, if a first air gesture of the user relative to the electronic device is recognized, execute a target operation corresponding to the first air gesture in response to that gesture, where the target operation includes: starting video shooting, interrupting video shooting, or stopping video shooting; and delete a first video frame sequence from the video captured by the camera to obtain a first video, where the first video frame sequence is the sequence of video frames, in the video captured by the camera, that contains the gesture trajectory of the first air gesture.
The embodiment of the present application provides an electronic device. Because the user can make a first air gesture relative to the electronic device so that it starts, interrupts, or stops video shooting, and the electronic device deletes the video frame sequence the user does not need from the captured video to obtain the video the user wants, the user does not need to move to where the electronic device is placed, provide input to it, and then edit the video it shot. The convenience of video shooting with the electronic device can therefore be improved.
Optionally, the processor 110 is further configured to, after deleting the first video frame sequence from the video captured by the camera to obtain the first video, if a second air gesture of the user relative to the electronic device is recognized, delete a second video frame sequence corresponding to the second air gesture from the first video in response to that gesture, to obtain a second video.
In this embodiment of the application, when the electronic device recognizes the second air gesture of the user relative to the electronic device, it can directly delete the video frame sequence from the first video without the user moving to where the electronic device is placed and providing input to it. The time the user spends deleting the video frame sequence from the first video is therefore reduced, and the efficiency with which the electronic device processes the video is improved.
Optionally, the processor 110 is further configured to, after deleting the first video frame sequence from the video captured by the camera to obtain the first video, if a third air gesture of the user relative to the electronic device is recognized, control the camera to continue video shooting in response to the third air gesture.
In this embodiment of the application, because the electronic device can directly control the camera to continue video shooting when the third air gesture of the user relative to the electronic device is recognized, the user does not need to move to where the electronic device is placed and provide input to it, so the video shooting efficiency of the electronic device can be improved.
Optionally, the processor 110 is specifically configured to, if the third air gesture indicates a second number M, where M indicates a second preset duration, control the camera to continue video shooting after the second preset duration, M being a positive integer.
In this embodiment of the application, when the electronic device recognizes a third air gesture of the user relative to the electronic device, and the third air gesture indicates a number that in turn indicates a second preset duration, the electronic device controls the camera to continue video shooting after the second preset duration. The user does not need to move to where the electronic device is placed or provide input multiple times, so the video shooting efficiency of the electronic device can be improved.
Optionally, the display unit 106 is configured to, when a camera of the electronic device captures video, if a first air gesture of the user relative to the electronic device is recognized, display a gesture setting interface in response to that gesture and before the corresponding target operation is executed, where the gesture setting interface includes X identifiers, one identifier corresponds to one target operation, and X is a positive integer.
The user input unit 107 is configured to receive a fourth input from the user on the first identifier among the X identifiers.
The display unit 106 is further configured to display at least one gesture identifier in response to a fourth input.
The user input unit 107 is further configured to receive a fifth input of the target gesture identifier of the at least one gesture identifier from the user.
The processor 110 is further configured to, in response to the fifth input, establish a mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier. The first air gesture includes the gesture indicated by the target gesture identifier, and the target operation corresponding to the first air gesture includes the target operation indicated by the first identifier.
In this embodiment of the application, the electronic device can establish the mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier according to the user's inputs (i.e., the fourth input and the fifth input). That is, the user can set a gesture for that target operation according to his or her needs (or habits). When the user needs to trigger the electronic device to execute the target operation, the user can quickly trigger it through the desired (or habitual) air gesture, without recalling for a long time (or moving the hand for a long time to form the air gesture). The time the user spends performing the air gesture can be reduced, and the efficiency with which the electronic device executes the target operation can thereby be improved.
Optionally, the display unit 106 is further configured to, when a camera of the electronic device captures video, if a first air gesture of the user relative to the electronic device is recognized, display a gesture setting interface in response to that gesture and before the corresponding target operation is executed, where the gesture setting interface includes X identifiers, one identifier corresponds to one target operation, and X is a positive integer.
The user input unit 107 is further configured to receive a sixth input of the second identifier of the X identifiers from the user.
The display unit 106 is further configured to display a first control in response to the sixth input.
The user input unit 107 is further configured to receive a seventh input of the first control from the user.
The processor 110 is further configured to, in response to the seventh input, obtain the user-defined gesture and establish a mapping relationship between the user-defined gesture and the target operation indicated by the second identifier.
The user-defined gesture is a gesture collected by the camera or a gesture contained in a target picture, the first air gesture includes the user-defined gesture, and the target operation corresponding to the first air gesture includes the target operation indicated by the second identifier.
In this embodiment of the application, the electronic device can establish the mapping relationship between the user-defined gesture and the target operation indicated by the second identifier according to the user's inputs (i.e., the sixth input and the seventh input). That is, the user can, according to his or her needs (or habits), trigger the electronic device to control the camera to capture a user-defined gesture (or trigger the electronic device to determine the user-defined gesture from an image saved in the electronic device and selected by the user). When the user needs to trigger the electronic device to execute the target operation, the user can quickly trigger it through the desired (or habitual) air gesture, without recalling for a long time (or moving the hand for a long time to form the air gesture). The time the user spends on the air gesture can be reduced, and the efficiency with which the electronic device executes the target operation can thereby be improved.
Optionally, the user input unit 107 is further configured to receive an eighth input to the second control from the user after the gesture setting interface is displayed.
The display unit 106 is further configured to, in response to the eighth input, add a third identifier to the gesture setting interface and establish a mapping relationship between the gesture indicated by a second gesture identifier and the first operation indicated by the third identifier.
The third identifier is an identifier of a target operation corresponding to the eighth input, and the second gesture identifier is an identifier of a gesture corresponding to the eighth input.
In this embodiment of the application, the electronic device can add the third identifier (i.e., the identifier of the first operation corresponding to the eighth input) to the gesture setting interface according to the user's eighth input on the second control. That is, the user can set a gesture for the first operation corresponding to his or her input according to actual needs. When the user needs to trigger the electronic device to execute the first operation, the user can quickly trigger it through an air gesture, without moving to where the electronic device is placed and providing input to it. The efficiency with which the electronic device executes operations can therefore be improved.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A method of video capture, the method comprising:
when a camera of an electronic device performs video shooting, if a first air gesture of a user relative to the electronic device is recognized, executing a target operation corresponding to the first air gesture in response to the first air gesture, wherein the target operation comprises: starting video shooting, interrupting video shooting, or stopping video shooting;
deleting a first video frame sequence in the video collected by the camera to obtain a first video;
wherein the first video frame sequence is: a sequence of video frames, in the video collected by the camera, containing a gesture trajectory of the first air gesture;
the starting video frame of the first video frame sequence is: a first video frame in the video containing the gesture trajectory of the first air gesture; the ending video frame of the first video frame sequence is: a video frame collected by the camera when the electronic device recognizes the first air gesture.
2. The method of claim 1, wherein the target operation comprises: interrupting or stopping video capture;
after the deleting the first video frame sequence in the video acquired by the camera to obtain the first video, the method further includes:
and if a second air-separating gesture of the user relative to the electronic equipment is recognized, responding to the second air-separating gesture, deleting a second video frame sequence corresponding to the second air-separating gesture in the first video, and obtaining a second video.
3. The method of claim 2, wherein if the second air gesture indicates a first number N indicating a first preset duration, the second video frame sequence is: a sequence of video frames acquired within the first preset duration before a target moment, wherein the target moment is the moment at which the electronic device recognizes the second air gesture.
4. The method of claim 2, wherein the second air gesture comprises a target sub-gesture, and the second video frame sequence is: a sequence of video frames, in the first video, containing a gesture trajectory of the target sub-gesture.
5. The method of claim 1, wherein the target operation comprises: interrupting or stopping video capture;
after the deleting the first video frame sequence in the video acquired by the camera to obtain the first video, the method further includes:
and if a third air-separating gesture of the user relative to the electronic equipment is recognized, responding to the third air-separating gesture, and controlling the camera to continue video shooting.
6. The method of claim 5, wherein the controlling the camera to continue video shooting comprises:
if the third air gesture indicates a second number M, and the second number M indicates a second preset duration, controlling the camera to continue video shooting after the second preset duration, wherein M is a positive integer.
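The delayed resume in claim 6 amounts to scheduling the camera-control call after M units of time. A minimal sketch, assuming one timer per recognized gesture (`resume_shooting` stands in for whatever call restarts capture; the unit duration is an assumption):

```python
import threading

# Hypothetical sketch of claim 6: a gesture indicating the positive integer M
# schedules video shooting to continue after M * unit_seconds.

def schedule_resume(resume_shooting, m, unit_seconds=1.0):
    """Invoke `resume_shooting` after m * unit_seconds on a background timer."""
    timer = threading.Timer(m * unit_seconds, resume_shooting)
    timer.start()
    return timer  # caller may cancel() if another gesture intervenes
```

Returning the timer lets the device cancel the pending resume if, say, a stop gesture arrives during the countdown.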
7. The method of claim 1, wherein, before the performing, in response to the first air gesture, the target operation corresponding to the first air gesture if the first air gesture of the user with respect to the electronic device is recognized while a camera of the electronic device performs video shooting, the method further comprises:
displaying a gesture setting interface, wherein the gesture setting interface comprises X identifiers, each identifier corresponds to one target operation, and X is a positive integer;
receiving a fourth input by the user on a first identifier of the X identifiers;
displaying at least one gesture identifier in response to the fourth input;
receiving a fifth input by the user on a target gesture identifier of the at least one gesture identifier;
in response to the fifth input, establishing a mapping relationship between the gesture indicated by the target gesture identifier and the target operation indicated by the first identifier, wherein the first air gesture comprises the gesture indicated by the target gesture identifier, and the target operation corresponding to the first air gesture comprises the target operation indicated by the first identifier.
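The mapping step above boils down to a gesture-to-operation registry that the settings interface writes and the recognizer later reads. A minimal sketch; the class and all identifier strings are illustrative, not from the patent:

```python
# Hypothetical registry behind the gesture setting interface of claim 7:
# the fourth/fifth inputs select an operation identifier and a gesture
# identifier, and bind() records the resulting mapping relationship.

class GestureSettings:
    def __init__(self):
        self._mapping = {}  # gesture identifier -> target operation identifier

    def bind(self, gesture_id, operation_id):
        """Establish the mapping chosen via the settings interface."""
        self._mapping[gesture_id] = operation_id

    def operation_for(self, gesture_id):
        """Look up the target operation when a gesture is recognized."""
        return self._mapping.get(gesture_id)
```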
8. The method of claim 1, wherein, before the performing, in response to the first air gesture, the target operation corresponding to the first air gesture if the first air gesture of the user with respect to the electronic device is recognized while a camera of the electronic device performs video shooting, the method further comprises:
displaying a gesture setting interface, wherein the gesture setting interface comprises X identifiers, each identifier corresponds to one target operation, and X is a positive integer;
receiving a sixth input by the user on a second identifier of the X identifiers;
displaying a first control in response to the sixth input;
receiving a seventh input by the user on the first control;
in response to the seventh input, acquiring a user-defined gesture, and establishing a mapping relationship between the user-defined gesture and the target operation indicated by the second identifier;
wherein the user-defined gesture is a gesture captured by the camera or a gesture contained in a target picture, the first air gesture comprises the user-defined gesture, and the target operation corresponding to the first air gesture comprises the target operation indicated by the second identifier.
9. The method of claim 7, wherein the gesture setting interface comprises a second control;
after the displaying the gesture setting interface, the method further comprises:
receiving an eighth input by the user on the second control;
in response to the eighth input, adding a third identifier to the gesture setting interface, and establishing a mapping relationship between the gesture indicated by a second gesture identifier and a first operation indicated by the third identifier;
wherein the third identifier is the identifier of the first operation corresponding to the eighth input, and the second gesture identifier is the identifier of the gesture corresponding to the eighth input.
10. A video shooting apparatus, wherein the video shooting apparatus comprises: an execution module and a deletion module;
the execution module is configured to, while a camera of the video shooting apparatus performs video shooting, if a first air gesture of a user with respect to the video shooting apparatus is recognized, perform, in response to the first air gesture, a target operation corresponding to the first air gesture, wherein the target operation comprises: starting video shooting, interrupting video shooting, or stopping video shooting;
the deletion module is configured to delete a first video frame sequence from the video acquired by the camera to obtain a first video;
wherein the first video frame sequence is: the video frame sequence in the video acquired by the camera that contains a gesture trajectory of the first air gesture;
the starting video frame of the first video frame sequence is: the first video frame in the video that contains a gesture trajectory of the first air gesture; the ending video frame of the first video frame sequence is: the video frame acquired by the camera at the moment the video shooting apparatus recognizes the first air gesture.
11. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video shooting method of any one of claims 1 to 9.
CN202010622234.5A 2020-06-30 2020-06-30 Video shooting method and device and electronic equipment Active CN111787223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010622234.5A CN111787223B (en) 2020-06-30 2020-06-30 Video shooting method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111787223A CN111787223A (en) 2020-10-16
CN111787223B true CN111787223B (en) 2021-07-16

Family

ID=72760544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010622234.5A Active CN111787223B (en) 2020-06-30 2020-06-30 Video shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111787223B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380990A (en) * 2020-11-13 2021-02-19 咪咕文化科技有限公司 Picture adjusting method, electronic device and readable storage medium
CN112887623B (en) * 2021-01-28 2022-11-29 维沃移动通信有限公司 Image generation method and device and electronic equipment
CN112905008B (en) * 2021-01-29 2023-01-20 海信视像科技股份有限公司 Gesture adjustment image display method and display device
CN115484394B (en) * 2021-06-16 2023-11-14 荣耀终端有限公司 Guide use method of air separation gesture and electronic equipment
CN113794833B (en) * 2021-08-16 2023-05-26 维沃移动通信(杭州)有限公司 Shooting method and device and electronic equipment
CN114245005A (en) * 2021-11-29 2022-03-25 荣耀终端有限公司 Control method and device of electronic equipment and related equipment
CN114911397A (en) * 2022-05-18 2022-08-16 北京五八信息技术有限公司 Data processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105391964A (en) * 2015-11-04 2016-03-09 广东欧珀移动通信有限公司 Video data processing method and apparatus
CN105493496A (en) * 2014-12-14 2016-04-13 深圳市大疆创新科技有限公司 Video processing method, and device and image system
CN105807900A (en) * 2014-12-30 2016-07-27 丰唐物联技术(深圳)有限公司 Non-contact type gesture control method and intelligent terminal
CN107911614A (en) * 2017-12-25 2018-04-13 腾讯数码(天津)有限公司 A kind of image capturing method based on gesture, device and storage medium
CN110012251A (en) * 2018-01-04 2019-07-12 腾讯科技(深圳)有限公司 Video recording method, device and readable storage medium storing program for executing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160227285A1 (en) * 2013-09-16 2016-08-04 Thomson Licensing Browsing videos by searching multiple user comments and overlaying those into the content
US10096337B2 (en) * 2013-12-03 2018-10-09 Aniya's Production Company Device and method for capturing video
US10057483B2 (en) * 2014-02-12 2018-08-21 Lg Electronics Inc. Mobile terminal and method thereof


Also Published As

Publication number Publication date
CN111787223A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN111787223B (en) Video shooting method and device and electronic equipment
CN110572575A (en) camera shooting control method and device
CN113093968B (en) Shooting interface display method and device, electronic equipment and medium
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN111901896A (en) Information sharing method, information sharing device, electronic equipment and storage medium
CN112954214B (en) Shooting method, shooting device, electronic equipment and storage medium
CN112911147B (en) Display control method, display control device and electronic equipment
CN112099707A (en) Display method and device and electronic equipment
CN112333382B (en) Shooting method and device and electronic equipment
CN113794834B (en) Image processing method and device and electronic equipment
CN112486390A (en) Display control method and device and electronic equipment
CN111669495B (en) Photographing method, photographing device and electronic equipment
CN112672061B (en) Video shooting method and device, electronic equipment and medium
CN112083854A (en) Application program running method and device
CN111656313A (en) Screen display switching method, display device and movable platform
CN113010738B (en) Video processing method, device, electronic equipment and readable storage medium
CN114025092A (en) Shooting control display method and device, electronic equipment and medium
CN111880660B (en) Display screen control method and device, computer equipment and storage medium
CN112437231A (en) Image shooting method and device, electronic equipment and storage medium
CN112764561A (en) Electronic equipment control method and device and electronic equipment
CN112416172A (en) Electronic equipment control method and device and electronic equipment
CN112148185A (en) Image display method and device
CN113271378B (en) Image processing method and device and electronic equipment
CN113794833B (en) Shooting method and device and electronic equipment
CN113672745A (en) File storage method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant