CN114257737B - Shooting mode switching method and related equipment - Google Patents


Publication number
CN114257737B
CN114257737B (application CN202111437276.2A)
Authority
CN
China
Prior art keywords
camera
gesture
user
image
shooting
Prior art date
Legal status (assumed, not a legal conclusion)
Active
Application number
CN202111437276.2A
Other languages
Chinese (zh)
Other versions
CN114257737A (en)
Inventor
冯文瀚
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202111437276.2A
Publication of CN114257737A
Application granted
Publication of CN114257737B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)
  • Telephone Function (AREA)

Abstract

The application discloses a shooting mode switching method that can be applied to an electronic device comprising a plurality of cameras. Specifically, the electronic device obtains a user operation that opens a first camera, opens both the first camera and a second camera in response, and displays the picture acquired by the first camera on the display screen. It then acquires a gesture image of the user through the second camera, recognizes the image to obtain the user's gesture, and switches from a first shooting mode to a second shooting mode according to the gesture and the correspondence between gestures and shooting modes. The shooting mode is thus switched by recognizing a gesture image, without requiring the user to touch a switching control on the screen, which simplifies the user's operation and improves the user experience.

Description

Shooting mode switching method and related equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a shooting mode switching method and an electronic device.
Background
With the continuous development of camera technology and the reduction of camera cost, more and more electronic devices are configured with multiple cameras. For example, many handsets are configured with front and rear cameras, and some handsets are even configured with multiple rear cameras to meet different shooting needs of users.
For electronic devices with multiple cameras, it is often necessary to switch the shooting mode. Switching the shooting mode involves switching the camera and/or switching the state of a camera. For example, when a user wants to switch to the video-recording mode of a rear camera, the user typically needs to click a camera switching control to switch to the rear camera, and then click a state switching control to switch from the photographing state to the video-recording state. This switching process is relatively cumbersome.
This is especially true when shooting with a selfie stick: the user has to retract the selfie stick, click the corresponding controls to switch the shooting mode, extend the selfie stick back to its original position, and only then continue shooting in the new mode. This further increases the complexity of shooting mode switching and degrades the user experience.
Disclosure of Invention
The present application provides a shooting mode switching method that simplifies the user operations needed to switch the shooting mode, reduces the complexity of the switching, and improves the user experience. The application also provides an apparatus, a device, a computer-readable storage medium, and a computer program product corresponding to the shooting mode switching method.
In order to achieve the above purpose, the present application adopts the following technical scheme:
In a first aspect, the present application provides a shooting mode switching method applied to an electronic device. The electronic device comprises a first camera and a second camera whose shooting directions are different. Specifically, the electronic device obtains a user operation that opens the first camera, opens both the first camera and the second camera in response, and displays the picture acquired by the first camera on the display screen. When the user performs a gesture in front of the second camera, the electronic device acquires a gesture image of the user through the second camera, recognizes the image to obtain the user's gesture, and switches the shooting mode of the electronic device from a first shooting mode to a second shooting mode according to the gesture and the correspondence between gestures and shooting modes. The user can therefore switch the shooting mode simply by performing a gesture in front of the second camera, which simplifies the operation, reduces the complexity of mode switching, and improves the user experience.
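The first-aspect flow above can be sketched as a small state machine. This is an illustrative sketch only: the names `GESTURE_TO_MODE`, `Device`, `open_camera`, and `on_gesture_image` are assumptions for illustration, not identifiers from the patent.

```python
# Sketch of the first-aspect flow (hypothetical names throughout).

GESTURE_TO_MODE = {          # correspondence between gestures and shooting modes
    "flip": "second_only",   # hand shooting over to the second camera
    "slide": "both",         # additionally put the second camera into shooting
}

class Device:
    def __init__(self):
        self.mode = None

    def open_camera(self, first, second):
        # Opening the first camera also opens the second; only the
        # first camera's picture is shown on the display.
        self.preview_camera = first
        self.gesture_camera = second     # used only to capture gesture images
        self.mode = "first_only"

    def on_gesture_image(self, gesture):
        # Recognized gesture -> look up the target mode -> switch.
        self.mode = GESTURE_TO_MODE.get(gesture, self.mode)
        return self.mode
```

An unrecognized gesture leaves the current mode unchanged, so stray hand movements do not disturb shooting.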
In one possible design, switching the image capturing mode of the electronic device from the first image capturing mode to the second image capturing mode according to the correspondence between the gesture and the image capturing mode and the gesture of the user includes:
According to the corresponding relation between the gesture and the shooting mode and the gesture of the user, switching the second camera of the electronic equipment into a shooting state through a gesture control module of the electronic equipment; or,
and switching the first camera of the electronic equipment from a first shooting state to a second shooting state through a gesture control module of the electronic equipment according to the corresponding relation between the gesture and the shooting mode and the gesture of the user.
Thus, the electronic device can change the shooting state of the second camera or change the shooting state of the first camera through the gesture control module according to the gesture of the user and the corresponding relation between the gesture and the shooting mode.
In one possible design, the second image capturing mode includes:
the first camera and the second camera of the electronic equipment are in a shooting state; or,
the first camera of the electronic device is in a non-camera shooting state, and the second camera is in a camera shooting state.
Therefore, according to the user's gesture and the correspondence between gestures and shooting modes, the gesture control module can switch the electronic device into a mode in which both the first and second cameras are shooting, or a mode in which the first camera is not shooting and the second camera is shooting.
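One way to represent such a shooting mode is as the pair of per-camera states described above. The `ShootingMode` type and the state strings below are illustrative, not from the patent.

```python
# A shooting mode as a pair of per-camera states (illustrative names).
from dataclasses import dataclass

@dataclass(frozen=True)
class ShootingMode:
    first_camera: str    # "shooting" or "non-shooting"
    second_camera: str

# The two variants of the second shooting mode in this design:
BOTH_SHOOTING = ShootingMode("shooting", "shooting")
SECOND_ONLY = ShootingMode("non-shooting", "shooting")
```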
In one possible design, the second camera is turned on silently.
Thus, the electronic device can acquire the gesture image of the user under the second camera through the second camera opened in a silent mode.
In one possible design, the switching the image capturing mode of the electronic device from the first image capturing mode to the second image capturing mode according to the correspondence between the gesture and the image capturing mode and the gesture of the user includes:
when the gesture of the user is a turnover gesture, the gesture control module of the electronic equipment is used for switching the first camera from the shooting state to the non-shooting state and switching the second camera from the non-shooting state to the shooting state according to the corresponding relation between the gesture and the shooting mode.
In this way, when the user's gesture is a flip gesture, the electronic device can switch the first camera out of the shooting state and put the second camera into the shooting state.
In one possible design, the switching the image capturing mode of the electronic device from the first image capturing mode to the second image capturing mode according to the correspondence between the gesture and the image capturing mode and the gesture of the user includes:
And when the gesture of the user is a left-right sliding gesture, switching the second camera from a non-shooting state to a shooting state through a gesture control module of the electronic equipment according to the corresponding relation between the gesture and the shooting mode.
In this way, when the user's gesture is a left-right sliding gesture, the electronic device can put the second camera into the shooting state.
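The two gesture handlers in these designs can be sketched together: a flip gesture hands shooting over from the first camera to the second, while a left-right slide enables the second camera without changing the first. The function and state names are illustrative assumptions.

```python
# State transitions for the flip and slide gestures (illustrative names).

def apply_gesture(gesture, first_state, second_state):
    if gesture == "flip":
        # first: shooting -> non-shooting; second: non-shooting -> shooting
        return "non-shooting", "shooting"
    if gesture == "slide":
        # first keeps its current state; second starts shooting
        return first_state, "shooting"
    return first_state, second_state     # unrecognized gesture: no change
```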
In one possible design, the first camera is a rear camera and the second camera is a front camera.
Therefore, when the user shoots by using the rear camera, the gesture image of the user is captured by the front camera, so that the shooting mode of the electronic device is switched.
In one possible design, the electronic device is placed in a holding device.
In this way, when the electronic device is mounted in a holding device such as a selfie stick, the shooting mode can still be switched through a gesture performed in front of the second camera, using the mode switching method described above.
In a second aspect, the present application provides an imaging mode switching apparatus, including:
the acquisition module is used for acquiring the operation of opening the first camera by a user;
The device management module is used for opening the first camera and the second camera according to the operation, displaying pictures acquired by the first camera in a display screen, and the shooting directions of the first camera and the second camera are different;
the device management module is further configured to obtain, through the second camera, a gesture image of the user under the second camera;
the hardware abstraction layer is used for identifying the gesture image and obtaining the gesture of the user;
and the gesture control module is used for switching the shooting mode of the electronic equipment from the first shooting mode to the second shooting mode according to the corresponding relation between the gesture and the shooting mode and the gesture of the user.
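A minimal pipeline mirroring the apparatus modules above: an acquisition / device-management step opens both cameras, a recognizer plays the hardware-abstraction-layer role, and a gesture-control step applies the gesture-to-mode correspondence. The recognizer here is a stand-in callable, not the patent's recognition algorithm, and all names are hypothetical.

```python
# Second-aspect apparatus as a pipeline of module roles (illustrative).

class SwitchingApparatus:
    def __init__(self, recognize, gesture_to_mode):
        self.recognize = recognize            # HAL role: gesture image -> gesture
        self.gesture_to_mode = gesture_to_mode
        self.cameras_open = False
        self.mode = "first_shooting"

    def on_open_operation(self):
        # acquisition module + device management: open both cameras
        self.cameras_open = True

    def on_gesture_image(self, image):
        # gesture control module: recognized gesture selects the new mode
        gesture = self.recognize(image)
        self.mode = self.gesture_to_mode.get(gesture, self.mode)
        return self.mode
```

In a real device the recognizer would run on frames from the second camera; here a label passed through a dict stands in for it.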
In one possible design, the gesture control module may be used to:
according to the corresponding relation between the gesture and the shooting mode and the gesture of the user, switching the second camera of the electronic equipment into a shooting state through a gesture control module of the electronic equipment; or,
and switching the first camera of the electronic equipment from a first shooting state to a second shooting state through a gesture control module of the electronic equipment according to the corresponding relation between the gesture and the shooting mode and the gesture of the user.
In one possible design, the second image capturing mode includes:
the first camera and the second camera of the electronic equipment are in a shooting state; or,
the first camera of the electronic device is in a non-camera shooting state, and the second camera is in a camera shooting state.
In one possible design, the second camera is turned on silently.
In one possible design, the gesture control module may be used to:
when the gesture of the user is a turnover gesture, the gesture control module of the electronic equipment is used for switching the first camera from the shooting state to the non-shooting state and switching the second camera from the non-shooting state to the shooting state according to the corresponding relation between the gesture and the shooting mode.
In one possible design, the gesture control module may be used to:
and when the gesture of the user is a left-right sliding gesture, switching the second camera from a non-shooting state to a shooting state through a gesture control module of the electronic equipment according to the corresponding relation between the gesture and the shooting mode.
In one possible design, the first camera is a rear camera and the second camera is a front camera.
In one possible design, the electronic device is placed in a holding device.
In a third aspect, the present application provides a terminal comprising one or more processors, a memory, a touch screen, a first camera, and a second camera. The touch screen is used to receive the user operation that opens the first camera; the first camera is used to acquire pictures; the second camera is used to acquire gesture images of the user. The memory stores one or more computer programs comprising instructions which, when executed by the processor, cause the terminal to perform the shooting mode switching method described in any one of the possible designs of the first aspect above.
In a fourth aspect, the present application provides a computer storage medium comprising computer instructions which, when run on a terminal, cause the terminal to perform the shooting mode switching method described in any one of the possible designs of the first aspect above.
In a fifth aspect, the present application provides a computer program product for performing the image capturing mode switching method described in any one of the possible designs of the first aspect above when the computer program product is run on a computer.
It should be appreciated that the description of technical features, aspects, benefits or similar language in this application does not imply that all of the features and advantages may be realized with any single embodiment. Conversely, it should be understood that the description of features or advantages is intended to include, in at least one embodiment, the particular features, aspects, or advantages. Therefore, the description of technical features, technical solutions or advantageous effects in this specification does not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantageous effects described in the present embodiment may also be combined in any appropriate manner. Those of skill in the art will appreciate that an embodiment may be implemented without one or more particular features, aspects, or benefits of a particular embodiment. In other embodiments, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 2 is a schematic diagram of camera distribution of a terminal according to an embodiment of the present application;
fig. 3 is a schematic diagram of a software system architecture of a terminal according to an embodiment of the present application;
fig. 4 is a schematic diagram of the software execution of shooting mode switching by a terminal according to an embodiment of the present application;
fig. 5 is a flow chart of a method for switching imaging modes according to an embodiment of the present application;
fig. 6 is a schematic diagram of a multi-screen display mode according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another multi-frame display mode according to an embodiment of the present disclosure;
fig. 8 is a schematic view of a scene of a method for switching an imaging mode according to an embodiment of the present application;
fig. 9 is a schematic diagram of a display interface of a method for switching a camera mode according to an embodiment of the present application;
fig. 10 is a schematic diagram of a display interface of another image capturing mode switching method according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an image capturing mode switching device according to an embodiment of the present application.
Detailed Description
The terms first, second, third and the like in the description and in the claims and drawings are used for distinguishing between different objects and not for limiting the specified sequence.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
For clarity and conciseness in the description of the following embodiments, a brief description of the related art will be given first:
the shooting mode is used for indicating the state of one or more cameras. The states of the camera comprise a shooting state and a non-shooting state, and further, the shooting state comprises a shooting state and a video recording state. For example, one image capturing mode of the electronic device may be a mode in which the first camera is in a photographing state and the second camera is in a non-photographing state. For another example, another image capturing mode of the electronic device may be a mode in which the first camera is in a non-image capturing state and the second camera is in a video recording state, and so on. When the electronic device only comprises a front camera and a rear camera, the first camera and the second camera can be the front camera and the rear camera respectively. In some possible implementations, the electronic device may include a plurality of front cameras and a plurality of rear cameras, so the first camera and the second camera may be both front cameras, the first camera and the second camera may also be both rear cameras, and the first camera and the second camera may also be one of the front cameras and one of the rear cameras, respectively. The pictures captured by the first camera and the second camera at the same moment are different.
For electronic devices with multiple cameras, it is often necessary to switch the shooting mode, which covers both switching the camera and switching the state of a camera: for example, from the first camera photographing to the second camera photographing, from the first camera photographing to the first camera recording video, or from the first camera photographing to the second camera recording video. To switch, the user usually has to click a switching control on the screen, such as a camera switching control or a camera state switching control.
When the first camera is in a shooting state and the user needs to change the current shooting mode, for example switching to the second camera, switching from video recording to photographing, or starting video recording, the user has to click the switching control manually. There are, however, situations in which clicking is inconvenient, such as when the user is holding a selfie stick, has wet hands, or is wearing gloves. There are also situations in which the user is recording a distant scene: moving the terminal to reach the control records unwanted content into the video, which then has to be edited out afterwards, adding work for the user and harming the user experience.
In view of this, the present application provides a shooting mode switching method applicable to an electronic device comprising a plurality of cameras. Specifically, when the electronic device detects that a first camera among the plurality of cameras is in a shooting state, it acquires a gesture image of the user through a second camera, recognizes the image to obtain the user's gesture, and switches from a first shooting mode to a second shooting mode according to the gesture and the correspondence between gestures and shooting modes.
On the one hand, the terminal switches the shooting mode by recognizing the gesture image acquired by the second camera and applying the correspondence between gestures and shooting modes, without requiring the user to touch a switching control on the screen, which simplifies the user's operation and improves the user experience.
On the other hand, because the gesture image is acquired by the second camera while the first camera is shooting, the gesture performed in the second camera's capture area is not recorded by the first camera. No irrelevant content needs to be deleted afterwards, which further simplifies the user's operations and improves the user experience.
The electronic device may be a terminal. Terminals include, but are not limited to, smart phones, tablet computers, notebook computers, personal digital assistants (personal digital assistant, PDAs), smart home devices or smart wearable devices, etc. The intelligent wearing equipment comprises an intelligent watch, an intelligent bracelet or intelligent glasses and the like.
In the following, a terminal is taken as an example of the electronic device. The structure of the terminal may be as shown in fig. 1, which is a schematic structural diagram of a terminal provided in an embodiment of the present application.
As shown in fig. 1, the terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identity module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the terminal 100. In other embodiments of the present application, terminal 100 may include more or fewer components than illustrated, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The cache may hold instructions or data that the processor 110 has just used or uses repeatedly. If the processor 110 needs the instruction or data again, it can be fetched directly from the cache, which avoids repeated accesses, reduces the processor's waiting time, and improves system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
In the embodiment of the present application, the processor 110 may receive the gesture image of the user acquired by the second camera and recognize it to obtain the user's gesture. For example, the processor recognizes the gesture image acquired by the second camera as a fist gesture, and then switches the terminal from the first shooting mode to the second shooting mode according to that gesture and the correspondence between gestures and shooting modes. For instance, if the shooting mode corresponding to the fist gesture is displaying the pictures of the first and second cameras picture-in-picture on the display screen 194, the processor switches the current shooting mode to that picture-in-picture mode according to the recognized fist gesture and its correspondence with picture-in-picture display.
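The fist-gesture example above reduces to a lookup: the recognized label selects a display layout for the two camera previews. The labels and layout names here are illustrative assumptions, not values from the patent.

```python
# Fist gesture -> picture-in-picture layout (illustrative names).

GESTURE_TO_LAYOUT = {
    "fist": "picture-in-picture",   # second preview inset in the first
}

def layout_for(gesture, current="first_fullscreen"):
    # Unrecognized gestures leave the current layout unchanged.
    return GESTURE_TO_LAYOUT.get(gesture, current)
```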
The external memory interface 120 may be used to connect an external nonvolatile memory to realize expansion of the memory capability of the terminal. The external nonvolatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in an external nonvolatile memory.
The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM). The random access memory may be read directly from and written to by the processor 110, may be used to store executable programs (e.g., machine instructions) for an operating system or other on-the-fly programs, may also be used to store data for users and applications, and the like. The nonvolatile memory may store executable programs, store data of users and applications, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (dynamic random access memory, DRAM), synchronous dynamic random-access memory (synchronous dynamic random access memory, SDRAM), double data rate synchronous dynamic random-access memory (double data rate synchronous dynamic random access memory, DDR SDRAM, e.g., fifth generation DDR SDRAM is commonly referred to as DDR5 SDRAM), etc.
The nonvolatile memory may include a disk storage device and flash memory. The flash memory may include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc., divided according to operating principle; single-level cells (SLC), multi-level cells (MLC), triple-level cells (TLC), quad-level cells (QLC), etc., divided according to the number of levels per memory cell; and universal flash storage (UFS), embedded multimedia cards (eMMC), etc., divided according to storage specification.
In the embodiment of the present application, the internal memory 121 may store a picture file or a recorded video file or the like that is shot by the terminal in the shooting mode or the photographing mode through the camera.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the terminal. The charging management module 140 may also supply power to the terminal through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the terminal can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the terminal may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G or the like applied on the terminal. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. applied on the terminal. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, the terminal's antenna 1 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160 so that the terminal can communicate with the network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The terminal may implement audio functions, such as music playing and recording, through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In the embodiment of the present application, different cameras may correspond to different microphones, so that when a given camera is in the recording state, its corresponding microphone collects the sound. Different cameras may also correspond to the same microphone, in which case the same microphone collects the sound whichever camera is recording.
The speaker 170A, also referred to as a "loudspeaker," is used to convert audio electrical signals into sound signals. The terminal can play music or conduct hands-free calls through the speaker 170A.
The receiver 170B, also referred to as an "earpiece," is used to convert audio electrical signals into sound signals. When the terminal answers a call or plays a voice message, the voice can be heard by placing the receiver 170B close to the ear.
The microphone 170C, also referred to as a "mic" or "mouthpiece," is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak near the microphone 170C to input a sound signal. The terminal may be provided with at least one microphone 170C. In other embodiments, the terminal may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the terminal may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, and the like.
The earphone interface 170D is used to connect wired earphones. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are various types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates carrying conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the terminal determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display 194, the terminal detects the intensity of the touch operation through the pressure sensor 180A. The terminal may also calculate the touch location based on the detection signal of the pressure sensor 180A.
The gyro sensor 180B may be used to determine the motion attitude of the terminal. In some embodiments, the angular velocity of the terminal about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 180B detects the angle by which the terminal shakes, calculates the distance the lens module needs to compensate according to that angle, and lets the lens counteract the shake through reverse motion, thereby realizing anti-shake. The gyro sensor 180B may also be used for navigation and motion-sensing game scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the terminal calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The terminal can detect the opening and closing of a flip cover or holster using the magnetic sensor 180D. In some embodiments, when the terminal is a flip phone, the terminal may detect the opening and closing of the flip according to the magnetic sensor 180D, and then set features such as automatic unlocking upon flip opening according to the detected open or closed state of the holster or flip.
The acceleration sensor 180E may detect the magnitude of acceleration of the terminal in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the terminal is stationary. It can also be used to identify the posture of the terminal, and is applied to landscape/portrait switching, pedometers, and the like.
A distance sensor 180F for measuring a distance. The terminal may measure the distance by infrared or laser. In some embodiments, the terminal may range using the distance sensor 180F to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The terminal emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the terminal can determine that there is an object nearby; when insufficient reflected light is detected, the terminal can determine that there is no object nearby. Using the proximity light sensor 180G, the terminal can detect that the user is holding the terminal close to the ear and automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
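The proximity decision described above reduces to a threshold test on the reflected-light reading. The following is a minimal illustrative model; the threshold value and the screen-off policy combining it with the call state are assumptions for illustration, not values from this application:

```python
def object_nearby(reflected_light, threshold=50):
    """Model of the proximity light sensor 180G decision: enough
    reflected infrared light means an object is near the terminal."""
    return reflected_light >= threshold

def screen_should_turn_off(on_call, reflected_light):
    """Turn the screen off to save power when the user holds the
    terminal to the ear during a call."""
    return on_call and object_nearby(reflected_light)
```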
The ambient light sensor 180L is used to sense ambient light level. The terminal may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect if the terminal is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect fingerprints. The terminal can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, fingerprint photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the terminal executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the terminal reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the terminal heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In other embodiments, when the temperature is below a further threshold, the terminal boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
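The three-threshold strategy above can be sketched as a simple policy function. This is an illustrative model only; the threshold values and action names are placeholders, not figures from this application:

```python
def thermal_policy(temp_c, high=45.0, low=0.0, very_low=-10.0):
    """Illustrative temperature-handling policy: throttle the nearby
    processor when hot, heat the battery when cold, and boost the
    battery output voltage when very cold."""
    actions = []
    if temp_c > high:
        actions.append("throttle_nearby_processor")
    if temp_c < very_low:
        actions.append("boost_battery_voltage")
    elif temp_c < low:
        actions.append("heat_battery")
    return actions
```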
The touch sensor 180K is also referred to as a "touch panel." The touch sensor 180K may be disposed on the display screen 194; together they form what is commonly called a "touch screen." The touch sensor 180K is used to detect touch operations acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the terminal at a location different from that of the display 194.
In the embodiment of the present application, the touch sensor 180K may detect a touch operation performed by the user at the location of the camera application icon and transmit information about the touch operation to the processor 110, which resolves the function corresponding to the touch operation, for example, opening the camera application.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vocal-cord vibration transmitted through bone. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be provided in an earphone, combined into a bone conduction headset.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The terminal may receive key inputs and generate key signal inputs related to user settings and function control of the terminal.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, which may be used to indicate a state of charge, a change in charge, a message, a missed call, a notification, etc.
The terminal may implement the photographing function through the camera 193, the ISP, the video codec, the GPU, the display screen 194, the application processor, and the like.
The camera 193 is used to capture images. Specifically, the object generates an optical image through a lens and projects the optical image onto a photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format.
In the embodiment of the present application, the terminal 100 may include N cameras 193, where N is a positive integer greater than 1.
The cameras 193 can be distinguished by physical location. For example, of the plurality of cameras included in the camera 193, some may be disposed on the front and some on the rear of the terminal; a camera disposed on the same surface as the display screen 194 may be referred to as a front camera, and a camera disposed on the opposite surface may be referred to as a rear camera. Different cameras collect different content: the front camera collects the scene facing the display screen 194, while the rear camera collects the scene facing away from it. For example, a front camera may capture an image of the user who is using the terminal. The first camera and the second camera in the embodiment of the present application may be a front camera and a rear camera respectively, or may both be front cameras or both be rear cameras.
In some possible implementations, a camera may also be located on the side of the terminal. Different cameras of the same terminal capture different images at the same moment. The first camera and the second camera in the embodiment of the present application may also be side cameras. The plurality of cameras may be positioned as shown in FIG. 2, where the front cameras 193-1 and 193-2 are on the front of the terminal and the rear cameras 193-3, 193-4, 193-5, and 193-6 are on the rear.
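The classification of cameras by physical location can be sketched as a small lookup table, loosely modeled on how camera frameworks expose a lens-facing attribute. The camera ids below mirror the FIG. 2 layout; the table itself is an illustrative assumption:

```python
# Illustrative camera table keyed by id, mirroring FIG. 2's layout.
CAMERAS = {
    "193-1": "front", "193-2": "front",
    "193-3": "rear", "193-4": "rear",
    "193-5": "rear", "193-6": "rear",
}

def cameras_facing(facing):
    """Return the ids of all cameras on the given surface."""
    return sorted(cid for cid, f in CAMERAS.items() if f == facing)
```

A first/second camera pair can then be chosen from any two entries, for example one "front" id and one "rear" id.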
The number of pixels of different cameras 193 may be the same or different. In general, the pixel count of a rear camera may be higher than that of a front camera, and the pixel counts of multiple rear cameras may differ from one another. Because the pixel counts differ, images from different cameras appear with different sharpness on the display screen 194. In the embodiment of the present application, when the terminal switches from the first camera to the second camera, the sharpness on the display screen 194 changes according to the pixel counts of the first camera and the second camera.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the terminal selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, etc.
Video codecs are used to compress or decompress digital video. The terminal may support one or more video codecs. In this way, the terminal may play or record video in multiple encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the terminal can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The terminal implements display functions through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light-emitting diodes, QLED), or the like. In some embodiments, the terminal may include 1 or N display screens 194, N being a positive integer greater than 1.
In the embodiment of the present application, the display screen 194 may be used to display images captured by any one or more of the cameras 193. When a camera is in the imaging state, the display screen 194 displays the image captured by that camera; when a camera is in the non-imaging state, the display screen 194 does not display its image. Specifically, the display screen 194 may display the image captured by a camera in the photographing state in the preview frame corresponding to that camera, and display the image captured by a camera in the recording state in the corresponding recording preview frame. For images captured by two cameras, the display screen 194 may display a first picture corresponding to the first camera and a second picture corresponding to the second camera in a 1:1 format, or may display the first picture and the second picture in a picture-in-picture format.
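The 1:1 and picture-in-picture layouts described above can be expressed as simple rectangle arithmetic. The following sketch is illustrative only; the split direction, inset, and window fractions are assumptions, not parameters from this application:

```python
def layout_previews(screen_w, screen_h, mode):
    """Return (x, y, w, h) rectangles for two camera previews.
    '1:1' splits the screen into equal top/bottom halves;
    'pip' overlays a small second preview on a full-screen first one."""
    if mode == "1:1":
        half = screen_h // 2
        return [(0, 0, screen_w, half), (0, half, screen_w, half)]
    if mode == "pip":
        pip_w, pip_h = screen_w // 3, screen_h // 3
        # Inset the small window from the top-left corner (illustrative).
        return [(0, 0, screen_w, screen_h), (40, 40, pip_w, pip_h)]
    raise ValueError(f"unknown mode: {mode}")
```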
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into or withdrawn from the SIM card interface 195 to achieve contact with or separation from the terminal. The terminal may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards can be inserted into the same SIM card interface 195 simultaneously; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The terminal interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the terminal and cannot be separated from it.
The structure of the terminal is described above, and the software system architecture of the terminal is described below.
As shown in fig. 3, the software system architecture of the terminal is divided, from top to bottom, into an application (APP) layer, a framework (FWK) layer, a hardware abstraction layer (hardware abstraction layer, HAL), and a hardware (HW) layer. The layers communicate with each other through software interfaces. In this embodiment, the application software is the camera (CAM) application.
Taking the camera application as an example, the camera application may include a plurality of modules such as a user interface, a multi-camera framework, and camera management.
The application layer of the camera application includes modules such as the user interface (UI), the multi-camera framework, and camera management. The user interface module includes a photographing module, a video recording module, a multi-lens module, and the like. The multi-lens module further includes a multi-lens photographing module and a multi-lens video recording module. The multi-lens photographing module displays the pictures captured by at least two cameras in the photographing state, and the multi-lens video recording module displays the pictures captured by at least two cameras in the video recording state. In some possible implementations, the multi-lens photographing module may also display the captured images of any single camera.
The multi-camera framework includes a switching control module, a multi-camera encoding module, a gesture control module, and the like. The switching control module is used to control switching of the shooting modes, such as switching cameras, switching camera states, and switching the display pages corresponding to different cameras. The multi-camera encoding module performs encoding in the photographing or recording state to generate the corresponding picture or video file. The gesture control module sends the gesture image captured by the second camera to the HAL for recognition, and sends a mode switching instruction to the device management module according to the gesture recognition result returned by the HAL and the correspondence between gestures and shooting modes.
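The gesture control module's last step, mapping a recognition result to a shooting mode via the gesture-to-mode correspondence, can be modeled as a lookup. The gesture names and mode names below are illustrative placeholders, not the gesture set defined in this application:

```python
# Hypothetical gesture -> shooting-mode correspondence table.
GESTURE_TO_MODE = {
    "palm": "front_and_rear",  # e.g. switch to dual-camera display
    "fist": "rear_only",
    "thumb_up": "front_only",
}

def mode_switch_instruction(recognition_result):
    """Translate a HAL gesture-recognition result into a mode-switch
    instruction for the device management module, or None if the
    gesture is unknown or recognition failed."""
    return GESTURE_TO_MODE.get(recognition_result)
```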
Camera management includes device management, session management, and surface management modules. The device management module is configured to manage the devices corresponding to the application, for example, the plurality of cameras in this embodiment.
The framework layer provides an application programming interface (application programming interface, API) and programming framework for the applications of the application layer. Specifically, it includes camera management (camera manager), camera device (camera device), session, and output. The camera manager is the management class for camera devices; through an object of this class, the camera device information of the terminal can be queried to obtain a camera device object. The camera device provides a series of fixed parameters related to the camera device, such as basic settings and the output format.
The hardware abstraction layer is located between the operating system kernel and the hardware circuitry and is used to abstract the hardware. Specifically, the HAL hides the hardware interface details of a specific platform and provides a virtual hardware platform for the operating system, making the operating system hardware-independent and portable across various platforms. The HAL includes a three-dimensional image processing library (camera three dimensional device, Cam3Dev), stream, capture, gesture node (GestureNode), and the like. In this embodiment, the HAL receives the gesture recognition algorithm configured by the device management module for the second camera, performs gesture recognition on the collected gesture image, and returns the gesture recognition result to the gesture control module.
The hardware layer is the hardware at the lowest level of the operating system. In the present embodiment, HW may include a camera 1, a camera 2, a camera 3, and the like. The cameras 1, 2, 3 may correspond to a plurality of cameras on a terminal.
In this embodiment, as shown in fig. 4, when a user uses the rear camera to take a picture or record a video, that is, when the rear camera needs to be in the imaging state, the device management module opens both the front camera and the rear camera according to the user's instruction to open the rear camera. The rear camera is in the imaging state, while the front camera is opened but in the non-imaging state, so the picture captured by the front camera is not displayed on the display screen. The device management module further sends the instructions for opening the front and rear cameras to the HAL through the camera device module; the HAL opens the front and rear cameras and returns, through the camera device, a reply that both are open. The device management module then presents the opened, imaging state of the rear camera to the user through the display screen.
The device management module is further used to configure a gesture recognition algorithm for the front camera and send that algorithm to the HAL through the gesture control module and the camera device module. In addition, the device management module captures images through the rear camera and sends them to the HAL, which returns the images captured by the rear camera to the display screen. The terminal may configure the front camera and present the images captured by the rear camera simultaneously.
In some possible implementations, when the user performs a corresponding gesture in the capture area of the front camera, the gesture control module sends the gesture image captured by the front camera to the HAL; the HAL recognizes the gesture image through the gesture recognition algorithm and returns the gesture recognition result to the gesture control module. The gesture control module then sends a mode switching instruction to the device management module according to the gesture recognition result and the correspondence between gestures and shooting modes. When the switch is from the rear camera alone being in the imaging state to both the rear camera and the front camera being in the imaging state, the device management module sends an instruction to open the rear and front cameras to the HAL; the HAL opens both cameras and returns the result that both are open and in the imaging state to the display screen, so that the images captured by the rear and front cameras are both displayed. The camera device module only performs an information transfer function and is not involved in specific business processing.
The software architecture of the present application is briefly described above. Taking a terminal as an example, the specific steps of the imaging mode switching method in the embodiment of the present application are described below with reference to fig. 5.
S502: when the terminal detects that the first camera is in a shooting state, gesture images of a user are acquired through the second camera.
In this embodiment, the terminal includes a plurality of cameras, each of which, when turned on, may be in an imaging state or a non-imaging state. The first camera and the second camera can be any two cameras with different shooting angles. For example, the first camera may be a front camera and the second camera a rear camera; when shooting through the front camera, the terminal can collect the user's gesture images through the rear camera. The first camera may also be a rear camera and the second camera a front camera; when shooting through the rear camera, the terminal can collect the user's gesture images through the front camera. Further, the first camera may be a side camera, and the second camera a front or rear camera; correspondingly, the second camera may also be a side camera, with the first camera being a front or rear camera.
The imaging state includes a photographing state and a video recording state. In the imaging state, the content captured by the camera is displayed on the display screen, and the user can preview it there. In the non-imaging state, the camera is turned on, but the content it captures is neither displayed on the display screen nor present in the photographed photo or recorded video.
The user can trigger an operation to open the first camera. After receiving this operation, the device management module opens both the first camera and the second camera: the first camera is in the imaging state, so the user can view the content it captures on the display screen, while the second camera is silently opened in the non-imaging state, and the content it captures is not presented to the user through the display screen.
In some possible implementations, when the user opens the camera application, this is treated by default as an instruction to open the first camera, and the device management module opens the first camera and the second camera accordingly. In some possible implementations, the terminal needs to obtain the user's authorization for the camera module, that is, the user's permission to silently open the second camera while the first camera is opened.
In other possible implementations, the user may manually turn on the first camera and the second camera, and place the first camera in the imaging state and the second camera in the non-imaging state.
The first camera may be any one of a plurality of cameras of the terminal, and the second camera may be any one of a plurality of cameras of the terminal except the first camera. Typically, the terminal includes a front camera and a rear camera, and the first camera and the second camera may be the front camera and the rear camera, respectively. When the terminal comprises a front camera and a plurality of rear cameras, the first camera and the second camera can be the front camera and the rear camera respectively, and can also be different rear cameras. The captured pictures of the first camera and the second camera at the same moment are different.
When the terminal detects that the first camera is in the imaging state, it can acquire gesture images of the user through the second camera, since the second camera is already open. A gesture image of the user is an image resembling a preset gesture image: when the second camera detects that the user's gesture resembles a preset gesture image, it collects the gesture image and sends it to the HAL for gesture recognition.
The preset gesture may be a flip gesture, a slide gesture, a fist gesture, or the like. In some possible implementations, the terminal may set the number of gesture images corresponding to different gestures according to the required recognition accuracy. The gesture images corresponding to the flip gesture may include an initial gesture image and an end gesture image; the second camera collects both and sends them to the HAL for recognition through a gesture recognition algorithm. Alternatively, the gesture images corresponding to the flip gesture may include an initial gesture image, an intermediate gesture image, and an end gesture image, all of which the second camera collects and sends to the HAL for recognition. The gesture image corresponding to the fist gesture may consist of a single fist gesture image, which the second camera collects and sends to the HAL for recognition through the gesture recognition algorithm.
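The accuracy-dependent frame counts described above amount to a configurable table; a minimal sketch follows, in which the gesture names and counts are illustrative assumptions drawn from the examples in the text.

```python
# Illustrative mapping from gesture type to the frames the second camera must
# collect before handing them to the HAL; names and counts are assumptions.
GESTURE_FRAMES = {
    "flip": ["start", "end"],                   # lower-accuracy configuration
    "flip_precise": ["start", "middle", "end"], # higher-accuracy configuration
    "fist": ["fist"],                           # a single frame suffices
}

def frames_needed(gesture):
    # Number of frames the second camera collects for this gesture.
    return len(GESTURE_FRAMES[gesture])

print(frames_needed("flip"), frames_needed("fist"))  # 2 1
```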
S504: the terminal recognizes the gesture image and obtains the user's gesture.
When the user sends an instruction to open the first camera, the device management module sends an instruction to open the first camera and the second camera to the HAL; the HAL opens both cameras and returns a reply that they have been opened to the device management module. The device management module then presents the opened first camera, in the imaging state, to the user through the display screen. Next, the device management module configures a gesture recognition algorithm for the second camera and sends the gesture recognition algorithm to the HAL through the gesture control module. In this way, the terminal can recognize the gesture images of the user acquired by the second camera through the gesture recognition algorithm in the HAL and obtain the user's gesture.
The gesture recognition algorithm recognizes the user's gesture from the gesture image. Its principles may include edge contour extraction, multi-feature combination of centroid and fingers, finger joint tracking, and the like. Specifically, the terminal may preprocess the data in the gesture image to denoise it and enhance the information, and then obtain the user's gesture through gesture segmentation, gesture analysis, and gesture recognition.
Gesture segmentation separates the gesture from the background. Common gesture segmentation methods include detection and segmentation based on motion information, based on visual features, and based on multi-model fusion. Gesture analysis includes feature detection and parameter estimation: feature detection extracts image feature parameters from the segmented gesture, where the image features mainly include visual features (colors, textures, and contours) and semantic features (an understanding of the image content). Gesture recognition methods include those based on traditional machine learning and those based on neural networks.
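The pipeline just described — preprocess, segment, analyse, classify — can be sketched end to end. Every function body here is a placeholder assumption operating on strings; a real system would process image arrays with computer-vision or neural-network methods.

```python
# Minimal sketch of the recognition pipeline in the order described above:
# preprocess -> segment -> analyse -> classify. All bodies are placeholder
# assumptions; the patent specifies only the stages, not an implementation.

def preprocess(image):
    # Denoise and enhance (stand-in: strip a noise marker from the string).
    return image.replace("+noise", "")

def segment(image):
    # Separate the gesture from the background (stand-in: split on a marker).
    return image.split("@")[0]

def analyse(gesture_region):
    # Extract visual features such as colour, texture, and contour.
    return {"contour": gesture_region}

def classify(features):
    # Map features to a gesture label (stand-in for the ML-based recogniser).
    known = {"fist": "fist", "open_palm": "flip"}
    return known.get(features["contour"], "unknown")

def recognize(image):
    return classify(analyse(segment(preprocess(image))))

print(recognize("fist@background+noise"))  # fist
```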
S506: the terminal switches from the first image capturing mode to the second image capturing mode according to the correspondence between gestures and image capturing modes and the user's gesture.
The first image capturing mode is the terminal's current mode; specifically, the first camera is in the imaging state, which may be the photographing state or the video recording state. The second image capturing mode differs from the current mode: it may use a different camera, a different imaging state, or both.
The terminal may store the correspondence between gestures and image capturing modes in advance. This correspondence includes the correspondence between gestures and second image capturing modes, with different gestures corresponding to different second modes, and the terminal switches to the second mode according to the recognized gesture and this correspondence. For example, the second mode corresponding to a left-right slide gesture may be one in which the front camera and the rear camera capture together in a 1:1 layout on the screen, and the second mode corresponding to a fist gesture may be one in which the two cameras capture together in a picture-in-picture layout. When the first mode is recording video through the front camera: if the recognized gesture is a left-right slide, the switched second mode has the front and rear cameras jointly recording in a 1:1 layout; if the recognized gesture is a fist, the switched second mode has them jointly recording in a picture-in-picture layout.

Likewise, when the first mode is recording video through the rear camera: if the recognized gesture is a left-right slide, the switched second mode has the front and rear cameras jointly recording in a 1:1 layout; if the recognized gesture is a fist, the switched second mode has them jointly recording in a picture-in-picture layout.
Further, the correspondence between gestures and image capturing modes may include a correspondence between the first (current) image capturing mode, the gesture, and the second image capturing mode. When the first modes differ, the same gesture may correspond to different second modes. For example, when the first mode is recording through the rear camera, the up-down slide gesture switches to photographing through the rear camera, and the flip gesture switches to recording through the front camera. When the first mode is photographing through the front camera, the up-down slide gesture switches to recording through the front camera, and the flip gesture switches to recording through the rear camera.
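A stored correspondence keyed on both the current mode and the gesture, as described above, is naturally a lookup table. The sketch below encodes the paragraph's examples; the mode names are illustrative assumptions.

```python
# Sketch of the stored correspondence described above: the same gesture maps
# to different target modes depending on the current mode. All mode and
# gesture names are illustrative assumptions based on the text's examples.
SWITCH_TABLE = {
    ("rear_video", "swipe_up_down"): "rear_photo",
    ("rear_video", "flip"): "front_video",
    ("front_photo", "swipe_up_down"): "front_video",
    ("front_photo", "flip"): "rear_video",
}

def next_mode(current_mode, gesture):
    # Unrecognised (mode, gesture) pairs leave the mode unchanged.
    return SWITCH_TABLE.get((current_mode, gesture), current_mode)

print(next_mode("rear_video", "flip"))   # front_video
print(next_mode("front_photo", "flip"))  # rear_video
```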
In this embodiment, since in the current mode the first camera is in the imaging state and the second camera is in the non-imaging state, the second mode may place the second camera in the imaging state. Further, the second mode may have both cameras in the imaging state, or the first camera in the non-imaging state and the second camera in the imaging state. In addition, since the first camera is in the imaging state in the current mode, and the imaging state includes a first state and a second state — for example, the video recording state and the photographing state respectively — when the first camera is recording video, the switched second mode may place it in the photographing state; likewise, when the first camera is photographing, the switched second mode may place it in the video recording state.
Further, the second camera in this embodiment may also be in the imaging state. When both the first camera and the second camera are in the imaging state, the display screen can display the picture captured by the first camera and the picture captured by the second camera in a 1:1 layout, as shown in fig. 6.
The first image capturing mode in this embodiment may be: the first camera in the imaging state — for example, the video recording state or the photographing state — while the second camera is in the imaging state or the non-imaging state, that is, in the video recording state, the photographing state, or the non-imaging state. For example, when the first camera is in the video recording state, the second camera may be in the video recording state, the photographing state, or the non-imaging state; likewise, when the first camera is in the photographing state, the second camera may be in the video recording state, the photographing state, or the non-imaging state.
Further, the mode switching may also be a switch between different display layouts on the display screen. When a first picture captured by the first camera and a second picture captured by the second camera are displayed simultaneously, they can be shown in a 1:1 layout or in a picture-in-picture layout. The 1:1 layout includes two variants: the first picture on the left with the second on the right, and the first picture on the right with the second on the left. Likewise, the picture-in-picture layout includes two variants: the first picture outside with the second inside, and the first picture inside with the second outside. Fig. 7 shows the variant with the first picture outside.
For example, when the first camera and the second camera are the front camera and the rear camera respectively, the correspondence between gestures and image capturing modes may be as follows. When the first mode has both cameras in the imaging state with a 1:1 layout: an upward or leftward slide gesture switches to the front camera in the imaging state and the rear camera in the non-imaging state; a downward or rightward slide gesture switches to the front camera in the non-imaging state and the rear camera in the imaging state; a flip gesture keeps both cameras in the imaging state and swaps the positions of the first and second pictures — if the first picture was on the left of the display screen and the second on the right, after switching the first picture is on the right and the second on the left; a fist gesture switches to both cameras in the imaging state with a picture-in-picture layout.

When the first mode has the front camera in the imaging state and the rear camera in the non-imaging state: an up-down or left-right slide gesture switches to both cameras in the imaging state with a 1:1 layout; a flip gesture switches to the front camera in the non-imaging state and the rear camera in the imaging state; a fist gesture switches to both cameras in the imaging state with a picture-in-picture layout. When the first mode has the front camera in the non-imaging state and the rear camera in the imaging state: an up-down or left-right slide gesture switches to both cameras in the imaging state with a 1:1 layout; a flip gesture switches to the front camera in the imaging state and the rear camera in the non-imaging state; a fist gesture switches to both cameras in the imaging state with a picture-in-picture layout.

When the first mode has both cameras in the imaging state with a picture-in-picture layout: an up-down or left-right slide gesture switches to both cameras in the imaging state with a 1:1 layout; a flip gesture keeps both cameras in the imaging state and swaps the positions of the first and second pictures — if the first picture was outside and the second inside, after switching the first picture is inside and the second outside; a fist gesture switches to the front camera in the imaging state and the rear camera in the non-imaging state.
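The paragraphs above enumerate a small state machine over (front camera state, rear camera state, layout) triples. The following sketch encodes a subset of those transitions; all state names are illustrative assumptions, and the swipe gestures are collapsed into simplified labels.

```python
# Sketch of the gesture-driven state machine enumerated above. States are
# (front, rear, layout) triples; names and the gesture labels are assumptions.
TRANSITIONS = {
    ("on", "on", "1:1"): {
        "swipe_up": ("on", "off", None),    # front only
        "swipe_down": ("off", "on", None),  # rear only
        "fist": ("on", "on", "pip"),
    },
    ("on", "off", None): {
        "swipe": ("on", "on", "1:1"),
        "flip": ("off", "on", None),
        "fist": ("on", "on", "pip"),
    },
    ("off", "on", None): {
        "swipe": ("on", "on", "1:1"),
        "flip": ("on", "off", None),
        "fist": ("on", "on", "pip"),
    },
    ("on", "on", "pip"): {
        "swipe": ("on", "on", "1:1"),
        "fist": ("on", "off", None),
    },
}

def apply_gesture(state, gesture):
    # Gestures with no entry for the current state leave it unchanged.
    return TRANSITIONS[state].get(gesture, state)

print(apply_gesture(("on", "off", None), "flip"))  # ('off', 'on', None)
```

The flip-gesture position swap within a layout (left/right or inside/outside) is omitted here for brevity; it would add a fourth component to the state triple.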
Further, when the imaging state is the video recording state, recording can be started or ended through gesture control. For example, when the first mode has the rear camera in record mode with recording not yet started, the second mode corresponding to a grab gesture may be the rear camera in record mode with recording started. When the first mode has the rear camera in record mode with recording underway, the second mode corresponding to the grab gesture may be the rear camera in record mode with recording ended.
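Start and stop sharing one gesture, as described above, is simply a toggle on the recording flag. A minimal sketch, with the class and method names assumed:

```python
# Sketch of gesture-controlled recording start/stop described above: the grab
# gesture toggles recording while the rear camera stays in record mode.
class Recorder:
    def __init__(self):
        self.recording = False

    def on_grab_gesture(self):
        # The same gesture starts recording when idle and ends it when active.
        self.recording = not self.recording
        return self.recording

r = Recorder()
print(r.on_grab_gesture())  # True  (recording started)
print(r.on_grab_gesture())  # False (recording ended)
```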
As described above, the embodiment of the present application provides an image capturing mode switching method. Specifically, when the terminal detects that a first camera among its plurality of cameras is in the imaging state, it acquires a gesture image of the user through a second camera, recognizes the gesture image to obtain the user's gesture, and then switches from the first image capturing mode to the second image capturing mode according to the gesture and the correspondence between gestures and modes. The terminal thus switches modes by recognizing the gesture image acquired by the second camera, without requiring the user to trigger a switching control on the screen. Moreover, because the gesture image is acquired through the second camera while the first camera is in the imaging state, the switching gesture is not recorded by the first camera, so no irrelevant content needing subsequent deletion is produced, which further simplifies the user's operations and improves the user experience.
The above describes the steps of image capturing mode switching in the embodiment of the present application; the following describes application scenarios of the embodiment.
Fig. 8 shows a user shooting a short video outdoors with the rear camera of the terminal, the terminal being held in a clamping device such as a hand-held selfie stick; the rear camera is in the imaging state.
When the user needs to switch to a mode in which both the front and rear cameras are in the imaging state with a 1:1 layout, and the user is holding the selfie stick, manually triggering the switching control on the display screen requires retracting the selfie stick, triggering the control, extending the selfie stick again, and other operations. In addition, while the selfie stick is being retracted, the camera's position may change, so the recorded video may include content the user does not want, and deleting that content requires still more operations.
With the image capturing mode switching method above, the terminal can switch its capture and display mode directly. When the terminal records video through the rear camera, the content on the display screen may be as shown in fig. 9 (a). When the user needs to switch modes, the user performs the corresponding switch gesture in the capture area of the front camera, as shown in fig. 9 (b) — for example, a left-to-right slide. The front camera acquires the gesture image and sends it to the HAL for recognition; the device management module configured a gesture recognition algorithm for the HAL when the user opened the rear camera, so the HAL can recognize the gesture image according to that algorithm. The HAL returns the recognition result to the gesture control module, which then switches the terminal's image capturing mode to the front-and-rear dual-capture mode according to the correspondence between gestures and modes, as shown in fig. 9 (c). The user therefore only needs to perform the corresponding switch gesture within the capture range of the front camera for the terminal to switch to the corresponding mode.
Further, while the terminal is shooting through the rear camera, the user can also switch to shooting through the front camera by gesture. The content on the display screen may be as shown in fig. 10 (a). When the user needs to switch modes, the user performs the corresponding switch gesture in the capture area of the front camera, as shown in fig. 10 (b) — for example, a fist. The front camera acquires the gesture image and sends it to the HAL for recognition; the device management module configured a gesture recognition algorithm for the HAL when the user opened the rear camera, so the HAL can recognize the gesture image according to that algorithm. The HAL returns the recognition result to the gesture control module, which then switches the terminal's image capturing mode to shooting through the front camera according to the correspondence between gestures and modes, as shown in fig. 10 (c). The user therefore only needs to perform the corresponding switch gesture within the capture range of the front camera for the terminal to switch to the corresponding mode.
The embodiment of the application also provides a terminal, as shown in fig. 11, the terminal may include: a touch screen 1110, one or more processors 1120, memory 1130, one or more computer programs 1140, a first camera 1160, and a second camera 1170, which may be connected by one or more communication buses 1150. Wherein the one or more computer programs 1140 are stored in the memory 1130 and configured to be executed by the one or more processors 1120, the one or more computer programs 1140 include instructions that can be used to perform the various steps performed by the terminal in the corresponding embodiment of fig. 5.
The embodiment of the present application may divide the terminal into functional modules according to the above method example: each function may be assigned its own module, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or as software functional modules. It should be noted that the division of modules in the embodiment of the present application is schematic and merely a logical function division; other division manners are possible in actual implementation.
Fig. 12 shows a schematic diagram of a possible composition of the terminal involved in the above embodiments when each function is assigned its own functional module; this terminal performs the steps in any of the method embodiments of the present application. As shown in fig. 12, the terminal may include: an obtaining module 1202, configured to obtain an operation of the user opening the first camera; a device management module 1204, configured to open the first camera and the second camera according to the operation and display the picture acquired by the first camera on the display screen, where the shooting directions of the first camera and the second camera are different; the device management module 1204 is further configured to acquire, through the second camera, a gesture image of the user in front of the second camera; a hardware abstraction layer 1206, configured to recognize the gesture image and obtain the user's gesture; and a gesture control module 1208, configured to switch the image capturing mode of the electronic device from the first image capturing mode to the second image capturing mode according to the correspondence between gestures and image capturing modes and the user's gesture.
In one possible design, gesture control module 1208 may be configured to:
according to the corresponding relation between the gesture and the shooting mode and the gesture of the user, switching the second camera of the electronic equipment into a shooting state through a gesture control module of the electronic equipment; or,
and switching the first camera of the electronic equipment from a first shooting state to a second shooting state through a gesture control module of the electronic equipment according to the corresponding relation between the gesture and the shooting mode and the gesture of the user.
In one possible design, the second image capturing mode includes:
the first camera and the second camera of the electronic equipment are in a shooting state; or,
the first camera of the electronic device is in a non-camera shooting state, and the second camera is in a camera shooting state.
In one possible design, the second camera is turned on silently.
In one possible design, gesture control module 1208 may be configured to:
when the gesture of the user is a turnover gesture, the gesture control module of the electronic equipment is used for switching the first camera from the shooting state to the non-shooting state and switching the second camera from the non-shooting state to the shooting state according to the corresponding relation between the gesture and the shooting mode.
In one possible design, gesture control module 1208 may be configured to:
and when the gesture of the user is a left-right sliding gesture, switching the second camera from a non-shooting state to a shooting state through a gesture control module of the electronic equipment according to the corresponding relation between the gesture and the shooting mode.
In one possible design, the first camera is a rear camera and the second camera is a front camera.
In one possible design, the electronic device is placed in a holding device.
It should be noted that all relevant contents of the steps in the above method embodiment apply to the corresponding modules of the terminal, so that the terminal executes the corresponding method; details are not repeated here.
The present embodiment also provides a computer readable storage medium comprising instructions which, when executed on a terminal, cause the terminal to perform the relevant method steps of fig. 5 to implement the method of the above embodiment.
The present embodiment also provides a computer program product comprising instructions which, when run on a terminal, cause the terminal to perform the relevant method steps as in fig. 5 to implement the method in the above embodiments.
In the several embodiments provided herein, it should be understood that the disclosed terminal and method may be implemented in other manners. For example, the division of modules or units described is merely a logical function division; in actual implementation there may be other division manners — for instance, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units: they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present embodiment may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of this embodiment — in essence, or the part contributing to the prior art, or all or part of the technical solution — may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the respective embodiments. The aforementioned storage medium includes: flash memory, removable hard disk, read-only memory, random access memory, magnetic disk, optical disc, and the like.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A shooting mode switching method, applied to an electronic device, wherein the electronic device comprises a first camera and a second camera whose shooting directions differ, the method comprising:
acquiring an operation by which a user opens the first camera;
opening the first camera and the second camera according to the operation, displaying the picture acquired by the first camera on a display screen, and not displaying the picture acquired by the second camera;
collecting a first gesture image of the user through the second camera;
obtaining a first gesture by recognizing the first gesture image;
in response to the first gesture, acquiring pictures with the first camera and the second camera, and displaying the picture acquired by the first camera and the picture acquired by the second camera on the display screen;
collecting a second gesture image of the user through the second camera;
obtaining a second gesture by recognizing the second gesture image;
in response to the second gesture, exchanging the position of the picture acquired by the first camera and the position of the picture acquired by the second camera;
collecting a third gesture image of the user through the second camera;
obtaining a third gesture by recognizing the third gesture image;
in response to the third gesture, starting video recording;
collecting a fourth gesture image of the user through the second camera;
obtaining a fourth gesture by recognizing the fourth gesture image;
in response to the fourth gesture, ending video recording;
collecting a fifth gesture image of the user through the second camera;
obtaining a fifth gesture by recognizing the fifth gesture image;
and in response to the fifth gesture, acquiring a picture with the first camera, displaying the picture acquired by the first camera on the display screen, and not displaying the picture acquired by the second camera.
2. The method according to claim 1, further comprising:
switching the first camera of the electronic device from a first shooting state to a second shooting state through a gesture control module of the electronic device, according to a correspondence between gestures and shooting modes and the gesture of the user.
3. The method of claim 1, wherein the first gesture is a side-to-side swipe gesture.
4. The method of claim 1, wherein the second gesture is a swipe right gesture or a swipe down gesture.
5. The method of claim 1, wherein the first camera is a rear camera and the second camera is a front camera.
6. The method according to any one of claims 1 to 5, wherein the electronic device is placed in a holding device.
7. A shooting mode switching apparatus, configured to perform the shooting mode switching method according to any one of claims 1 to 6.
8. A computer storage medium comprising computer instructions which, when run on a terminal, cause the terminal to perform the shooting mode switching method according to any one of claims 1 to 6.
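For illustration only, the stepwise flow of claim 1 can be sketched as a small state machine that tracks what the display shows and whether recording is active. The gesture labels ("enter_dual", "swap", "start_rec", "stop_rec", "exit_dual") and all class and enum names are assumptions introduced here — the patent identifies the gestures only as first through fifth (claims 3 and 4 describe swipe directions), and a real implementation would produce these events from gesture recognition on the second camera's frames.

```python
from enum import Enum, auto


class Display(Enum):
    REAR_ONLY = auto()      # only the first (rear) camera's picture is shown
    SPLIT = auto()          # both pictures shown, original positions
    SPLIT_SWAPPED = auto()  # both pictures shown, positions exchanged


class CameraController:
    """Toy state machine mirroring the gesture sequence of claim 1.

    Gesture names are illustrative placeholders, not terms from the patent.
    """

    def __init__(self) -> None:
        self.display = Display.REAR_ONLY
        self.recording = False

    def on_gesture(self, gesture: str) -> None:
        if gesture == "enter_dual":
            # First gesture: display both cameras' pictures.
            self.display = Display.SPLIT
        elif gesture == "swap":
            # Second gesture: exchange the two pictures' positions.
            if self.display == Display.SPLIT:
                self.display = Display.SPLIT_SWAPPED
            elif self.display == Display.SPLIT_SWAPPED:
                self.display = Display.SPLIT
        elif gesture == "start_rec":
            # Third gesture: start video recording.
            self.recording = True
        elif gesture == "stop_rec":
            # Fourth gesture: end video recording.
            self.recording = False
        elif gesture == "exit_dual":
            # Fifth gesture: return to the first camera's picture only.
            self.display = Display.REAR_ONLY
```

Feeding the controller the five gestures in the claimed order walks it from rear-only display, through dual display with swapped positions and a recording session, and back to rear-only display.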
CN202111437276.2A 2021-11-29 2021-11-29 Shooting mode switching method and related equipment Active CN114257737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111437276.2A CN114257737B (en) 2021-11-29 2021-11-29 Shooting mode switching method and related equipment


Publications (2)

Publication Number Publication Date
CN114257737A CN114257737A (en) 2022-03-29
CN114257737B true CN114257737B (en) 2023-05-02

Family

ID=80793506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111437276.2A Active CN114257737B (en) 2021-11-29 2021-11-29 Shooting mode switching method and related equipment

Country Status (1)

Country Link
CN (1) CN114257737B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115643485B * 2021-11-25 2023-10-24 Honor Device Co., Ltd. Shooting method and electronic equipment
CN118555481A * 2023-02-27 2024-08-27 Honor Device Co., Ltd. Mode switching method and terminal equipment

Citations (1)

Publication number Priority date Publication date Assignee Title
CN113596316A (en) * 2020-04-30 2021-11-02 华为技术有限公司 Photographing method, graphical user interface and electronic equipment

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN106713767B (en) * 2014-05-30 2019-07-23 深圳市秋然科技发展有限公司 A kind of smart phone, tablet computer or net book
CN106231175A (en) * 2016-07-09 2016-12-14 东莞市华睿电子科技有限公司 A kind of method that terminal gesture is taken pictures
CN106303260A (en) * 2016-10-18 2017-01-04 北京小米移动软件有限公司 Photographic head changing method, device and terminal unit
CN106899765A (en) * 2017-03-10 2017-06-27 上海传英信息技术有限公司 Mobile terminal dual camera changing method and device
CN111123959B (en) * 2019-11-18 2023-05-30 亿航智能设备(广州)有限公司 Unmanned aerial vehicle control method based on gesture recognition and unmanned aerial vehicle adopting method
CN112532833A (en) * 2020-11-24 2021-03-19 重庆长安汽车股份有限公司 Intelligent shooting and recording system


Also Published As

Publication number Publication date
CN114257737A (en) 2022-03-29

Similar Documents

Publication Publication Date Title
CN110445978B (en) Shooting method and equipment
CN110072070B (en) Multi-channel video recording method, equipment and medium
US11785329B2 (en) Camera switching method for terminal, and terminal
CN110035141B (en) Shooting method and equipment
WO2020073959A1 (en) Image capturing method, and electronic device
CN113132620A (en) Image shooting method and related device
CN115484380B (en) Shooting method, graphical user interface and electronic equipment
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
CN114257737B (en) Shooting mode switching method and related equipment
CN115567630B (en) Electronic equipment management method, electronic equipment and readable storage medium
CN115967851A (en) Quick photographing method, electronic device and computer readable storage medium
CN114880251B (en) Memory cell access method, memory cell access device and terminal equipment
CN113747058A (en) Image content shielding method and device based on multiple cameras
CN114490174A (en) File system detection method, electronic device and computer readable storage medium
CN113518189B (en) Shooting method, shooting system, electronic equipment and storage medium
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
CN114302063B (en) Shooting method and equipment
CN114079725B (en) Video anti-shake method, terminal device, and computer-readable storage medium
CN116782023A (en) Shooting method and electronic equipment
CN113867520A (en) Device control method, electronic device, and computer-readable storage medium
CN117750186A (en) Camera function control method, electronic device and storage medium
CN118524286A (en) Focusing method and focusing device
CN118695083A (en) Shooting method, graphical user interface and electronic equipment
CN116582743A (en) Shooting method, electronic equipment and medium
CN114691066A (en) Application display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant