CN113709354A - Shooting method and electronic equipment - Google Patents

Shooting method and electronic equipment

Info

Publication number
CN113709354A
Authority
CN
China
Prior art keywords
image
camera
photo
user
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010432123.8A
Other languages
Chinese (zh)
Inventor
崔瀚涛 (Cui Hantao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010432123.8A
Publication of CN113709354A
Legal status: Pending

Classifications

    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof:
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N23/80 Camera processing pipelines; Components thereof

Abstract

A shooting method and an electronic device relate to the field of electronic technologies and enable convenient, efficient tracking shooting of a moving target in a high-magnification zoom scenario, improving the user's shooting experience. Specifically: after the electronic device enters a telephoto shooting mode, a first camera collects both a main preview image and an auxiliary preview image; when a moving shooting target is detected, or the user instructs the device to enter a motion shooting mode, the electronic device continues to collect the main preview image with the first camera but collects the auxiliary preview image with a second camera, where the viewing range of the second camera is larger than that of the first camera.

Description

Shooting method and electronic equipment
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a shooting method and an electronic device.
Background
At present, a mobile phone is generally configured with a plurality of cameras to cover the user's various shooting scenarios. The plurality of cameras may span multiple focal segments, for example a short-focus (wide-angle) camera, a middle-focus camera, and a long-focus (telephoto) camera, where different cameras correspond to different viewing ranges and zoom magnifications. When shooting, the mobile phone can switch between cameras of different focal lengths (optical zoom), sometimes combined with digital zoom performed in software, to satisfy shooting scenarios at high magnification.
However, in a high-magnification shooting scenario, the viewing range of the mobile phone covers only a small part of the scene. When the shooting target is moving, it is likely to move out of the viewing range, and the mobile phone loses the target. Because the target keeps changing position, the user must continuously move the mobile phone to search for it, which makes tracking shooting of a moving target difficult.
Disclosure of Invention
The shooting method provided by the embodiments of this application enables convenient and efficient tracking shooting of a moving target in a high-magnification zoom scenario, improving the user's shooting experience.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
In a first aspect, a method for shooting a moving target is provided, including: in response to an operation of starting the camera application, the electronic device displays a first view frame, where the first view frame is used to display a picture acquired by a first camera of the electronic device; in response to detecting that the user increases the zoom magnification of the camera application, the first view frame is used to display a first picture acquired by a second camera of the electronic device, where the viewing range of the second camera is smaller than that of the first camera; when the zoom magnification of the camera application is greater than or equal to a preset magnification, the electronic device further displays a second view frame, which is used to display a second picture acquired by the second camera; the viewing range corresponding to the second picture is larger than that corresponding to the first picture, and the second view frame includes a first mark frame used to mark, in the picture displayed by the second view frame, the viewing range of the first view frame; in response to detecting a moving target, or receiving an operation in which the user instructs to enter the motion shooting mode, the first view frame is used to display the picture acquired by the second camera, the second view frame is used to display the picture acquired by the first camera, and the second view frame further includes a second mark frame used to mark the moving target in the picture displayed by the second view frame.
That is, the first view frame displays the main preview image, and the second view frame displays the auxiliary preview image. Thus, the embodiments of this application provide a method for shooting a moving target in a telephoto shooting scenario: the auxiliary preview image is acquired by switching to a camera with a larger field angle, so that its viewing range is larger, and the viewing range of the main preview image is marked inside it. In this way, the user can track the moving target with the help of the wider auxiliary preview image, can clearly determine in which direction and how far the mobile phone should be moved, and the tracking efficiency for the moving target improves.
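As an illustration only, this view-frame switching behavior can be sketched as follows; all names (PreviewController, CameraId, update) are hypothetical, since the patent describes behavior rather than an API, and the magnifications ("5×" telephoto, "10×" preset threshold) follow the example values used later in this document.

```java
// A minimal sketch, for illustration only, of the view-frame switching
// behavior described above. All names are hypothetical.
public class PreviewController {
    enum CameraId { WIDE, MAIN, TELE }

    static final float PRESET_MAGNIFICATION = 10.0f; // telephoto-mode threshold

    CameraId firstViewFrameSource;  // main preview image
    CameraId secondViewFrameSource; // auxiliary preview image; null when hidden

    void update(float zoomMagnification, boolean inMotionMode) {
        if (zoomMagnification < PRESET_MAGNIFICATION) {
            // Below the preset magnification only the first view frame is shown.
            firstViewFrameSource =
                    zoomMagnification < 5.0f ? CameraId.MAIN : CameraId.TELE;
            secondViewFrameSource = null;
            return;
        }
        // Telephoto shooting mode: the main preview comes from the tele camera.
        firstViewFrameSource = CameraId.TELE;
        // Auxiliary preview: also the tele camera by default, but a camera with
        // a larger viewing range once a moving target is detected or the user
        // enters the motion shooting mode.
        secondViewFrameSource = inMotionMode ? CameraId.MAIN : CameraId.TELE;
    }
}
```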
In a possible implementation, the method further includes: when it is detected that the moving target has moved out of the picture displayed by the second view frame, or an operation in which the user instructs to exit the motion shooting mode is detected, the first view frame is used to display a third picture acquired by the second camera, and the second view frame is used to display a fourth picture acquired by the second camera, where the viewing range corresponding to the fourth picture is larger than that corresponding to the third picture; or, when it is detected that the zoom magnification of the camera application is smaller than the preset magnification, the first view frame is used to display the picture acquired by the first camera, and the electronic device no longer displays the second view frame.
Thus, several ways for the electronic device to exit the motion shooting mode are provided.
In a possible implementation, the method further includes: when it is detected that the moving target has moved out of the picture displayed by the second view frame, the first view frame is used to display the picture acquired by the second camera, and the second view frame is used to display the picture acquired by a third camera, where the viewing range of the third camera is larger than that of the first camera.

After the moving target moves out of the picture displayed by the second view frame, the electronic device can switch to the third camera, which has a larger field angle, to acquire the auxiliary preview image, so that the moving target can be observed in an auxiliary preview image with a larger viewing range and tracking of the moving target can be achieved.
In a possible implementation, the method further includes: the electronic device displays prompt information or plays a voice prompt according to the relative position of the first mark frame and the second mark frame in the second view frame, so as to guide the user in moving the electronic device.
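A hedged sketch of this guidance step follows: a textual movement hint is derived from the relative positions of the first mark frame (the main preview's viewing range) and the second mark frame (the moving target) inside the second view frame. The quarter-width dead zone and the hint wording are illustrative assumptions, not taken from the patent.

```java
// Sketch: derive a movement hint from the two mark frames' relative position.
// Thresholds and strings are illustrative assumptions.
final class GuidanceSketch {
    static String movementHint(int viewCx, int viewCy, int viewW, int viewH,
                               int targetCx, int targetCy) {
        int dx = targetCx - viewCx; // target offset relative to the view range
        int dy = targetCy - viewCy;
        StringBuilder hint = new StringBuilder("Move the phone");
        boolean needsMove = false;
        if (Math.abs(dx) > viewW / 4) {
            hint.append(dx > 0 ? " right" : " left");
            needsMove = true;
        }
        if (Math.abs(dy) > viewH / 4) {
            hint.append(dy > 0 ? " down" : " up");
            needsMove = true;
        }
        return needsMove ? hint.toString() : "Target is centered";
    }
}
```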
In a possible implementation, the method further includes: in response to detecting that the user performs a photographing operation, the electronic device generates a first photo from a first image collected by the second camera; when the center point of the moving target in the first image lies outside a first area of the first image, the center point of the moving target in the first photo lies within the first area of the first photo.
That is, in the motion shooting mode, when the moving target in the captured photo is not located in the first area of the photo, for example the central area of the image, the electronic device can automatically re-crop the photo from the full-size image collected by the camera, so that the moving target in the re-cropped photo lies within the first area. In other words, the captured photo is automatically given a secondary composition, improving the user's shooting experience.
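A minimal sketch of this re-cropping, assuming a full-size image of fullW × fullH pixels: compute a crop window of the originally framed size whose center is moved onto the target's center point, clamped to the image bounds. The actual pixel cropping and scaling would be done by the ISP (or, on Android, by something like Bitmap.createBitmap); only the window arithmetic is shown, and the names are hypothetical.

```java
// Sketch of the automatic secondary composition: re-center the crop window
// on the moving target while keeping the original output size unchanged.
final class RecompositionSketch {
    /** Returns {left, top, width, height} of the re-centered crop window. */
    static int[] recenterCropWindow(int fullW, int fullH, int cropW, int cropH,
                                    int targetCx, int targetCy) {
        int left = Math.max(0, Math.min(targetCx - cropW / 2, fullW - cropW));
        int top  = Math.max(0, Math.min(targetCy - cropH / 2, fullH - cropH));
        return new int[] { left, top, cropW, cropH };
    }
}
```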
In a possible implementation, the method further includes: in response to detecting that the user performs a photographing operation, the electronic device generates a first photo from a first image collected by the second camera; when the center point of the moving target in the first image lies outside the first area of the first image, the electronic device also saves the first image.
In another embodiment, in the motion shooting mode, when the moving object in the shot picture is not located in the first area of the picture, for example, the central area of the picture, the electronic device may save the full-size image captured by the camera, so as to facilitate subsequent re-cropping of the full-size image according to the instruction of the user.
In a possible implementation, the method further includes: in response to detecting that the user performs a photographing operation, the electronic device generates a first photo from a first image collected by the second camera; when the center point of the moving target in the first image lies outside the first area of the first image but within a second area, the center point of the moving target in the first photo lies within the first area of the first photo; when the center point of the moving target in the first image lies outside the second area of the first image, the electronic device also saves the first image.
For example, when it is determined that the center point of the moving target is outside the first region of the current image (e.g., the central 1/2 region of the image) but within the second region (e.g., the central 3/4 region of the image), the composition of the captured photo can be fine-tuned automatically. If the center point of the moving target is judged to be outside the second region (e.g., the central 3/4 region of the image), the composition is considered unreasonable and the user needs to perform the secondary composition manually. That is, after shooting, the mobile phone obtains two images: one is obtained by cropping and zooming the image collected by the telephoto camera according to the zoom magnification used at shooting time, and this is the image the user sees when browsing photos; the other is the full-size image collected by the telephoto camera, which can be called up when the user manually performs the secondary composition.
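The region test in this example can be written out as follows. The "first area" is taken as the central 1/2 of the image and the "second area" as the central 3/4; these fractions follow the example in the text and are not fixed values prescribed by the patent.

```java
// Sketch of the central-region test and the resulting decision flow.
final class RegionCheckSketch {
    static boolean inCentralRegion(int cx, int cy, int imgW, int imgH,
                                   double fraction) {
        double halfW = imgW * fraction / 2.0;
        double halfH = imgH * fraction / 2.0;
        return Math.abs(cx - imgW / 2.0) <= halfW
                && Math.abs(cy - imgH / 2.0) <= halfH;
    }

    // Decision flow after a shot, per the text:
    //   in central 1/2  -> composition is fine, keep the photo as-is
    //   in central 3/4  -> fine-tune automatically (re-center the crop)
    //   outside 3/4     -> also save the full-size image so the user can
    //                      perform the secondary composition manually
}
```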
In a possible implementation, the method further includes: when a user browses a first photo through a gallery application, a browsing interface of the first photo comprises a first control; in response to detecting the operation of the user on the first control, the electronic equipment displays a modification interface of the first photo; the modification interface comprises a first image and a third mark frame, and the third mark frame is used for marking a framing range of the first photo on the first image; in response to detecting a dragging operation of the third mark frame by the user, the electronic equipment moves the position of the third mark frame on the first image; and in response to the detection that the user operates the determination control, the electronic equipment generates a second photo according to the image in the third mark frame, wherein the zoom magnification of the second photo is the same as that of the first photo, and the moving object in the second photo is in the first area of the second photo.
Thus, a method for the user to manually perform a secondary composition is provided, solving the problem of unreasonable composition in photos of moving targets and improving the user's shooting experience.
In a possible implementation, the method further includes: in response to detecting that the user performs a continuous (burst) shooting operation, the electronic device generates corresponding third photos from a plurality of second images collected by the second camera; when the center point of the moving target in any one of the second images lies outside the first area of that image, the center point of the moving target in the corresponding third photo lies within the first area of that photo.

In a possible implementation, the method further includes: in response to detecting that the user performs a continuous shooting operation, the electronic device generates corresponding third photos from a plurality of second images collected by the second camera; when the center point of the moving target in any one of the second images lies outside the first area of that image, the electronic device saves that image.

In a possible implementation, the method further includes: in response to detecting that the user performs a continuous shooting operation, the electronic device generates corresponding third photos from a plurality of second images collected by the second camera; when the center point of the moving target in any one of the second images lies outside the first area of that image but within the second area, the center point of the moving target in the corresponding third photo lies within the first area of that photo; when the center point of the moving target in any one of the second images lies outside the second area of that image, the electronic device saves that image.
In a possible implementation, the method further includes: when the user browses the third photo through the gallery application, a selection interface of the third photo comprises a first control; in response to detecting the operation of the user on the first control, the electronic equipment displays a modification interface of the selected third photo; the modification interface comprises a second image corresponding to the selected third photo and a fourth mark frame, and the fourth mark frame is used for marking a view range of the selected third photo on the second image corresponding to the third photo; in response to detecting that the user drags the fourth mark frame, the electronic equipment moves the position of the fourth mark frame on the second image corresponding to the selected third photo; and in response to the detection that the user operates the determination control, the electronic equipment generates a fourth photo according to the image in the fourth mark frame, wherein the zoom magnification of the fourth photo is the same as that of the third photo, and the moving object in the fourth photo is in the first area of the fourth photo.
In one possible implementation, the first camera is a middle-focus camera, the second camera is a telephoto camera, and the third camera is a wide-angle or ultra-wide-angle camera.
In a second aspect, an electronic device is provided, which includes: a processor, a memory and a touch screen, the memory and the touch screen being coupled to the processor, the memory being configured to store computer program code comprising computer instructions which, when read by the processor from the memory, cause the electronic device to perform the method as set forth in the above aspects and any one of its possible implementations.
In a third aspect, an apparatus is provided, where the apparatus is included in an electronic device, and the apparatus has a function of implementing a behavior of the electronic device in any one of the methods in the foregoing aspects and possible implementation manners. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes at least one module or unit corresponding to the above functions. For example, a detection module or unit, a display module or unit, and a processing module or unit, etc.
A fourth aspect provides a computer-readable storage medium comprising computer instructions which, when executed on a terminal, cause the terminal to perform the method as described in the above aspect and any one of its possible implementations.
A fifth aspect provides a graphical user interface on an electronic device with a display screen, a camera, a memory, and one or more processors to execute one or more computer programs stored in the memory, the graphical user interface comprising graphical user interfaces displayed when the electronic device performs the methods of the preceding aspects and any one of their possible implementations.
A sixth aspect provides a computer program product for causing a computer to perform the method as described in the above aspects and any one of the possible implementations when the computer program product runs on the computer.
A seventh aspect provides a chip system including a processor; when the processor executes instructions, it performs the method described in the above aspect and any of its possible implementations.
Drawings
Fig. 1 is a first schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a user interface of some electronic devices provided by embodiments of the present application;
FIG. 4 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
FIG. 5 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
FIG. 6 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
FIG. 7 is a schematic view of a user interface of further electronic devices according to an embodiment of the present application;
fig. 8 is a first schematic diagram illustrating an image processing method in a shooting method according to an embodiment of the present disclosure;
fig. 9 is a second schematic diagram of an image processing method in a shooting method according to an embodiment of the present application;
FIG. 10A is a schematic view of a user interface of yet another electronic device according to an embodiment of the present application;
FIG. 10B is a schematic view of a user interface of yet another electronic device according to an embodiment of the present application;
FIG. 10C is a schematic view of a user interface of another electronic device according to an embodiment of the present application;
fig. 11 is a schematic flowchart of a shooting method according to an embodiment of the present disclosure;
fig. 12 is a schematic flowchart of another shooting method provided in the embodiment of the present application;
fig. 13 is a schematic structural diagram of a chip system according to an embodiment of the present disclosure.
Detailed Description
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The shooting method provided by the embodiments of this application can be applied to an electronic device provided with a camera. The electronic device may be, for example, a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted device, a smart screen, a smart car, a smart speaker, or a robot; the specific form of the electronic device is not particularly limited in this application.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the interface connection relationship between the modules shown in fig. 1 is only illustrative and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1. In other embodiments, the camera 193 is a pop-up (liftable) camera.
In some embodiments of this application, the electronic device 100 includes at least one telephoto camera and one middle-focus or short-focus (wide-angle) camera. In the telephoto shooting mode, the mobile phone generates both the main preview image and the auxiliary preview image from the telephoto camera. When a moving target is detected, or the user instructs the device to enter the motion shooting mode, the mobile phone continues to generate the main preview image from the telephoto camera while generating the auxiliary preview image with the middle-focus camera or the short-focus (wide-angle) camera. It should be noted that the telephoto camera and the middle-focus (or short-focus (wide-angle)) camera here should be located on the same side of the electronic device 100, e.g., on the rear of the screen of the electronic device 100. Specific embodiments will be described in detail below.
In other embodiments of this application, in the telephoto shooting mode, when it is detected that the user performs a shooting operation, the mobile phone crops and zooms the image acquired by the telephoto camera according to the zoom magnification in use, obtaining the captured photo. In the motion shooting mode, after the captured photo is obtained, the mobile phone can in some scenarios also retain the full-size image from the telephoto camera, for fine-tuning the composition of the captured photo or re-composing it according to the user's instruction, thereby improving the effect of tracking shooting a moving target.
In some examples, the ISP includes a front-end processing unit and a back-end processing unit. After the electronic device 100 enters the motion shooting mode, if it detects that the user performs a shooting operation, the front-end processing unit of the ISP selects a corresponding image (i.e., frame selection) from the plurality of images cached by the camera 193 according to the user's shooting operation, and performs Raw-domain processing and the like on the selected image. The back-end processing unit of the ISP, on the one hand, performs cropping, digital zoom, YUV-domain processing, encoding and the like on the image frame processed by the front-end unit according to the zoom magnification used by the electronic device 100, to obtain the captured photo. On the other hand, the back-end processing unit also keeps a copy of the image processed by the front-end unit; note that this copy is an uncropped image. Subsequently, when the back-end processing unit is idle, or after the captured photo has been obtained, the back-end unit performs YUV-domain processing and the like on the front-end-processed image to obtain a backup full-size image, which can be used later when the user re-composes the captured photo. A specific embodiment will be described in detail below.
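For illustration, this two-path flow can be sketched as follows; the unit names, method signatures, and stub bodies are hypothetical, standing in for the processing stages the text names (frame selection, Raw-domain processing, crop, digital zoom, YUV-domain processing, encoding).

```java
// An illustrative sketch of the two-path ISP processing described above.
class IspPipelineSketch {
    static class CaptureResult {
        final byte[] photo;          // cropped + zoomed + encoded image
        final byte[] fullSizeBackup; // uncropped image kept for re-composition
        CaptureResult(byte[] p, byte[] b) { photo = p; fullSizeBackup = b; }
    }

    byte[] frontEnd(byte[][] cachedFrames, int chosenIndex) {
        // Frame selection from the camera's buffer, then Raw-domain processing.
        return rawDomainProcess(cachedFrames[chosenIndex]);
    }

    CaptureResult backEnd(byte[] frontEndOutput, float zoomMagnification) {
        // Path 1: crop and digitally zoom per the zoom magnification, then
        // YUV-domain processing and encoding -> the captured photo.
        byte[] photo = encode(yuvProcess(cropAndZoom(frontEndOutput, zoomMagnification)));
        // Path 2: keep the uncropped frame; when idle, YUV-process it into a
        // full-size backup for later secondary composition.
        byte[] backup = yuvProcess(frontEndOutput);
        return new CaptureResult(photo, backup);
    }

    // Placeholder stages standing in for the processing steps named in the text.
    private byte[] rawDomainProcess(byte[] f) { return f; }
    private byte[] cropAndZoom(byte[] f, float zoom) { return f; }
    private byte[] yuvProcess(byte[] f) { return f; }
    private byte[] encode(byte[] f) { return f; }
}
```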
The digital signal processor is used to process digital signals, and can process digital image signals as well as other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record video in multiple encoding formats, such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
In some embodiments of the present application, the NPU may perform image recognition on the image captured from the camera 193, identify an object included in the image, and/or a location of the identified object, to determine whether a moving object is included in the image. If the moving target is detected, the mobile phone automatically enters a moving shooting mode or prompts a user to enter the moving shooting mode.
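A sketch of this detection-triggered mode switch follows. ObjectDetector stands in for the NPU-based recognition described above; the patent does not name a concrete model or API, so the interface here is an assumption.

```java
// Sketch: enter the motion shooting mode on NPU detection or user request.
interface ObjectDetector {
    /** True if a moving target is recognized from the latest preview frame(s). */
    boolean hasMovingTarget(byte[] previewFrame);
}

final class MotionModeTrigger {
    private final ObjectDetector detector;

    MotionModeTrigger(ObjectDetector detector) {
        this.detector = detector;
    }

    /** Called per preview frame while the phone is in telephoto shooting mode. */
    boolean shouldEnterMotionMode(byte[] previewFrame, boolean userRequested) {
        // Enter the motion shooting mode either automatically on detection or
        // when the user explicitly chooses it (or is prompted and accepts).
        return userRequested || detector.hasMovingTarget(previewFrame);
    }
}
```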
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or conduct a hands-free call through the speaker 170A. The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 receives a call or voice information, voice can be heard by placing the receiver 170B close to the ear. The microphone 170C, also called a "mic", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. The headphone interface 170D is used to connect wired headphones. The headphone interface 170D may be the USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100. The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration cues as well as touch vibration feedback. For example, touch operations on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. The indicator 192 may be an indicator light, and may be used to indicate a charging state, a change in charge, a message, a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention uses an Android system with a layered architecture as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software configuration of the electronic apparatus 100 according to the embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the electronic device 100, for example, management of call states (including connected, hung up, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar, and can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, etc. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system's top status bar, such as notifications from applications running in the background, or notifications that appear on the screen as a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light flashes.
The Android runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.

The core library comprises two parts: one part is the functional interfaces that the java language needs to call, and the other part is the core library of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, etc. The media library can support multiple audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The technical solutions in the following embodiments can be implemented in the electronic device 100 having the above hardware architecture and software architecture.
The electronic device 100 is an example of a mobile phone, and details of technical solutions provided by embodiments of the present application are described with reference to the drawings.
1. Starting the camera
For example, the user may instruct the mobile phone to start the camera application by touching a specific control on the mobile phone screen, pressing a specific physical key or key combination, inputting voice, or making an air gesture. After receiving the user's instruction to open the camera, the mobile phone starts the camera and displays the shooting interface.
For example: as shown in (1) in fig. 3, the user may instruct the mobile phone to open the camera application by clicking a "camera" application icon 301 on the desktop of the mobile phone, and the mobile phone displays a shooting interface as shown in (2) in fig. 3. For another example: when the mobile phone is in the screen locking state, the user may also instruct the mobile phone to start the camera application by a gesture of sliding rightward on the screen of the mobile phone, and the mobile phone may also display a shooting interface as shown in (2) in fig. 3. Or, when the mobile phone is in the screen lock state, the user may instruct the mobile phone to open the camera application by clicking the shortcut icon of the "camera" application on the screen lock interface, and the mobile phone may also display the shooting interface shown in (2) in fig. 3.
For another example, when the mobile phone runs other applications, the user may also enable the mobile phone to open the camera application to take a picture by clicking the corresponding control. Such as: when the user is using an instant messaging application (such as a WeChat application), the user can also instruct the mobile phone to start the camera application to take pictures and take videos by selecting a control of the camera function.
As shown in (2) of fig. 3, the camera shooting interface generally includes a first view frame 302, shooting controls, and other function controls (e.g., "Aperture", "Portrait", "Photo", and "Video"). The first view frame 302 can be used to preview the image (or picture) captured by the camera, and the user can decide, based on the image (or picture) in the first view frame 302, when to instruct the mobile phone to perform a shooting operation. The user may instruct the mobile phone to perform the shooting operation by, for example, clicking the shooting control or pressing a volume key. In some embodiments, the shooting interface may also include a zoom magnification indication 303. In general, the default zoom magnification of the mobile phone is the basic magnification "1×".
The zoom magnification can be understood as the ratio of the focal length currently in use to a reference focal length. The reference focal length is usually the focal length of the mobile phone's main camera.
For example, take a mobile phone integrating three cameras, a short-focus (wide-angle) camera, a middle-focus camera, and a telephoto camera, as an example. With the relative position of the mobile phone and the photographed object unchanged, the short-focus (wide-angle) camera has the smallest focal length and the largest field angle, and objects in its image appear smallest. The middle-focus camera has a larger focal length and a smaller field angle than the short-focus (wide-angle) camera, and objects in its image appear larger than in the short-focus image. The telephoto camera has the largest focal length, the smallest field angle, and objects in its image appear largest. The field angle indicates the maximum angular range that the camera can capture when the mobile phone shoots an image. It is understood that "field angle" may be replaced by terms such as "viewing range", "field-of-view area", "imaging range", or "imaging field of view".
In general, the user most often shoots with the middle-focus camera, so the middle-focus camera is typically set as the main camera. The focal length of the main camera is taken as the reference focal length, with zoom magnification "1×". In some embodiments, digital zoom may be applied to the image captured by the main camera: the ISP or another processor in the mobile phone enlarges each pixel area of the "1×" image captured by the main camera and correspondingly narrows the framing range, so that the processed image appears equivalent to an image captured at another zoom magnification (for example "2×"). That is, the image captured by the main camera can correspond to one zoom section, for example "1×" to "5×". Similarly, the multiple of the telephoto camera's focal length relative to the main camera's focal length can be taken as the zoom magnification of the telephoto camera. For example, the focal length of the telephoto camera may be 5 times that of the main camera, i.e., the zoom magnification of the telephoto camera is "5×". Digital zoom can also be applied to images captured by the telephoto camera, so they can correspond to another zoom section, for example "5×" to "50×". Likewise, the multiple of the short-focus (wide-angle) camera's focal length relative to the main camera's focal length can be taken as its zoom magnification. For example, the focal length of the short-focus camera may be 0.5 times that of the main camera, i.e., the zoom magnification of the short-focus (wide-angle) camera is "0.5×". Digital zoom can also be applied to images captured by the short-focus (wide-angle) camera, so they can correspond to a further zoom section, for example "0.5×" to "1×".
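A sketch of this zoom-section mapping follows: "0.5×" to "1×" on the short-focus (wide-angle) camera, "1×" to "5×" on the main camera, and "5×" to "50×" on the telephoto camera, with the remainder of each section applied as digital zoom. The boundaries follow the example values in the text and would differ between devices.

```java
// Sketch: map a requested zoom magnification to a physical camera plus the
// remaining digital zoom factor. Values follow the example in the text.
final class ZoomMapping {
    enum PhysicalCamera { WIDE, MAIN, TELE }

    static PhysicalCamera cameraForZoom(float zoom) {
        if (zoom < 1.0f) return PhysicalCamera.WIDE;
        if (zoom < 5.0f) return PhysicalCamera.MAIN;
        return PhysicalCamera.TELE;
    }

    static float digitalZoomFactor(float zoom) {
        PhysicalCamera cam = cameraForZoom(zoom);
        // Optical magnification of each camera relative to the main camera.
        float optical = (cam == PhysicalCamera.WIDE) ? 0.5f
                      : (cam == PhysicalCamera.TELE) ? 5.0f : 1.0f;
        return zoom / optical; // e.g. "10x" -> tele camera plus 2x digital zoom
    }
}
```

For instance, under these example values cameraForZoom(10f) returns TELE and digitalZoomFactor(10f) returns 2.0f, i.e., the "10×" picture is the telephoto camera's "5×" image digitally zoomed by 2×.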
Of course, the mobile phone may use any one of the cameras as the main camera, and use the focal length of the main camera as the reference focal length, which is not specifically limited in this application.
2. Entering the telephoto shooting mode
In some embodiments, the user may manually adjust the zoom magnification used when shooting with the mobile phone. For example, as shown in (2) in fig. 3, the user may adjust the zoom magnification used by the mobile phone by operating the zoom magnification indication 303 in the shooting interface. For example, when the zoom magnification currently used by the mobile phone is "1×", the user may change it to "5×" by clicking the zoom magnification indication 303 one or more times, whereupon the mobile phone displays the shooting interface shown in (1) in fig. 4. In the shooting interface shown in (1) in fig. 4, the viewing range of the image previewed in the first view frame 401 is clearly smaller than that in the first view frame 302 in (2) in fig. 3, but the photographed subject (for example, a bird) previewed in the first view frame 401 appears larger than in the first view frame 302. In some examples, the shooting interface shown in fig. 4 may continue to display a zoom magnification indication 402, which now reads "5×" so that the user knows the current zoom magnification. For another example, as shown in (3) in fig. 3, the user may decrease the zoom magnification used by the mobile phone with a two-finger (or three-finger) pinch gesture in the shooting interface, or increase it with a two-finger (or three-finger) outward slide gesture (opposite to the pinch). For another example, as shown in (4) in fig. 3, the user may change the zoom magnification used by the mobile phone by dragging the zoom scale 304 in the shooting interface. For another example, the user may change the zoom magnification of the mobile phone by switching the currently used camera in the shooting interface or in a shooting setting interface; for instance, if the user switches to the telephoto camera, the mobile phone automatically increases the zoom magnification. For another example, the user may change the zoom magnification of the mobile phone by selecting an option for a telephoto shooting scene or a long-distance shooting scene in the shooting interface or in a shooting setting interface.
In other embodiments, the mobile phone may automatically identify the specific scene of the image captured by the camera and automatically adjust the zoom magnification according to the identified scene. For example, if the mobile phone recognizes that the captured image is a scene with a large field of view, such as the sea, a mountain, or a forest, the zoom magnification may be automatically decreased. For another example, if the mobile phone recognizes that the captured image shows a distant object, for example, a distant bird or a player on a sports field, the zoom magnification may be automatically increased, which is not limited in this application.
In this application, when the zoom magnification used by the mobile phone is greater than or equal to a preset magnification (for example, "10×"), the mobile phone is considered to have entered the telephoto shooting mode. In the telephoto shooting mode, in addition to the preview image (or preview picture) of the first view frame, a preview image (or preview picture) of a second view frame may be displayed in the shooting interface of the mobile phone. The field angle of the preview image in the second view frame is larger than that of the preview image in the first view frame, and the area that is the same as (or approximately the same as) the viewing range of the first view frame is marked in the preview image of the second view frame. In this way, the user can compose the preview image of the first view frame, or track and shoot a moving target, with the help of the preview image in the second view frame. For ease of distinction, the preview image (or preview picture) displayed in the first view frame is referred to as the main preview image (or main preview picture), and the preview image (or preview picture) displayed in the second view frame is referred to as the auxiliary preview image (or auxiliary preview picture).
For example, as shown in (1) in fig. 4, the shooting interface of the mobile phone shows that the current zoom magnification is "5×", and only the first view frame 401 is displayed at this time. When an operation of the user increasing the zoom magnification is detected (for example, increasing it to "10×"), the mobile phone displays the shooting interface shown in (2) in fig. 4. In the shooting interface shown in (2) in fig. 4, a second view frame 405 is displayed in addition to the first view frame 403. It can be seen that the main preview image displayed in the first view frame 403 is a part of the auxiliary preview image displayed in the second view frame 405; for example, the main preview image is the image of the central area (or approximately the central area) of the auxiliary preview image. The image in the second view frame 405 may include a first mark frame 406 to indicate to the user the position of the image of the first view frame 403 within the image of the second view frame 405, making it easier for the user to use the second view frame 405 to frame and compose the image of the first view frame 403 and to shoot a moving subject. Optionally, the second view frame 405 may further include other function controls such as a close control 407 and an enlarge control 408 to support other operations of the user on the second view frame 405, which is not limited in this embodiment of this application.
In one technical solution, after the mobile phone enters the telephoto shooting mode, it captures images with the telephoto camera. On the one hand, the mobile phone crops the full-size image captured by the telephoto camera to obtain the auxiliary preview image. On the other hand, the mobile phone crops and digitally zooms the full-size image captured by the telephoto camera according to the currently used zoom magnification to obtain the main preview image. In general, the zoom magnification of the auxiliary preview image is smaller than that of the main preview image, that is, the viewing range of the auxiliary preview image is larger than that of the main preview image.
It should be noted that the full-size image captured by a camera of the mobile phone is larger than the image the mobile phone displays from it. That is, even if the zoom magnification of the displayed image is the same as the zoom magnification corresponding to the full-size image captured by the camera, i.e., the full-size image is not zoomed and enlarged, the displayed image is still smaller than the full-size image. For example, if the zoom magnification corresponding to a full-size image captured by the telephoto camera of the mobile phone is "8×", the full-size image still needs to be cropped when the mobile phone displays a preview image with a zoom magnification of "8×".
In some examples, the mobile phone may also directly use the full-size image captured by the telephoto camera as the auxiliary preview image. In this way, even if the zoom magnification of the auxiliary preview image is the same as that of the main preview image, the viewing range of the auxiliary preview image is larger than that of the main preview image; therefore, the auxiliary preview image can still be used to help the user compose the main preview image or track and shoot a moving target.
In other examples, the mobile phone may crop and digitally zoom the full-size image captured by the telephoto camera at a certain zoom magnification to obtain the auxiliary preview image. Of course, the zoom magnification corresponding to the auxiliary preview image is smaller than that of the main preview image, that is, the viewing range of the auxiliary preview image is larger than that of the main preview image.
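As an illustration of how one full-size telephoto frame can yield both previews, the following sketch (reusing digital_zoom from the sketch above; the parameter names are assumptions) derives the main preview at the zoom magnification in use and the auxiliary preview either as the full-size frame or at a milder magnification:

def build_previews(full_frame, used_zoom, tele_base_zoom, aux_zoom=None):
    # Main preview: crop + digital zoom relative to the telephoto camera's
    # base magnification (e.g., used_zoom 10x on a 5x camera -> 2x digital zoom).
    main = digital_zoom(full_frame, used_zoom / tele_base_zoom)
    if aux_zoom is None:
        aux = full_frame  # full-size frame used directly as the auxiliary preview
    else:
        aux = digital_zoom(full_frame, aux_zoom / tele_base_zoom)
    return main, aux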
3. Entering the motion shooting mode
Further, when the user tracks and shoots a moving target, such as a flying bird, a moving car, or a running child, the position of the moving target changes constantly, and the moving target may quickly move out of the main preview image of the first view frame and then even out of the auxiliary preview image of the second view frame. If the user moves the mobile phone to track the moving target only by feel, the efficiency is low and the result is poor. Therefore, an embodiment of this application provides a method for shooting a moving target in a telephoto shooting scene: the auxiliary preview image is captured by switching to a camera with a larger field angle, so that the viewing range of the auxiliary preview image becomes larger, and the viewing range displayed in the main preview image is marked within the auxiliary preview image. In this way, the user can track the moving target with the help of an auxiliary preview image with a larger viewing range, clearly determine the direction and distance in which to move the mobile phone, and improve the efficiency of tracking the moving target.
In some embodiments, the mobile phone may use an image recognition algorithm (e.g., a target detection algorithm, a target recognition algorithm, or a moving target detection algorithm) to recognize the image currently captured by the camera, and if a moving target is recognized, the mobile phone automatically starts the motion shooting mode. Alternatively, after recognizing a moving target, the mobile phone may ask the user whether to start the motion shooting mode. If the user chooses to start the motion shooting mode, or if no instruction from the user is received within a preset time period, the mobile phone starts the motion shooting mode; otherwise, it does not. In the motion shooting mode, the mobile phone can switch among cameras with different field angles for capturing the auxiliary preview image according to the movement of the moving target, thereby helping the user track the shooting target quickly and accurately.
For example, after the mobile phone enters the telephoto shooting mode, it runs a target detection algorithm on M images captured by the camera to determine the photographed objects contained in the M images and their positions, where M is an integer greater than or equal to 2. If the position of a photographed object changes across the M images, the mobile phone may consider that a moving photographed object is detected and automatically start the motion shooting mode. Alternatively, the mobile phone asks the user whether to start the motion shooting mode; for example, the interface shown in (1) in fig. 5 includes prompt information 501. If the moving photographed object detected by the mobile phone is the user's shooting target, the user instructs the mobile phone to start the motion shooting mode; if it is not, the user instructs the mobile phone not to start the motion shooting mode.
Optionally, whether the mobile phone itself is moving may further be determined with the help of a motion sensor (e.g., an acceleration sensor or a gyroscope) configured in the mobile phone, so as to rule out the case where the positions of the photographed objects in the M images change because the user's hand moved the mobile phone, thereby improving the accuracy of automatic moving-target recognition.
The target detection algorithm may be any one or a combination of Fast R-CNN, R-FCN, YOLO, SSD, and RetinaNet, or another neural network or non-neural-network algorithm, which is not limited in the embodiments of this application.
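For illustration, one possible (assumed) instantiation of "the position of a photographed object changes across the M images" is an intersection-over-union (IoU) test on the detector's boxes, assuming the detections have already been associated across frames (e.g., by a track id):

def iou(a, b):
    # Boxes are (x, y, w, h); returns intersection-over-union in [0, 1].
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def moving_targets(detections_per_frame, iou_threshold=0.7):
    # detections_per_frame: one dict per frame mapping a track id to its box.
    # A target whose box in the last frame barely overlaps its box in the
    # first frame is treated as moving. A motion sensor check (acceleration
    # sensor / gyroscope) would additionally rule out handshake, as noted above.
    first, last = detections_per_frame[0], detections_per_frame[-1]
    return [tid for tid, box in first.items()
            if tid in last and iou(box, last[tid]) < iou_threshold]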
For another example, after the mobile phone enters the telephoto shooting mode, it runs a target recognition algorithm on P images captured by the camera to determine the photographed objects contained in the P images, where P is an integer greater than or equal to 1. It is then determined whether the photographed objects in the P images include an object capable of moving. If they include objects capable of moving, such as cars, birds, airplanes, balloons, or soccer balls, the mobile phone may consider that a moving photographed object is detected and automatically start the motion shooting mode, or ask the user whether to start the motion shooting mode.
Alternatively, scene recognition may be performed on the captured P images. If the shooting scene of the P images is, for example, a sports field or a road, it may be considered that the user's shooting target is most likely a moving photographed object, and the mobile phone may automatically enter the motion shooting mode or ask the user whether to enter it.
Alternatively, after recognizing the photographed objects in the P images, the mobile phone may further determine whether the user has previously started the motion shooting mode for any of them. This is because the objects a user cares about tend to be concentrated on a few recurring subjects, such as a playing child, a running dog, a flying bird, or a swimming fish. That is, in some examples, the mobile phone may save information (e.g., image features) of objects for which the motion shooting mode has been started. When such an object is detected again later, the mobile phone may enter the motion shooting mode by default or ask the user whether to enter it.
For another example, after the mobile phone enters the telephoto shooting mode, it runs a moving object detection algorithm on M images captured by the camera to determine whether the M images contain a moving photographed object. If it is determined that a moving photographed object is contained, the mobile phone may automatically enter the motion shooting mode or ask the user whether to enter it.
The moving object detection algorithm may be any one or a combination of the background difference method, the inter-frame difference method, a Gaussian mixture model, the optical flow method, block matching, and optical flow estimation, or another neural network or non-neural-network algorithm, which is not limited in this embodiment of this application.
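As one concrete example, the inter-frame difference method mentioned above can be sketched with OpenCV as follows (the threshold and minimum area are assumed values, not part of this application):

import cv2

def moving_regions(prev_frame, curr_frame, min_area=500):
    # Inter-frame difference: threshold the absolute difference of two
    # consecutive frames and return bounding boxes of large changed regions.
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(cv2.absdiff(g1, g2), 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)  # close small gaps in the mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]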
After the mobile phone enters the motion shooting mode, it uses the image captured by a camera with a larger field angle than the current camera as the auxiliary preview image of the second view frame. For example, when the mobile phone has just entered the telephoto shooting mode, the main preview image and the auxiliary preview image are both obtained by processing the image captured by the telephoto camera. After the mobile phone enters the motion shooting mode, the main preview image is still obtained from the image captured by the telephoto camera, but the auxiliary preview image is now obtained from the image captured by the main camera. It can be understood that the field angle of the image captured by the main camera is larger than that of the image captured by the telephoto camera. As shown in (1) in fig. 5, if a moving target is detected after the mobile phone enters the telephoto shooting mode, the mobile phone enters the motion shooting mode automatically or according to the user's instruction and displays the interface 503 shown in (2) in fig. 5. Comparing the auxiliary preview image 502 before entering the motion shooting mode with the auxiliary preview image 504 after entering it, the viewing range of the auxiliary preview image 504 is clearly larger than that of the auxiliary preview image 502. Apart from the motion shooting mode, all other settings of the mobile phone are the same in the two cases.
Optionally, the recognized moving target may also be marked in the auxiliary preview image 504, for example with a second mark frame 505, so that the user can observe the position of the moving target in the auxiliary preview image 504. By observing the auxiliary preview image, the user can move the mobile phone so that the second mark frame 505 moves into the first mark frame 506, which makes the moving target appear in the main preview image and realizes tracking of the moving target. The mobile phone may display the first mark frame 506 and the second mark frame 505 in different ways to distinguish them. For example, the second mark frame 505 may use a different color from the first mark frame 506 (e.g., one is a yellow rectangle and the other a red rectangle), or a different line type (e.g., one solid and the other dashed, or different line thicknesses), or the like.
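For illustration, if the two cameras are assumed to share approximately the same optical axis (as the "same or approximately the same area" wording above suggests), the first mark frame can be computed from the ratio of the two zoom magnifications; a minimal sketch, with all names illustrative:

def first_mark_frame(aux_w, aux_h, aux_zoom, main_zoom):
    # Returns (x, y, w, h) of the centered rectangle in the auxiliary preview
    # that corresponds to the main preview's viewing range, e.g. a "1x"
    # auxiliary preview against a "10x" main preview gives a 1/10-size frame.
    scale = aux_zoom / main_zoom
    w, h = int(aux_w * scale), int(aux_h * scale)
    return (aux_w - w) // 2, (aux_h - h) // 2, w, h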
In other embodiments, the user may instruct the mobile phone to start the motion shooting mode by operating a specific control, pressing a specific key or key combination, performing a specific air gesture, or inputting a voice command.
For example, in a telephoto shooting scene, a control for the motion shooting mode may be displayed in the shooting interface of the mobile phone, and in response to the user operating this control, the mobile phone enters the motion shooting mode. As shown in (2) in fig. 4, when the zoom magnification currently used by the mobile phone is greater than or equal to the preset magnification (e.g., "10×"), a start-motion-shooting-mode control 409 is displayed in the shooting interface of the mobile phone, with which the user indicates whether the mobile phone should enter the motion shooting mode. Of course, the mobile phone may also provide a control for starting the motion shooting mode in other interfaces, for example among the function items in the system settings of the mobile phone or in the setting interface of the "camera" application. After the mobile phone starts the motion shooting mode according to the user's instruction, it captures the auxiliary preview image with the main camera and continues to capture the main preview image with the telephoto camera. In some examples, the user may also manually turn on the motion shooting mode directly after starting the camera; the mobile phone may then automatically switch the zoom magnification to a value greater than or equal to the preset magnification, for example "10×" or "20×", and call the main camera to capture the auxiliary preview image and the telephoto camera to capture the main preview image.
Optionally, after the mobile phone starts the motion shooting mode, the user may be prompted to select the moving target to be shot. In one example, the mobile phone may perform image recognition on the captured auxiliary preview image, recognize the objects contained in it, and mark them or present them in options or a list so that the user can select the moving target to be shot. For example, as shown in (1) in fig. 6, the mobile phone displays, in the shooting interface, an option 601 prompting the user to select the moving target to be shot. After the user selects the moving target, the mobile phone marks it in the auxiliary preview image.
In another example, the mobile phone may perform moving object detection on the captured auxiliary preview image, automatically recognize the moving objects in it, and automatically mark them in the auxiliary preview image. If the mobile phone recognizes multiple moving objects, it may prompt the user to select the moving target to be shot and mark only the moving target selected by the user in the auxiliary preview image. Of course, the mobile phone may also mark multiple moving objects, which is not limited in this embodiment.
In another example, the mobile phone may not first perform image recognition or moving object detection on the auxiliary preview image, but instead prompt the user to select the moving target to be shot. The user may select the moving target in the main preview image or in the auxiliary preview image. For example, the user long-presses the position of the moving target in the main preview image. The mobile phone then performs image recognition around the position where the user's selection operation was performed, recognizes the size and position of the object there, and confirms that object as the moving target. The mobile phone may further extract image features of the moving target and, according to the extracted features, recognize and mark the moving target in the auxiliary preview image. The mobile phone may mark the moving target in both the main preview image and the auxiliary preview image, or only in the auxiliary preview image. For another example, the user long-presses the position of the moving target in the auxiliary preview image. Similarly, the mobile phone performs image recognition around the position of the user's selection operation, recognizes the size and position of the object there, confirms it as the moving target, and marks it. Optionally, the mobile phone may further extract image features of the moving target and mark it in the main preview image according to the extracted features. For example, as shown in (2) in fig. 6, the mobile phone may display prompt information 602 prompting the user to long-press the moving target to be shot in the auxiliary preview image. Optionally, the mobile phone may automatically enlarge the auxiliary preview image, or prompt the user to do so, making it easier for the user to select the moving target on the enlarged auxiliary preview image and improving the accuracy with which the mobile phone recognizes it. For example, as shown in (3) in fig. 6, only a full-screen auxiliary preview image is displayed in the interface 603, which is convenient for the user to select the moving target on it. After the user makes the selection, the mobile phone may automatically restore the auxiliary preview image to its original size, or prompt the user to do so. For example, as shown in (3) in fig. 6, the user can restore the full-screen auxiliary preview image to its original size and position by operating the restore control 604.
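One possible (assumed) way to carry the long-pressed target from one preview into the other is template matching on extracted image patches; this sketch assumes the two previews have already been brought to a similar scale (in practice the patch would first be rescaled by the zoom ratio), and all names are illustrative:

import cv2

def lock_target(press_xy, source_preview, other_preview, patch=80):
    # Cut a patch around the long-press position and locate it in the other
    # preview by normalized cross-correlation; returns a candidate second
    # mark frame (x, y, w, h) and a match confidence in [-1, 1].
    x, y = press_xy
    h, w = source_preview.shape[:2]
    x0 = max(0, min(x - patch // 2, w - patch))
    y0 = max(0, min(y - patch // 2, h - patch))
    template = source_preview[y0:y0 + patch, x0:x0 + patch]
    result = cv2.matchTemplate(other_preview, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return (top_left[0], top_left[1], patch, patch), score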
In still other embodiments, the mobile phone may not enter the motion shooting mode immediately after automatically detecting the moving target or determining it according to the user's selection, but only when it further detects that the moving target is moving out of the viewing range of the second view frame. In one example, the mobile phone may turn on only the telephoto camera before entering the motion shooting mode, and turn on both the telephoto camera and the main camera after entering it. This postpones the moment when the mobile phone enters the motion shooting mode, shortens the time during which two cameras are on simultaneously, and saves power.
Optionally, in each of the above embodiments, after the mobile phone enters the motion shooting mode, it may display guidance information or play audio or an animation according to the position of the second mark frame relative to the first mark frame, so as to guide the direction in which the user moves the mobile phone. For example, the mobile phone may display an arrow pattern, where the direction of the arrow indicates the direction in which to move the mobile phone and its length indicates the distance. For another example, the mobile phone may display guidance text containing the moving direction and moving distance.
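For illustration, the direction and length of such an arrow can be derived from the centers of the two mark frames; a minimal sketch (boxes as (x, y, w, h) in auxiliary-preview pixels; names are illustrative):

import math

def guidance_arrow(first_frame, second_frame):
    # Vector from the center of the first mark frame (the main preview's
    # viewing range) to the center of the second mark frame (the moving target).
    cx1 = first_frame[0] + first_frame[2] / 2
    cy1 = first_frame[1] + first_frame[3] / 2
    cx2 = second_frame[0] + second_frame[2] / 2
    cy2 = second_frame[1] + second_frame[3] / 2
    dx, dy = cx2 - cx1, cy2 - cy1
    angle_deg = math.degrees(math.atan2(dy, dx))  # direction to move the phone
    length = math.hypot(dx, dy)                   # proportional to the distance
    return angle_deg, length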
In still other embodiments, after the mobile phone automatically recognizes the moving target or the user manually selects it, i.e., after the moving target is locked, both the main preview image and the auxiliary preview image of the mobile phone perform auto focus (AF) and auto exposure (AE) on the locked moving target. In some examples, when performing automatic white balance (AWB) processing, the auxiliary preview image keeps its AWB processing consistent with that of the main preview image. In this way, the image effect of the auxiliary preview image stays as consistent as possible with that of the main preview image, preventing the user from being confused by a deviation between the two.
In still other embodiments, after the mobile phone locks the moving target, it also supports unlocking the moving target automatically or manually by the user. For example, when the moving target moves out of the auxiliary preview image, the mobile phone automatically unlocks it and determines the moving target anew according to the auxiliary preview image, or automatically exits the motion shooting mode. For another example, the user may double-click the locked moving target in the auxiliary preview image, or unlock it in another way. For another example, after detecting an operation of the user turning off the motion shooting mode, the mobile phone automatically unlocks the moving target. For another example, when detecting that the zoom magnification used by the mobile phone falls below the preset magnification, i.e., when the mobile phone exits the telephoto shooting mode, the mobile phone may also automatically unlock the moving target. The embodiments of this application do not limit the way of unlocking the moving target.
In still other embodiments, when the locked moving target moves out of the auxiliary preview image, the mobile phone may continue switching to a camera with a still larger field angle for capturing the auxiliary preview image; that is, the viewing range of the auxiliary preview image keeps increasing. For example, when the mobile phone has just entered the motion shooting mode, it captures the main preview image with the telephoto camera and the auxiliary preview image with the main camera. When the moving target is detected to have moved out of the auxiliary preview image, the mobile phone continues to capture the main preview image with the telephoto camera but now captures the auxiliary preview image with the wide-angle camera (or the ultra-wide-angle camera, etc.). For example, in the shooting interface 701 shown in (1) in fig. 7, the mobile phone captures the auxiliary preview image 702 with the main camera. At this point, the moving target "bird" locked by the mobile phone has flown out of the auxiliary preview image 702, so the mobile phone automatically switches to the wide-angle camera to capture the auxiliary preview image, i.e., it displays the shooting interface 703 shown in (2) in fig. 7. The locked moving target 705 now appears in the auxiliary preview image 704 in the shooting interface 703, and the user can move the mobile phone according to the relative position of the moving target 705 and the first mark frame 706 so that the moving target appears in the first mark frame 706, i.e., the image of the moving target is displayed in the main preview image. In other examples, if the moving target 705 does not appear in the auxiliary preview image 704 after the mobile phone switches to the wide-angle camera, the mobile phone may further switch to the ultra-wide-angle camera to capture the auxiliary preview image. Optionally, when the mobile phone switches the camera used for capturing the auxiliary preview image, it may display prompt information or play a voice prompt, such as the prompt information shown in (1) in fig. 7, to avoid confusing the user when the auxiliary preview image changes without the mobile phone having been moved.
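The escalation among cameras described above can be summarized as a small state function; a sketch under the assumption of three candidate cameras ordered by field angle (the names are placeholders, not part of this application):

AUX_CAMERAS = ["main", "wide", "ultra_wide"]  # narrow to wide field angle

def next_aux_camera(current, target_visible, target_in_first_frame):
    # Escalate to a wider camera while the locked target is outside the
    # auxiliary preview; fall back to the main camera once the target is
    # back inside the first mark frame (larger subject size, see below).
    if target_in_first_frame:
        return AUX_CAMERAS[0]
    i = AUX_CAMERAS.index(current)
    if not target_visible and i + 1 < len(AUX_CAMERAS):
        return AUX_CAMERAS[i + 1]
    return current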
Of course, the user may also manually switch the camera used for capturing the auxiliary preview image. That is, when the mobile phone is in the motion shooting mode, the shooting interface may display an option for switching the camera that captures the auxiliary preview image, so that the user can choose the camera according to their needs.
In some examples, after the locked moving target 705 moves into the first mark frame 706, the mobile phone may automatically switch back to the main camera to capture the auxiliary preview image. Since the moving target 705 has moved into the main preview image and the target appears larger in the auxiliary preview image captured by the main camera (compared with the auxiliary preview images captured by the wide-angle and ultra-wide-angle cameras), this makes it easier for the user to view the target in the auxiliary preview image.
In summary, the mobile phone can switch among cameras with different field angles to capture the auxiliary preview image according to the movement of the moving target, so that the moving target can still be observed in the auxiliary preview image even when its position changes greatly, which facilitates tracking shooting of the moving target.
4. Shooting the moving target
Further, in the motion shooting mode, the user may instruct the mobile phone to perform a shooting operation at an appropriate moment according to the auxiliary preview image and the main preview image. It should be noted that although the user can move the mobile phone with the help of the auxiliary preview image so that the moving target is in the main preview image, the user cannot predict the exact position of the moving target in the captured image (photo or video), because the position of the moving target is still changing. Therefore, the embodiments of this application also provide a method for performing secondary composition processing on an image captured in the motion shooting mode.
After the mobile phone performs the shooting operation in response to the user's instruction, on the one hand, the mobile phone obtains the captured image from the image currently captured by the telephoto camera, yielding an image consistent with the main preview image, i.e., what the user saw at the moment of shooting is what is captured. For example, as shown in (1) in fig. 8, the image 801 is the full-size image currently captured by the telephoto camera. The mobile phone crops the image 801 according to the current zoom magnification to obtain the image 802, and then digitally zooms the image 802 to obtain the image 803 shown in (2) in fig. 8, i.e., the image captured by the mobile phone this time. On the other hand, the mobile phone also saves the full-size image currently captured by the telephoto camera, i.e., the image not cropped according to the zoom magnification. Later, when the mobile phone determines that the image captured this time needs to be recomposed (also referred to as secondary composition), processing such as cropping can be performed on the saved full-size image. In some examples, the mobile phone may also save the full-size image captured by the main camera for subsequent recomposition, which is not limited in this embodiment of this application.
Further, the mobile phone intelligently evaluates the composition of the moving target in the captured image. If the composition is judged reasonable, secondary composition processing is not started, and the image captured this time is the final image. If the composition is judged unreasonable, secondary composition processing is started.
In some embodiments, whether the composition is reasonable may be determined according to the position of the moving target in the captured image.
For example, whether the composition is reasonable may be determined according to the position of the center point of the moving target in the captured image. If the center point of the moving target is within a first range of the current image (e.g., the central 1/2 of the image), the composition is considered reasonable. For example, as shown in (1) in fig. 9, the judgment may be performed on the image 802 cropped according to the zoom magnification used when shooting, or on the digitally zoomed image 803. If the center of the moving target "bird" is within the central 1/2 of the whole image, the composition is considered reasonable and secondary composition processing need not be started. In some examples, the mobile phone may then delete the previously retained full-size image.
If the center point of the moving target is outside the first range of the current image (e.g., the central 1/2 of the image) but within a second range (e.g., the central 3/4 of the image), the composition may be fine-tuned. For example, as shown in (2) in fig. 9, if the center of the moving target "bird" is outside the central 1/2 but within the central 3/4 of the whole image, the mobile phone may automatically recompose based on the full-size image captured by the telephoto camera, for example by shifting the original cropping position slightly so that the center point of the moving target falls within the central 1/2 of the re-cropped image. Digital zooming is then performed on the re-cropped image to obtain a new image, which is the final captured image that the user sees when browsing the photo.
If the center point of the moving target is outside the second range of the current image (e.g., the central 3/4 of the image), the composition is considered unreasonable and the user needs to perform secondary composition manually. That is, after this shot, the mobile phone retains two images: one obtained by cropping and zooming the image captured by the telephoto camera according to the zoom magnification used at the time of shooting, which the user sees when browsing the photo, and the other the full-size image captured by the telephoto camera, which can be called up when the user manually performs secondary composition processing.
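For illustration, the three-way decision above can be sketched as follows; reading the "central 1/2" and "central 3/4" ranges as rectangles spanning half and three-quarters of each image dimension is an assumption, and all names are illustrative:

def composition_action(center, img_w, img_h):
    # Normalized offset of the target's center from the image center:
    # 0 at the center, 1 at the image border.
    dx = abs(center[0] - img_w / 2) / (img_w / 2)
    dy = abs(center[1] - img_h / 2) / (img_h / 2)
    if max(dx, dy) <= 0.5:
        return "keep"          # reasonable; the full-size backup may be deleted
    if max(dx, dy) <= 0.75:
        return "auto_recrop"   # fine-tune by shifting the crop window
    return "manual"            # keep the full-size image for manual recomposition

def auto_recrop(full_img, target_center, crop_w, crop_h):
    # Re-cut the full-size image so the target center lands at the center of
    # the new crop, clamped to the image borders; digital zoom follows as before.
    h, w = full_img.shape[:2]
    x0 = max(0, min(int(target_center[0] - crop_w / 2), w - crop_w))
    y0 = max(0, min(int(target_center[1] - crop_h / 2), h - crop_h))
    return full_img[y0:y0 + crop_h, x0:x0 + crop_w]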
After the image is captured, if the mobile phone detects that the composition of the captured image is unreasonable, it may prompt the user that the composition of the currently captured image is unreasonable and ask whether secondary composition processing is needed. Alternatively, when the user browses the image, the mobile phone may start secondary composition processing on the image according to the user's instruction.
For example, the interface 101 shown in (1) in fig. 10A is a browsing interface in which the user views the image captured this time through the gallery. A modification control 102 is displayed in the interface 101 as the function control with which the user opens secondary composition processing. In response to the user clicking the modification control 102, the mobile phone calls up the retained full-size image captured by the telephoto camera and displays the modification interface 103 shown in (2) in fig. 10A. The modification interface 103 displays the full-size image captured by the telephoto camera, and a third mark frame 104 is displayed in the full-size image, marking the position of the image captured this time within the full-size image. The user can reselect the area to be cropped for composition, for example by dragging the third mark frame 104. For example, the user drags the third mark frame 104 to the position shown in the interface 105 in (3) in fig. 10A; in response to the user clicking the confirmation control 106, the mobile phone crops and digitally zooms the full-size image of the telephoto camera again according to the current position of the third mark frame, obtaining the image after secondary composition. The mobile phone may retain only the image after secondary composition processing, or retain both it and the originally captured image.
It can be understood that the above embodiments describe secondary composition of an image obtained after the user performs a shooting operation; of course, secondary composition may also be performed on the previewed image during camera preview in a similar way, and details are not repeated here.
In addition, the above embodiments take photographing in the camera application as an example; the auxiliary preview function of this application can also be used in other scenarios of the camera application, such as continuous shooting and video recording.
Continuous shooting scenario:
If a moving target is detected, or the mobile phone enters the motion shooting mode according to the user's instruction, the mobile phone performs preview using the methods introduced in the above embodiments. When it detects that the user performs a continuous shooting operation, for example long-pressing the shooting control, the mobile phone not only shoots multiple photos in the usual way but also processes them using the secondary composition processing described in the above embodiments. That is, for each photo it determines whether the composition is reasonable; if so, secondary composition processing is not started; otherwise, the composition of the photo is automatically fine-tuned, or the full-size image captured by the telephoto camera at the corresponding moment is retained. When the user browses the continuously shot photos through the gallery, a function control for secondary composition may be provided in the preferred interface, so that the user can manually perform secondary composition processing.
For example, (1) in fig. 10B shows an interface 107 for browsing the continuously shot photos through the gallery. The user can enter the preferred interface 109 of this burst of photos, shown in (2) in fig. 10B, by clicking the selection control 108. A modification control 110 is provided on the preferred interface 109 for entering the secondary composition interface of the photo selected in the preferred interface 109. For related operations, refer to the secondary composition process for a single photo in fig. 10A; details are not repeated here.
Of course, in other examples, after the user selects one or more of the continuously shot photos to be saved separately, the user may manually perform secondary composition on those photos in the browsing interface of the separately saved photos. The embodiments of this application do not limit the interface for secondary composition.
Video recording scenario:
When the mobile phone has started the video recording function but has not yet started recording, the method provided in this application may be used: at a high zoom magnification (a zoom magnification greater than or equal to the preset magnification), the telephoto camera is used to capture both the main preview image and the auxiliary preview image. When a moving target is determined to exist according to the auxiliary preview image, or the motion shooting mode is entered according to the user's instruction, the telephoto camera is used to capture the main preview image and the main camera is used to capture the auxiliary preview image. The user can then track the moving target through the auxiliary preview image with its larger viewing range. During recording, after the moving target moves out of the main preview image, the user can move the mobile phone with the help of the auxiliary preview image so that the moving target reappears in the main preview image (i.e., the recording range).
In other embodiments, during video recording, in addition to the video displayed in the main preview image (denoted as the first video), the mobile phone keeps a backup video (denoted as the second video) composed of the full-size images captured by the telephoto camera, aligned with the time axis of the recorded video. Subsequently, the mobile phone judges the reasonableness of the composition of each frame of the first video. If the composition of some frames of the first video is unreasonable, secondary composition processing can be performed on the image frames at the corresponding times in the second video, and the images after secondary composition processing replace the corresponding frames in the first video, thereby realizing tracking shooting of the moving target. For the specific processing of each frame, refer to the related description above; details are not repeated here.
In some examples, after entering the telephoto shooting mode but before entering the motion shooting mode, the mobile phone may capture the video stream with the telephoto camera at a default resolution, e.g., full high definition (FHD). After the mobile phone enters the motion shooting mode, it uses the video stream captured by the main camera as the auxiliary preview image but still captures a video stream with the telephoto camera, whose resolution may now be switched to 4K. Then, on the one hand, the mobile phone crops and compresses the 4K video stream to the default resolution, default encoding, and so on according to the currently used zoom magnification, obtaining the recorded video, i.e., the first video. On the other hand, the mobile phone retains the 4K video stream and encodes it to obtain the second video, which can be used for subsequent secondary composition processing. Setting the telephoto video stream to a high resolution in the motion shooting mode ensures that the stream does not need to be stopped and restarted while the mobile phone shoots the first video and retains the second video, keeping the recording process smooth and free of stutter.
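A sketch of this dual-output handling of one 4K telephoto frame (reusing digital_zoom from the earlier sketch; the two cv2.VideoWriter objects are assumed to be preconfigured, and all names are illustrative):

import cv2

FHD = (1920, 1080)  # default recording resolution (width, height)

def process_recorded_frame(frame_4k, used_zoom, tele_base_zoom,
                           first_writer, backup_writer):
    # First video: crop to the zoom magnification in use, compress to FHD.
    main = digital_zoom(frame_4k, used_zoom / tele_base_zoom)
    first_writer.write(cv2.resize(main, FHD))
    # Second (backup) video: keep the full 4K frame for secondary composition.
    backup_writer.write(frame_4k)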
For example, fig. 10C shows an interface for browsing the first video through the gallery application. The browsing interface includes a first area 111 and a second area 113. The first area 111 is used for browsing each image frame of the first video, and the second area 113 displays previews of multiple image frames contained in the first video. The user may play the first video, or pause it, by clicking the play control 114. The interface also includes a modification control 112, which can be used to enter the secondary composition interface for the image frame currently displayed in the first area 111. For related operations, refer to the secondary composition process for a single photo in fig. 10A; details are not repeated here.
Fig. 11 is a schematic flowchart of a shooting method provided in an embodiment of this application, which specifically includes the following steps:
S1101. Start the camera.

S1102. The mobile phone displays a preview image.

In general, the default zoom magnification of the mobile phone is "1×". At this time, the mobile phone displays only one preview image, using the image captured by the main camera. Afterwards, the mobile phone may increase or decrease the zoom magnification automatically according to a specific shooting scene or according to the user's instruction.

S1103. Determine whether the zoom magnification used by the mobile phone is greater than or equal to the preset magnification. If so, the mobile phone performs step S1104; otherwise, it continues with S1102.

S1104. The mobile phone displays the main preview image and the auxiliary preview image.

The mobile phone processes the image captured by the telephoto camera to obtain the main preview image and the auxiliary preview image respectively. The zoom magnification of the main preview image is consistent with the zoom magnification currently used by the mobile phone, while the zoom magnification of the auxiliary preview image is generally smaller. The auxiliary preview image includes the first mark frame, which marks the part of the auxiliary preview image whose viewing range is the same as that of the main preview image. Further, the mobile phone performs image recognition on the auxiliary preview image to recognize whether it contains a moving target; for the specific recognition method, refer to the related description above.
S1105. The mobile phone determines whether a moving target is locked. If a moving target is locked, step S1106 is performed; otherwise, step S1104 continues to be performed.

The moving target is considered locked if a moving target is recognized in the auxiliary preview image, or if it is detected that the user has manually entered the motion shooting mode and the moving target has been determined according to the user's indication.

S1106. The mobile phone displays the main preview image and the auxiliary preview image.

At this time, the mobile phone continues to capture the main preview image with the telephoto camera, but captures the auxiliary preview image with the main camera.

S1107. The mobile phone refreshes the positions of the first mark frame and the second mark frame in the auxiliary preview image.

The mobile phone refreshes the main preview image and the auxiliary preview image at a certain frequency, and refreshes the positions of the first mark frame and the second mark frame in the auxiliary preview image.

S1108. The mobile phone determines whether the moving target is unlocked. If so, step S1103 is performed; otherwise, step S1107 is performed.

The user may manually unlock the moving target. Alternatively, the mobile phone may automatically unlock the moving target after detecting that the moving target has moved out of the auxiliary preview image. Alternatively, the mobile phone automatically unlocks the moving target when detecting that the user exits the motion shooting mode.
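The loop of fig. 11 can be condensed into a small state function; a schematic sketch only (camera names are placeholders, not part of this application):

def preview_step(state, zoom, preset=10.0):
    # One pass of the fig. 11 loop: which camera feeds which preview.
    if zoom < preset:                    # S1103 fails -> back to S1102
        state["locked"] = False
        return {"main_preview": "main_camera", "aux_preview": None}
    if not state.get("locked"):          # S1104: both previews from the telephoto camera
        return {"main_preview": "tele_camera", "aux_preview": "tele_camera"}
    # S1106: main preview from the telephoto camera, auxiliary from the main camera
    return {"main_preview": "tele_camera", "aux_preview": "main_camera"}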
Fig. 12 is a schematic flowchart of another shooting method provided in an embodiment of this application, which specifically includes the following steps:
S1201. The mobile phone starts the camera application, and the camera enters the motion shooting mode.

S1202. The telephoto camera of the mobile phone buffers multiple captured images to form a full-size image queue.

S1203. The mobile phone detects a shooting operation performed by the user at a first moment.

S1204. The mobile phone selects an image frame from the full-size image queue according to the first moment and backs it up.

S1205. The mobile phone crops the selected image frame according to the zoom magnification used by the camera at the first moment and performs digital zoom processing.

S1206. The mobile phone saves the cropped and processed image as the image captured at the first moment.

The user may browse this image through the gallery.

S1207. Determine whether the image captured at the first moment needs secondary composition. If so, perform step S1208; otherwise, the process ends.

The mobile phone may automatically run an algorithm on the image obtained in step S1206, or on the image cropped in step S1205, to determine whether the position of the moving target in the image is reasonable. If it is reasonable, secondary composition is not needed; otherwise, it is. For the specific reasonableness judgment method, refer to the related description above.

Alternatively, when the user enters the gallery to view photos, the user may browse the image of step S1206; if the user is not satisfied, secondary composition of the image can be started manually (shown as steps S1210 and S1211 in the figure).

S1208. The mobile phone loads the backed-up full-size image.

S1209. The mobile phone re-crops the backed-up full-size image according to the zoom magnification at the first moment and performs digital zooming and other processing to obtain the recomposed image.
An embodiment of this application further provides a chip system. As shown in fig. 13, the chip system includes at least one processor 1301 and at least one interface circuit 1302. The processor 1301 and the interface circuit 1302 may be interconnected by wires. For example, the interface circuit 1302 may be used to receive signals from other apparatuses (e.g., the memory of the electronic device 100), or to send signals to other apparatuses (e.g., the processor 1301). For example, the interface circuit 1302 may read instructions stored in the memory and send the instructions to the processor 1301. When the instructions are executed by the processor 1301, the electronic device may be caused to perform the steps performed by the electronic device 100 (e.g., a mobile phone) in the above embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of this application.
The embodiment of the present application further provides an apparatus, where the apparatus is included in an electronic device, and the apparatus has a function of implementing the behavior of the electronic device in any one of the above-mentioned embodiments. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes at least one module or unit corresponding to the above functions. For example, a detection module or unit, a display module or unit, and a determination module or unit, etc.
Embodiments of the present application further provide a computer storage medium, which includes computer instructions, and when the computer instructions are executed on an electronic device, the electronic device is caused to execute any one of the methods in the foregoing embodiments.
The embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to execute any one of the methods in the above embodiments.
Embodiments of the present application further provide a graphical user interface on an electronic device, where the electronic device has a display screen, a camera, a memory, and one or more processors, where the one or more processors are configured to execute one or more computer programs stored in the memory, and the graphical user interface includes a graphical user interface displayed when the electronic device executes any of the methods in the foregoing embodiments.
It is to be understood that the above-mentioned terminal and the like include hardware structures and/or software modules corresponding to the respective functions for realizing the above-mentioned functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
In the embodiment of the present application, the terminal and the like may be divided into functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, the division of the modules in the embodiment of the present invention is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A method for shooting a moving target, comprising:

in response to an operation of starting a camera application, displaying, by electronic equipment, a first view frame, wherein the first view frame is used for displaying a picture captured by a first camera of the electronic equipment;

in response to detecting that a user increases a zoom magnification of the camera application, the first view frame is used for displaying a first picture captured by a second camera of the electronic equipment, wherein a view-finding range of the second camera is smaller than that of the first camera;

when the zoom magnification of the camera application is greater than or equal to a preset magnification, the electronic equipment further displays a second view frame used for displaying a second picture captured by the second camera, wherein a view-finding range corresponding to the second picture is larger than a view-finding range corresponding to the first picture, and the second view frame comprises a first mark frame used for marking the view-finding range of the first view frame in the picture displayed in the second view frame; and

in response to detecting a moving target or receiving an operation of the user indicating to enter a motion shooting mode, the first view frame is used for displaying the picture captured by the second camera, the second view frame is used for displaying the picture captured by the first camera, and the second view frame further comprises a second mark frame used for marking the moving target in the picture displayed in the second view frame.
2. The method according to claim 1, further comprising:
when it is detected that the moving target has moved out of the picture displayed in the second viewfinder frame, or when an operation in which the user indicates to turn off the motion shooting mode is detected, using the first viewfinder frame to display a third picture captured by the second camera and using the second viewfinder frame to display a fourth picture captured by the second camera, wherein a framing range corresponding to the fourth picture is larger than a framing range corresponding to the third picture; or
when it is detected that the zoom magnification of the camera application is smaller than the preset magnification, using the first viewfinder frame to display a picture captured by the first camera, wherein the electronic device no longer displays the second viewfinder frame.
3. The method according to claim 1, further comprising:
when it is detected that the moving target has moved out of the picture displayed in the second viewfinder frame, using the first viewfinder frame to display a picture captured by the second camera and using the second viewfinder frame to display a picture captured by a third camera, wherein a framing range of the third camera is larger than the framing range of the first camera.
4. The method according to any one of claims 1-3, further comprising:
displaying, by the electronic device, prompt information, or playing a voice prompt, according to the relative positions of the first mark frame and the second mark frame in the second viewfinder frame, so as to guide the user to move the electronic device.
5. The method according to any one of claims 1-4, further comprising:
in response to detecting that the user performs a photographing operation, generating, by the electronic device, a first photo from a first image captured by the second camera;
wherein, when a center point of the moving target in the first image is located outside a first area of the first image, the center point of the moving target in the first photo is located within a first area of the first photo.
6. The method according to any one of claims 1-4, further comprising:
in response to detecting that the user performs a photographing operation, generating, by the electronic device, a first photo from a first image captured by the second camera;
wherein, when the center point of the moving target in the first image is located outside a first area of the first image, the electronic device further saves the first image.
7. The method according to any one of claims 1-4, further comprising:
in response to detecting that the user performs a photographing operation, generating, by the electronic device, a first photo from a first image captured by the second camera;
wherein, when the center point of the moving target in the first image is located outside a first area of the first image but within a second area, the center point of the moving target in the first photo is located within a first area of the first photo; and
when the center point of the moving target in the first image is located outside the second area of the first image, the electronic device further saves the first image.
8. The method according to claim 6 or 7, further comprising:
when the user browses the first photo through a gallery application, displaying a first control in a browsing interface of the first photo;
in response to detecting an operation of the user on the first control, displaying, by the electronic device, a modification interface of the first photo, wherein the modification interface comprises the first image and a third mark frame, and the third mark frame is used to mark, on the first image, the framing range of the first photo;
in response to detecting a drag operation of the user on the third mark frame, moving, by the electronic device, the position of the third mark frame on the first image; and
in response to detecting an operation of the user on a confirmation control, generating, by the electronic device, a second photo from the image within the third mark frame, wherein the second photo has the same zoom magnification as the first photo, and the moving target in the second photo is located within a first area of the second photo.
9. The method according to any one of claims 1-8, further comprising:
in response to detecting that the user performs a continuous shooting operation, generating, by the electronic device, a corresponding third photo for each of a plurality of second images captured by the second camera;
wherein, when the center point of the moving target in any one of the plurality of second images is located outside a first area of that image, the center point of the moving target in the third photo corresponding to that image is located within a first area of that third photo.
10. The method according to any one of claims 1-8, further comprising:
in response to detecting that the user performs a continuous shooting operation, generating, by the electronic device, a corresponding third photo for each of a plurality of second images captured by the second camera;
wherein, when the center point of the moving target in any one of the plurality of second images is located outside a first area of that image, the electronic device saves that image.
11. The method according to any one of claims 1-8, further comprising:
in response to detecting that the user performs a continuous shooting operation, generating, by the electronic device, a corresponding third photo for each of a plurality of second images captured by the second camera;
wherein, when the center point of the moving target in any one of the plurality of second images is located outside a first area of that image but within a second area, the center point of the moving target in the third photo corresponding to that image is located within a first area of that third photo; and
when the center point of the moving target in any one of the plurality of second images is located outside the second area of that image, the electronic device saves that image.
12. The method according to claim 9 or 11, further comprising:
when the user browses the third photo through the gallery application, displaying a first control in a selection interface of the third photo;
in response to detecting an operation of the user on the first control, displaying, by the electronic device, a modification interface of the selected third photo, wherein the modification interface comprises the second image corresponding to the selected third photo and a fourth mark frame, and the fourth mark frame is used to mark, on that second image, the framing range of the selected third photo;
in response to detecting a drag operation of the user on the fourth mark frame, moving, by the electronic device, the position of the fourth mark frame on the second image corresponding to the selected third photo; and
in response to detecting an operation of the user on a confirmation control, generating, by the electronic device, a fourth photo from the image within the fourth mark frame, wherein the fourth photo has the same zoom magnification as the third photo, and the moving target in the fourth photo is located within a first area of the fourth photo.
13. The method according to any one of claims 1-12, wherein the first camera is a medium-focus camera and the second camera is a telephoto camera.
14. The method according to claim 3, wherein the third camera is a wide-angle camera or an ultra-wide-angle camera.
15. An electronic device, comprising a processor, a memory, and a touch screen, wherein the memory and the touch screen are coupled to the processor, the memory is configured to store computer program code, and the computer program code comprises computer instructions that, when read from the memory and executed by the processor, cause the electronic device to perform the method for shooting a moving target according to any one of claims 1-14.
16. A computer-readable storage medium comprising computer instructions that, when run on an electronic device, cause the electronic device to perform the method for shooting a moving target according to any one of claims 1-14.
17. A chip system comprising one or more processors, wherein, when the one or more processors execute instructions, the one or more processors perform the method for shooting a moving target according to any one of claims 1-14.
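For readers tracing the claims, the following sketch illustrates the viewfinder-switching behavior recited in claims 1-3. It is a non-authoritative illustration: the camera roles, the enum and function names, and the treatment of the preset magnification as a plain threshold parameter are assumptions, and the claims do not fix any particular magnification value.

```kotlin
// Illustrative sketch of the viewfinder switching of claims 1-3.
enum class CameraRole { MID_FOCUS, TELE, WIDE }

data class ViewfinderState(
    val mainSource: CameraRole,   // feeds the first (main) viewfinder frame
    val auxSource: CameraRole?    // feeds the second (auxiliary) frame, if shown
)

fun updateViewfinders(
    zoom: Float,                  // current zoom magnification
    presetZoom: Float,            // the claimed "preset magnification"
    motionMode: Boolean,          // moving target detected or user-indicated
    targetLost: Boolean           // target moved out of the auxiliary picture
): ViewfinderState = when {
    // Below the preset magnification only the main viewfinder is displayed.
    zoom < presetZoom -> ViewfinderState(CameraRole.TELE, null)
    // Claim 3: if the target leaves the auxiliary picture, a third camera
    // with an even larger framing range may feed the auxiliary frame.
    motionMode && targetLost -> ViewfinderState(CameraRole.TELE, CameraRole.WIDE)
    // Claim 1: in the motion shooting mode, the tele camera feeds the main
    // frame while the wider mid-focus camera feeds the auxiliary frame.
    motionMode -> ViewfinderState(CameraRole.TELE, CameraRole.MID_FOCUS)
    // Otherwise both frames come from the tele camera, the auxiliary
    // picture with a larger framing range than the main picture.
    else -> ViewfinderState(CameraRole.TELE, CameraRole.TELE)
}
```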
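Similarly, the re-framing behavior of claims 5-7 (and its continuous-shooting counterpart in claims 9-11) can be pictured as sliding a fixed-size crop window over the captured image until the target's center point returns to the central first area, which keeps the zoom magnification unchanged. The sketch below is an assumption-laden illustration: the geometry helpers, the clamping policy, and all names are hypothetical, and it further assumes the crop window fits inside the captured image.

```kotlin
// Illustrative sketch of the recentering of claims 5-7: when the moving
// target's center falls outside the first area of the captured image, the
// photo is generated from a same-sized (same zoom) crop window shifted so
// that the center lands inside the first area.
data class Box(val left: Int, val top: Int, val width: Int, val height: Int) {
    fun contains(x: Int, y: Int): Boolean =
        x in left until (left + width) && y in top until (top + height)

    fun centeredOn(x: Int, y: Int): Box =
        Box(x - width / 2, y - height / 2, width, height)
}

fun photoWindow(
    sensor: Box,        // full image captured by the second camera
    defaultCrop: Box,   // crop window implied by the current zoom magnification
    firstArea: Box,     // central first area of the default crop
    targetX: Int,       // detected center point of the moving target
    targetY: Int
): Box {
    // No adjustment is needed while the target is already well placed.
    if (firstArea.contains(targetX, targetY)) return defaultCrop
    // Shift a same-sized window onto the target, clamped to the sensor
    // bounds so the photo keeps the same zoom magnification.
    val shifted = defaultCrop.centeredOn(targetX, targetY)
    val left = shifted.left.coerceIn(sensor.left, sensor.left + sensor.width - defaultCrop.width)
    val top = shifted.top.coerceIn(sensor.top, sensor.top + sensor.height - defaultCrop.height)
    return Box(left, top, defaultCrop.width, defaultCrop.height)
}
```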
CN202010432123.8A 2020-05-20 2020-05-20 Shooting method and electronic equipment Pending CN113709354A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010432123.8A CN113709354A (en) 2020-05-20 2020-05-20 Shooting method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010432123.8A CN113709354A (en) 2020-05-20 2020-05-20 Shooting method and electronic equipment

Publications (1)

Publication Number Publication Date
CN113709354A CN113709354A (en) 2021-11-26

Family

ID=78645650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010432123.8A Pending CN113709354A (en) 2020-05-20 2020-05-20 Shooting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113709354A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101998052A (en) * 2009-08-07 2011-03-30 奥林巴斯映像株式会社 Photographing apparatus
US20120268641A1 (en) * 2011-04-21 2012-10-25 Yasuhiro Kazama Image apparatus
CN103914689A (en) * 2014-04-09 2014-07-09 百度在线网络技术(北京)有限公司 Picture cropping method and device based on face recognition
CN104991725A (en) * 2015-07-28 2015-10-21 北京金山安全软件有限公司 Picture clipping method and system
CN111010506A (en) * 2019-11-15 2020-04-14 华为技术有限公司 Shooting method and electronic equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278043A (en) * 2021-04-30 2022-11-01 华为技术有限公司 Target tracking method and related device
WO2022228259A1 (en) * 2021-04-30 2022-11-03 华为技术有限公司 Target tracking method and related apparatus
CN114422692A (en) * 2022-01-12 2022-04-29 西安维沃软件技术有限公司 Video recording method and device and electronic equipment
WO2023134583A1 (en) * 2022-01-12 2023-07-20 维沃移动通信有限公司 Video recording method and apparatus, and electronic device
CN114422692B (en) * 2022-01-12 2023-12-08 西安维沃软件技术有限公司 Video recording method and device and electronic equipment
WO2023231600A1 (en) * 2022-05-30 2023-12-07 荣耀终端有限公司 Photographing method and electronic device
CN116051368A (en) * 2022-06-29 2023-05-02 荣耀终端有限公司 Image processing method and related device
CN116051368B (en) * 2022-06-29 2023-10-20 荣耀终端有限公司 Image processing method and related device
CN115278030A (en) * 2022-07-29 2022-11-01 维沃移动通信有限公司 Shooting method and device and electronic equipment

Similar Documents

Publication Publication Date Title
US11831977B2 (en) Photographing and processing method and electronic device
CN110072070B (en) Multi-channel video recording method, equipment and medium
CN112532869B (en) Image display method in shooting scene and electronic equipment
WO2020073959A1 (en) Image capturing method, and electronic device
CN113489894B (en) Shooting method and terminal in long-focus scene
CN112887583B (en) Shooting method and electronic equipment
CN113709354A (en) Shooting method and electronic equipment
CN113556461A (en) Image processing method and related device
CN113194242B (en) Shooting method in long-focus scene and mobile terminal
US11949978B2 (en) Image content removal method and related apparatus
WO2021185296A1 (en) Photographing method and device
CN113497890B (en) Shooting method and equipment
CN115734071A (en) Image processing method and device
CN116055874B (en) Focusing method and electronic equipment
CN114866860A (en) Video playing method and electronic equipment
CN114466101B (en) Display method and electronic equipment
CN115037872B (en) Video processing method and related device
CN115344176A (en) Display method and electronic equipment
CN117714849A (en) Image shooting method and related equipment
CN113452895A (en) Shooting method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211126