CN116095464A - Terminal shooting method and terminal equipment


Info

Publication number: CN116095464A (application CN202210829336.3A)
Authority: CN (China)
Prior art keywords: control, user, area, camera, image
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN116095464B
Inventor: 王晓杨
Assignee (current and original): Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Related application: CN202311705028.0A, published as CN117729419A


Abstract

The application provides a terminal shooting method and terminal equipment that allow a user to freely specify the shooting mode of a plurality of cameras. The method includes: displaying a first preview interface of a camera application, where the first preview interface includes M areas and a movable first control, each of the M areas corresponds to one camera, and each area displays an image collected by its corresponding camera; in response to a first operation by the user on the first control in a first area, shooting with a first camera of the N cameras to obtain a first file, where the first area is one of the M areas; in response to a drag operation by the user on the first control, controlling the first control to move to a second area, where the second area is one of the M areas; and in response to a second operation by the user on the first control in the second area, shooting with a second camera of the N cameras to obtain a second file.

Description

Terminal shooting method and terminal equipment
Technical Field
The present invention relates to the field of terminals, and in particular, to a terminal shooting method and terminal equipment.
Background
With the continuous progress of terminal technology, terminal devices have introduced multi-camera modules to meet users' increasingly rich shooting needs.
At present, a terminal device can provide a special shooting mode for the user: in this mode, the terminal can invoke a plurality of cameras to collect preview images according to the user's needs, and after the user taps the shutter key, the terminal device obtains the shooting results of the plurality of cameras.
However, in this shooting mode the terminal device combines the images shot by the plurality of cameras into one shooting result and stores it, and the user cannot freely specify the shooting mode of the plurality of cameras.
Disclosure of Invention
The application provides a terminal shooting method and terminal equipment, which help solve the problem that, when a terminal device shoots with a plurality of cameras, the images shot by the cameras can only be combined into one shooting result.
In a first aspect, a terminal shooting method is provided, applied to a terminal device on which N cameras are enabled. The method includes: displaying a first preview interface of the camera application, where the first preview interface includes M areas and a movable first control, each of the M areas corresponds to one camera, each area displays an image collected by its corresponding camera, N is an integer greater than or equal to 1, and M is an integer greater than or equal to 2; in response to a first operation by the user on the first control in a first area, shooting with a first camera of the N cameras to obtain a first file, where the first area corresponds to the first camera and is one of the M areas; in response to a drag operation by the user on the first control, controlling the first control to move to a second area, where the second area is one of the M areas; and in response to a second operation by the user on the first control in the second area, shooting with a second camera of the N cameras to obtain a second file, where the second area corresponds to the second camera.
In this application, the images collected by the N cameras can be displayed in the M areas of the first preview interface of the terminal device. To allow multiple shooting results to be obtained by shooting with multiple cameras in a time-shared manner, or by shooting with multiple cameras simultaneously, a first control is displayed in the first preview interface in addition to the existing shutter key. The user can move the first control to the area they wish to shoot, so that the image displayed in that area is captured by the camera corresponding to that area and a file for that area is obtained separately.
Based on this technical solution, by adding the movable first control, the user can use each camera in a time-shared manner according to their own ideas, bringing the shooting advantages of the terminal device's multi-camera module into play. This solves the problem that, when the terminal device shoots with multiple cameras, the pictures shot by the cameras can only be combined into one shooting result, and it improves the user's shooting experience.
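The region-to-camera routing described above can be pictured with a short sketch. The following Kotlin model is illustrative only: the names PreviewRegion, ShutterControl, and dispatchCapture are assumptions and do not come from the patent; it merely shows how a capture request could be routed to the camera of whichever area currently hosts the movable control.

```kotlin
// Illustrative model only: all names are assumptions, not taken from the patent.
data class PreviewRegion(val id: Int, val cameraId: Int)

class ShutterControl(var regionId: Int)

// Each of the M regions corresponds to one of the N cameras; a capture request
// is routed to the camera of whichever region currently hosts the control.
fun dispatchCapture(regions: List<PreviewRegion>, control: ShutterControl): Int? =
    regions.firstOrNull { it.id == control.regionId }?.cameraId

fun main() {
    val regions = listOf(PreviewRegion(1, cameraId = 0), PreviewRegion(2, cameraId = 1))
    val control = ShutterControl(regionId = 1)
    println(dispatchCapture(regions, control)) // 0: first camera (first area)
    control.regionId = 2                        // user drags control to second area
    println(dispatchCapture(regions, control)) // 1: second camera (second area)
}
```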
With reference to the first aspect, in some implementations of the first aspect, the layout manner of the images displayed in the M areas includes a stitched layout or a picture-in-picture layout. Controlling the first control to move to the second area in response to the user's drag operation on the first control includes: in response to the drag operation by the user on the first control, controlling the first control to move from the first area to the second area.
In this application, when the first control is displayed in the first area, if the user wishes to capture the image of the second area, the user can drag the first control from the first area to the second area, so that the image displayed in the second area can be captured separately by the camera corresponding to the second area.
The M regions in the first preview interface may have a plurality of layouts, including a stitched layout or a picture-in-picture layout.
With reference to the first aspect, in some implementations of the first aspect, if the first operation includes a click operation, the first file is an image file; or, if the first operation includes a long-press operation, the first file is a video file.
In this application, the user can click the first control in the first area, and the click operation triggers the image shooting function of the first camera to obtain an image file. Alternatively, the user can long-press the first control in the first area, and the long-press operation triggers the video shooting function of the first camera to obtain a video file.
With reference to the first aspect, in some implementations of the first aspect, if the second operation includes a click operation, the second file is an image file; or, if the second operation includes a long-press operation, the second file is a video file.
In this application, the user can click the first control in the second area, and the click operation triggers the image shooting function of the second camera to obtain an image file. Alternatively, the user can long-press the first control in the second area, and the long-press operation triggers the video shooting function of the second camera to obtain a video file.
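A minimal sketch of this operation-to-file mapping is given below, assuming a press-duration threshold distinguishes a click from a long press. The 500 ms threshold and all identifier names are illustrative assumptions; the patent does not specify them.

```kotlin
// Hedged sketch: threshold and names are assumed, not specified by the patent.
sealed interface CaptureResult
data class ImageFile(val cameraId: Int) : CaptureResult
data class VideoFile(val cameraId: Int) : CaptureResult

const val LONG_PRESS_THRESHOLD_MS = 500L  // assumed long-press threshold

fun onControlReleased(pressDurationMs: Long, cameraId: Int): CaptureResult =
    if (pressDurationMs < LONG_PRESS_THRESHOLD_MS) ImageFile(cameraId) // click -> photo
    else VideoFile(cameraId)                                           // long press -> video

fun main() {
    println(onControlReleased(120L, cameraId = 0)) // ImageFile(cameraId=0)
    println(onControlReleased(900L, cameraId = 1)) // VideoFile(cameraId=1)
}
```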
With reference to the first aspect, in some implementations of the first aspect, the layout manner of the images displayed in the M areas includes a stitched layout or a picture-in-picture layout. Before the first file is obtained by shooting with the first camera of the N cameras in response to the first operation by the user on the first control in the first area, the method further includes: in response to a drag operation by the user on the first control, controlling the first control to move from a first junction position to the first area, where the first junction position is located at the junction of the M areas.
In this application, the initial position of the first control is the first junction position, and when no shooting is being performed, the first control remains attached to and displayed at the first junction position. If the user wishes to capture an image of the first area, the user can drag the first control from the first junction position to the first area, so that the image displayed in the first area can be captured separately by the camera corresponding to the first area.
With reference to the first aspect, in some implementations of the first aspect, after the first file is captured by the first camera of the N cameras in response to the first operation by the user on the first control in the first area, the method further includes: controlling the first control to move from the first area back to the first junction position. Controlling the first control to move to the second area in response to the user's drag operation on the first control includes: in response to the drag operation by the user on the first control, controlling the first control to move from the first junction position to the second area. After the second file is captured by the second camera of the N cameras in response to the second operation by the user on the first control in the second area, the method further includes: controlling the first control to move from the second area back to the first junction position.
In this application, after the terminal device captures the first file with the first camera, it controls the first control to return to the first junction position. Then, if the user wishes to shoot the image of the second area, the user can drag the first control from the first junction position to the second area. After the terminal device captures the second file with the second camera, it controls the first control to return from the second area to the first junction position.
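The snap-back behavior can be modeled as below. This is a sketch under assumed names (MovableControl, snapBack) and assumed screen coordinates; the patent describes only the behavior, not an implementation.

```kotlin
// Sketch of snap-back: after a capture completes, the control returns from the
// capture area to its home junction position. Names/coordinates are assumptions.
data class Point(val x: Float, val y: Float)

class MovableControl(val homeJunction: Point) {
    var position: Point = homeJunction
        private set

    fun dragTo(p: Point) { position = p }

    // Called after the first/second file has been captured.
    fun snapBack() { position = homeJunction }
}

fun main() {
    val control = MovableControl(homeJunction = Point(540f, 960f))
    control.dragTo(Point(540f, 400f))   // user drags control into the first area
    // ... capture the first file via the first camera ...
    control.snapBack()
    println(control.position)           // back at the junction: Point(x=540.0, y=960.0)
}
```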
With reference to the first aspect, in some implementations of the first aspect, if the first operation includes a release operation, the first file is an image file; or, if the first operation includes a long-press operation, the first file is a video file.
In this application, the user can release their finger after moving the first control from the first junction position to the first area, and the release operation triggers the terminal device to shoot with the first camera to obtain an image file. Alternatively, the user can long-press the first control after moving it from the first junction position to the first area, and the long-press operation triggers the terminal device to shoot with the first camera to obtain a video file.
With reference to the first aspect, in some implementations of the first aspect, if the second operation includes a release operation, the second file is an image file; or, if the second operation includes a long-press operation, the second file is a video file.
In this application, the user can release their finger after moving the first control from the first junction position to the second area, and the release operation triggers the terminal device to shoot with the second camera to obtain an image file. Alternatively, the user can long-press the first control after moving it from the first junction position to the second area, and the long-press operation triggers the terminal device to shoot with the second camera to obtain a video file.
With reference to the first aspect, in some implementations of the first aspect, capturing the first file with the first camera of the N cameras in response to the first operation by the user on the first control in the first area includes: in response to a long-press operation by the user on the first control in the first area, shooting a video with the first camera and displaying a second control on the first preview interface, where the second control is used to stop video recording; and in response to the user clicking the second control, stopping the video recording to obtain a video file, and controlling the second control to disappear from the first preview interface.
In this application, the user can trigger the video shooting function of the first camera by long-pressing the first control in the first area. After the video shooting function is triggered, the terminal device may display a second control on the first preview interface, and the user may click the second control to stop the video recording. After the user clicks the second control to stop recording, the second control is no longer displayed in the first preview interface. If the user again triggers the video shooting function of the terminal device by long-pressing the first control, the terminal device may redisplay the second control in the first preview interface.
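The lifecycle of the second (stop) control follows a simple state machine. The sketch below is an assumed model: RecordingSession and all of its members are invented for illustration and are not identifiers from the patent.

```kotlin
// Hedged sketch of the second control's lifecycle: it appears when a long
// press starts recording and disappears when tapped to stop recording.
class RecordingSession {
    var isRecording = false
        private set
    var stopControlVisible = false
        private set

    fun onLongPressShutter() {          // long press on the first control
        isRecording = true
        stopControlVisible = true       // second control appears on the preview
    }

    fun onTapStopControl(): String? {
        if (!isRecording) return null
        isRecording = false
        stopControlVisible = false      // second control disappears again
        return "video_file"             // placeholder for the saved video file
    }
}

fun main() {
    val session = RecordingSession()
    session.onLongPressShutter()
    println(session.stopControlVisible)  // true: stop control is shown
    println(session.onTapStopControl())  // "video_file"; control hidden again
}
```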
With reference to the first aspect, in certain implementations of the first aspect, the second control and the first control are displayed in a same one of the M regions.
In this application, the second control and the first control are displayed within the same area in the first preview interface.
In an exemplary embodiment, the first control is displayed in the first area; then, after the user long-presses the first control in the first area to trigger the video shooting function of the terminal device, the terminal device displays the second control in the first area.
In an exemplary embodiment, the first control is displayed at the junction of the first area and the second area; then, after the user triggers the video shooting function of the terminal device by long-pressing the first control at that junction, the terminal device displays the second control at the junction of the first area and the second area.
With reference to the first aspect, in some implementations of the first aspect, in response to a user operation on the first control at a second junction position, M files are captured by the N cameras, where the second junction position is located at the junction of the M areas.
In this application, when the first control is located at the second junction position, the terminal device can obtain a plurality of shooting results by shooting with a plurality of cameras simultaneously in response to the user's operation on the first control at the second junction position, thereby meeting the user's need to shoot with the plurality of cameras at the same time.
With reference to the first aspect, in some implementations of the first aspect, before the M files are captured by the N cameras in response to the user's operation on the first control at the second junction position, the method further includes: in response to a drag operation by the user on the first control, controlling the first control to move to the second junction position; and when it is detected that the first control has moved to the junction of the M areas, controlling the first control to attach to the junction of the M areas.
In this application, if the user wishes to capture M files with the N cameras simultaneously, the user needs to move the first control to the second junction position. However, the user may not be able to position the first control exactly at the junction of the M areas, so when the terminal device detects that the first control has moved near the junction of the M areas, it may attach the first control to that junction. The user may later also move the first control away from the junction of the M areas.
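The attachment ("snap") behavior can be sketched as a simple distance test: drop positions within a snap radius are pulled onto the junction point. The 48 px radius and the names below are assumed values for illustration, not taken from the patent.

```kotlin
import kotlin.math.hypot

data class Pt(val x: Float, val y: Float)

const val SNAP_RADIUS_PX = 48f  // assumed snap radius

// If the control is dropped close enough to the junction of the M areas,
// it is attached (snapped) exactly onto the junction point.
fun snapToJunction(drop: Pt, junction: Pt): Pt =
    if (hypot(drop.x - junction.x, drop.y - junction.y) <= SNAP_RADIUS_PX) junction
    else drop

fun main() {
    val junction = Pt(540f, 960f)
    println(snapToJunction(Pt(530f, 970f), junction)) // snapped: Pt(x=540.0, y=960.0)
    println(snapToJunction(Pt(100f, 100f), junction)) // too far: stays at Pt(x=100.0, y=100.0)
}
```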
With reference to the first aspect, in certain implementations of the first aspect, before the first preview interface of the camera application is displayed, the method further includes: in response to an operation by the user to open the camera application, displaying a second preview interface, where the second preview interface includes a third control and a first image collected by a third camera of the N cameras; in response to a selection operation by the user on the third control in the second preview interface, displaying a third preview interface, where the third preview interface includes the first image and a second image, the second image floats above the first image, and the second image is collected by a fourth camera of the N cameras; in response to a drag operation by the user on the second image, controlling the second image to move on the third preview interface; in response to a release operation by the user on the picture floating window, determining the position of the finger when the user releases the second image; and determining, according to the position of the finger, an area for displaying the first image and an area for displaying the second image.
In this application, the user can call up the second image through a selection operation on the third control; the second image and the first image are collected by different cameras. The user can drag the second image, and after dragging it to a certain position and releasing it, the terminal device determines the area for displaying the first image and the area for displaying the second image according to the hot zone in which the finger is located when the user releases it.
With reference to the first aspect, in certain implementations of the first aspect, determining, according to the position of the finger, the area for displaying the first image and the area for displaying the second image includes: determining a layout manner of the first image and the second image according to the position of the finger; and determining, according to the layout manner, the area for displaying the first image and the area for displaying the second image.
In this application, the terminal device determines the layout manner of the first image and the second image according to the hot zone in which the finger is located when the user releases the second image, and can then determine, according to the layout manner, the area for displaying the first image and the area for displaying the second image.
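As a rough illustration of this hot-zone decision, the sketch below maps a normalized release position to a layout. The zone boundaries (top and bottom thirds for a stitched layout, picture-in-picture elsewhere) are assumptions made for this example; the patent only states that the layout depends on the hot zone of the release position.

```kotlin
// Assumed hot-zone boundaries; the patent does not specify exact fractions.
enum class Layout { STITCHED_TOP, STITCHED_BOTTOM, PICTURE_IN_PICTURE }

fun layoutForRelease(yFraction: Float): Layout = when {
    yFraction < 1f / 3f -> Layout.STITCHED_TOP      // released in the top hot zone
    yFraction > 2f / 3f -> Layout.STITCHED_BOTTOM   // released in the bottom hot zone
    else -> Layout.PICTURE_IN_PICTURE               // released mid-screen
}

fun main() {
    println(layoutForRelease(0.1f))  // STITCHED_TOP
    println(layoutForRelease(0.5f))  // PICTURE_IN_PICTURE
    println(layoutForRelease(0.9f))  // STITCHED_BOTTOM
}
```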
In a second aspect, a terminal shooting method is provided, applied to a terminal device on which N cameras are enabled. The method includes: displaying a fourth preview interface of the camera application, where the fourth preview interface includes M areas and a movable first control, each of the M areas corresponds to one camera, each area displays an image collected by its corresponding camera, N is an integer greater than or equal to 1, M is an integer greater than or equal to 2, and the layout manner of the images displayed in the M areas is a superimposed layout; and in response to a click operation by the user on the first control in a third area, shooting with the N cameras to obtain M image files, where the third area is one of the M areas; or, in response to a long-press operation by the user on the first control in the third area, shooting with the N cameras to obtain M video files.
In this application, the layout manner of the images displayed in the M areas is a superimposed layout. In the superimposed layout, the images collected by the N cameras are displayed one on top of another, and the first control is displayed in the third area on the uppermost layer. When the user clicks the first control in the third area, the terminal device shoots with the N cameras simultaneously to obtain M image files. When the user long-presses the first control in the third area, the terminal device shoots with the N cameras simultaneously to obtain M video files. In this way, the terminal device can meet the user's need to shoot simultaneously under the superimposed layout and obtain a plurality of shooting results.
After the user long-presses the first control in the third area, the video shooting function of the terminal device is triggered, and the terminal device displays a second control on the fourth preview interface, where the second control is used to stop video recording. After the user clicks the second control, the terminal device generates M video files and controls the second control to disappear from the fourth preview interface.
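The fan-out behavior under the superimposed layout (one operation, one file per camera) can be sketched as follows. MediaFile and captureAll are placeholder names invented for this sketch; real capture would go through the platform camera API, which is out of scope here.

```kotlin
// Hedged sketch: one user operation on the first control fans out to all N
// enabled cameras and yields M files. Types are simplified placeholders.
data class MediaFile(val cameraId: Int, val kind: String)

fun captureAll(cameraIds: List<Int>, longPress: Boolean): List<MediaFile> =
    cameraIds.map { id ->
        MediaFile(id, if (longPress) "video" else "image") // one file per camera
    }

fun main() {
    println(captureAll(listOf(0, 1), longPress = false)) // M image files
    println(captureAll(listOf(0, 1), longPress = true))  // M video files
}
```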
In a third aspect, a terminal device is provided, comprising units for performing the method in any one of the possible implementations of the above aspects. In particular, the apparatus comprises means for performing the method in any one of the possible implementations of the above aspects.
In a fourth aspect, there is provided another terminal device comprising a processor and a memory, the processor being coupled to the memory, the memory being operable to store a computer program, the processor being operable to invoke and execute the computer program in the memory to implement the method in any of the possible implementations of any of the aspects.
In a fifth aspect, a processor is provided, comprising an input circuit, an output circuit, and a processing circuit. The processing circuit is configured to receive signals via the input circuit and transmit signals via the output circuit, so that the processor performs the method in any one of the possible implementations of the above aspects.
In a specific implementation process, the processor may be a chip, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be a transistor, a gate circuit, a flip-flop, various logic circuits, or the like. An input signal received by the input circuit may be received and input by, for example and without limitation, a receiver; a signal output by the output circuit may be output to, for example and without limitation, a transmitter and transmitted by the transmitter; and the input circuit and the output circuit may be the same circuit, which serves as the input circuit and the output circuit at different times. The specific implementations of the processor and the various circuits are not limited in this application.
In a sixth aspect, a processing device is provided that includes a processor and a memory. The processor is configured to read instructions stored in the memory and to receive signals via the receiver and to transmit signals via the transmitter to perform the method of any one of the possible implementations of the first aspect.
Optionally, the processor is one or more and the memory is one or more.
Alternatively, the memory may be integrated with the processor or the memory may be separate from the processor.
In a specific implementation process, the memory may be a non-transitory (non-transitory) memory, for example, a Read Only Memory (ROM), which may be integrated on the same chip as the processor, or may be separately disposed on different chips, where the type of the memory and the manner of disposing the memory and the processor are not limited in this application.
It should be understood that the related data interaction process, for example transmitting indication information, may be a process of outputting the indication information from the processor, and receiving capability information may be a process of the processor receiving the input capability information. Specifically, data output by the processor may be output to the transmitter, and input data received by the processor may come from the receiver. The transmitter and the receiver may be collectively referred to as a transceiver.
The processing device in the sixth aspect may be a chip, and the processor may be implemented by hardware or by software. When implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented by software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, where the memory may be integrated in the processor or may reside outside the processor and exist separately.
In a seventh aspect, there is provided a computer program product comprising: computer program code which, when run, causes a computer to perform the method of any one of the possible implementations of the above aspect.
In an eighth aspect, a computer readable storage medium is provided, the computer readable storage medium storing a computer program which, when executed, causes a computer to perform the method of any one of the possible implementations of the above aspect.
Drawings
Fig. 1 is a schematic structural diagram of a terminal device applicable to an embodiment of the present application;
Figs. 2 and 3 are schematic interface diagrams of invoking multiple cameras according to an embodiment of the present application;
Figs. 4 and 5 are schematic interface diagrams of triggering a stitched layout according to an embodiment of the present application;
Fig. 6 is a schematic interface diagram of triggering a picture-in-picture layout according to an embodiment of the present application;
Figs. 7 and 8 are schematic interface diagrams of triggering a superimposed layout according to an embodiment of the present application;
Fig. 9 is a schematic interface diagram of changing the layout according to an embodiment of the present application;
Fig. 10 is a schematic interface diagram of changing the position of a floating window according to an embodiment of the present application;
Figs. 11 to 15 are schematic interface diagrams of layout manners of a large-screen terminal according to an embodiment of the present application;
Fig. 16 is a schematic interface diagram of the shutter key layout under a stitched layout according to an embodiment of the present application;
Fig. 17 is a schematic interface diagram of capturing images in a time-shared manner under a stitched layout according to an embodiment of the present application;
Fig. 18 is a schematic interface diagram of capturing two images simultaneously under a stitched layout according to an embodiment of the present application;
Fig. 19 is a schematic interface diagram of capturing one image simultaneously under a stitched layout according to an embodiment of the present application;
Fig. 20 is a schematic interface diagram of recording videos in a time-shared manner under a stitched layout according to an embodiment of the present application;
Fig. 21 is a schematic interface diagram of recording two videos simultaneously under a stitched layout according to an embodiment of the present application;
Figs. 22 and 23 are schematic interface diagrams of recording one video simultaneously under a stitched layout according to an embodiment of the present application;
Fig. 24 is a schematic interface diagram of the shutter key layout under a picture-in-picture layout according to an embodiment of the present application;
Fig. 25 is a schematic interface diagram of capturing images in a time-shared manner under a picture-in-picture layout according to an embodiment of the present application;
Fig. 26 is a schematic interface diagram of capturing two images simultaneously under a picture-in-picture layout according to an embodiment of the present application;
Fig. 27 is a schematic interface diagram of capturing one image simultaneously under a picture-in-picture layout according to an embodiment of the present application;
Fig. 28 is a schematic interface diagram of recording videos in a time-shared manner under a picture-in-picture layout according to an embodiment of the present application;
Fig. 29 is a schematic interface diagram of recording two videos simultaneously under a picture-in-picture layout according to an embodiment of the present application;
Fig. 30 is a schematic interface diagram of recording one video simultaneously under a picture-in-picture layout according to an embodiment of the present application;
Fig. 31 is a schematic interface diagram of capturing two images simultaneously under a superimposed layout according to an embodiment of the present application;
Fig. 32 is a schematic interface diagram of capturing one image simultaneously under a superimposed layout according to an embodiment of the present application;
Fig. 33 is a schematic interface diagram of the shutter key layout under another stitched layout according to an embodiment of the present application;
Fig. 34 is a schematic interface diagram of capturing images in a time-shared manner under another stitched layout according to an embodiment of the present application;
Fig. 35 is a schematic interface diagram of recording videos in a time-shared manner under another stitched layout according to an embodiment of the present application;
Fig. 36 is a schematic interface diagram of the shutter key layout of a large-screen terminal according to an embodiment of the present application;
Fig. 37 is a schematic interface diagram of capturing images in a time-shared manner under still another stitched layout according to an embodiment of the present application;
Fig. 38 is a schematic interface diagram of capturing images in a time-shared manner under yet another stitched layout according to an embodiment of the present application;
Fig. 39 is a schematic interface diagram of capturing two images simultaneously under yet another stitched layout according to an embodiment of the present application;
Fig. 40 is a schematic interface diagram of capturing images in a time-shared manner under another picture-in-picture layout according to an embodiment of the present application;
Fig. 41 is a schematic interface diagram of capturing two images simultaneously under another picture-in-picture layout according to an embodiment of the present application;
Fig. 42 is a schematic interface diagram of capturing two images simultaneously under another superimposed layout according to an embodiment of the present application;
Fig. 43 is a schematic interface diagram of recording videos in a time-shared manner under still another stitched layout according to an embodiment of the present application;
Fig. 44 is a schematic block diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described below with reference to the accompanying drawings.
In order to clearly describe the technical solutions of the embodiments of the present application, the words "first", "second", and the like are used in the embodiments of the present application to distinguish identical or similar items having substantially the same functions and effects. For example, the first area and the second area are used to distinguish different areas, and the order of the areas is not limited. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit the quantity or order of execution, and that items modified by "first" and "second" are not necessarily different.
In this application, the terms "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Furthermore, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a alone, a and B together, and B alone, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, and c may represent: a, b, or c, or a and b, or a and c, or b and c, or a, b and c, wherein a, b and c can be single or multiple.
Fig. 1 is a schematic structural diagram of a terminal device applicable to an embodiment of the present application. As shown in fig. 1, the terminal device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
It is to be understood that the configuration illustrated in the present embodiment does not constitute a specific limitation on the terminal device 100. In other embodiments of the present application, terminal device 100 may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, a display processing unit (DPU), and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. In some embodiments, the terminal device 100 may also include one or more processors 110. The processor may be the nerve center and command center of the terminal device 100. The processor can generate operation control signals according to instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution. A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the terminal device 100.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a USB interface, among others. The USB interface 130 is an interface conforming to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal device 100, or may be used to transfer data between the terminal device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is illustrated schematically, and does not constitute a structural limitation of the terminal device 100. In other embodiments of the present application, the terminal device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The wireless communication function of the terminal device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the terminal device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the terminal device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier, etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN), bluetooth, global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), NFC, infrared technology (IR), etc. applied on the terminal device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of terminal device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that terminal device 100 may communicate with a network and other devices via wireless communication techniques. The wireless communication techniques may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a Beidou satellite navigation system (bei dou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The terminal device 100 may implement a display function through the GPU, the display screen 194, the application processor, and the like. The application processor may include an NPU and/or a DPU. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute instructions to generate or change display information. The NPU is a neural-network (NN) computing processor; by drawing on the structure of biological neural networks, for example the transmission mode between human brain neurons, it can rapidly process input information and can also continuously self-learn. Applications such as intelligent cognition of the terminal device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU. The DPU is also referred to as a display sub-system (DSS) and is used to adjust the color of the display screen 194, which may be adjusted through a three-dimensional color look-up table (3D LUT). The DPU can also perform scaling, noise reduction, contrast enhancement, backlight brightness management, HDR processing, display parameter gamma adjustment, and the like on the picture.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, or a quantum dot light-emitting diode (QLED). In some embodiments, the terminal device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The terminal device 100 may implement photographing functions through an ISP, one or more cameras 193, a video codec, a GPU, one or more display screens 194, an application processor, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to realize expansion of the memory capability of the terminal device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, data files such as music, photos, videos, etc. are stored in an external memory card.
The internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may cause the terminal device 100 to execute various functional applications, data processing, and the like by executing the above-described instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area can store an operating system; the storage area may also store one or more applications (e.g., gallery, contacts, etc.), and so forth. The storage data area may store data (e.g., photos, contacts, etc.) created during use of the terminal device 100, etc. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. In some embodiments, the processor 110 may cause the terminal device 100 to perform various functional applications and data processing by executing instructions stored in the internal memory 121, and/or instructions stored in a memory provided in the processor 110.
The terminal device 100 may implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the application processor, and the like. The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also referred to as a "horn", is used to convert an audio electrical signal into a sound signal. The terminal device 100 can play music or answer a hands-free call through the speaker 170A. The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the terminal device 100 answers a call or receives a voice message, the voice can be heard by bringing the receiver 170B close to the ear. The microphone 170C, also referred to as a "mic" or "sound transmitter", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input a sound signal. The terminal device 100 may be provided with at least one microphone 170C. In other embodiments, the terminal device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the terminal device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify the source of a sound, implement a directional recording function, and the like. The earphone interface 170D is used to connect wired earphones. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The sensors 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The terminal device in the embodiments of the present application may be a handheld device, a vehicle-mounted device, or the like with a wireless connection function; the terminal device may also be referred to as a terminal, user equipment (UE), a mobile station (MS), a mobile terminal (MT), or the like. Currently, examples of terminal devices include: a mobile phone, a tablet computer (Pad), a smart television, a notebook computer, a palmtop computer, a mobile internet device (MID), a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a 5G network, or a terminal device in a future evolved public land mobile network (PLMN), etc. The embodiments of the present application do not limit the specific form of the terminal device.
By way of example and not limitation, in the embodiments of the present application the terminal device may also be a wearable device. A wearable device, also called a wearable smart device, is a general term for devices that can be worn in daily life and are intelligently designed and developed using wearable technology, such as glasses, gloves, watches, clothes, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a hardware device; it can also realize powerful functions through software support, data interaction, and cloud interaction. Generalized wearable smart devices include devices that are full-featured and large-sized and can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used together with other devices such as smartphones, for example various smart bracelets and smart jewelry for physical sign monitoring.
It should be understood that in the embodiment of the present application, the terminal device may be a device for implementing a function of the terminal device, or may be a device capable of supporting the terminal device to implement the function, for example, a chip system, and the device may be installed in the terminal. In the embodiment of the application, the chip system may be formed by a chip, and may also include a chip and other discrete devices.
The terminal device in the embodiment of the present application may also be referred to as: a User Equipment (UE), a Mobile Station (MS), a Mobile Terminal (MT), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user equipment, etc.
Since terminal devices introduced multi-camera modules, the invocation of the multiple cameras has mostly been defined by logic in the terminal device system or opened to third-party software developers; a user cannot use each camera at will, in a time-shared or simultaneous manner, according to their own ideas, so the shooting advantages of the terminal device's multi-camera module cannot be brought into play.
Currently, when a user shoots with a terminal device equipped with a multi-camera module, the user can shoot with the rear camera or with the front camera of the terminal device, but cannot observe preview images from a plurality of cameras at the same time. The decision of which camera to invoke is made automatically by the terminal device system according to certain characteristics of the environment, and the user has no say in which camera the terminal device uses to shoot.
In daily life, a user may sometimes need to shoot with the rear camera and the front camera of a terminal device simultaneously, for example to shoot a VLOG video or record an interview video with the terminal device. Currently, some higher-end terminal devices provide special modes for shooting with multiple cameras simultaneously, such as a "dual-view" or "picture-in-picture" effect, but the user still cannot freely specify a shooting mode, for example, specify that the cameras each shoot and store separate shooting results, or specify that the cameras shoot simultaneously to obtain one shooting result.
To address the problem that the user cannot freely designate the shooting mode, an embodiment of the application provides a terminal shooting method suitable for a terminal device provided with a plurality of cameras. In addition to the shutter key, the preview interface includes a movable first control; by moving the first control, the terminal device allows shooting modes to be switched freely, so that the shooting advantages of the multiple cameras can be brought into play and the user experience is improved.
The shooting modes of the plurality of cameras in the embodiments of the present application include the following (see the sketch after this list):
mode 1: the plurality of cameras respectively shoot and store as a plurality of shooting results;
mode 2: a plurality of cameras shoot simultaneously to obtain a plurality of shooting results;
mode 3: and a plurality of cameras shoot simultaneously to obtain a shooting result.
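For reference, the three modes can be summarized as a simple enum; the identifier names below are paraphrases invented for this sketch, not terms from the patent.

```kotlin
// Paraphrased identifiers; the patent only numbers the modes 1-3.
enum class MultiCameraMode {
    TIME_SHARED_SEPARATE,   // mode 1: cameras shoot one at a time; each result stored separately
    SIMULTANEOUS_SEPARATE,  // mode 2: cameras shoot at the same time; multiple results stored
    SIMULTANEOUS_MERGED     // mode 3: cameras shoot at the same time; merged into one result
}
```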
The manner in which the multiple cameras are invoked according to embodiments of the present application is described below in conjunction with fig. 2 and 3.
Fig. 2 is an interface schematic diagram for calling multiple cameras according to an embodiment of the present application. After receiving the operation of clicking the camera application by the user, the terminal device may display a preview interface shown as an interface a in fig. 2, where a preview image a collected by the camera a is displayed.
The preview interface comprises a control 01, and when a user clicks the control 01, the terminal device can shoot a target object through a camera A to obtain an image file. For example, when the user presses control 01 for a long time, the terminal device may shoot the target object through the camera a to obtain a video file. The long press operation may be understood as an operation in which a finger of a user presses a certain control for a period of time exceeding a preset period of time.
The preview interface also includes a control 02. Illustratively, when the user clicks the control 02, the terminal device can switch between the front and rear cameras. When the user long-presses the control 02 beyond a preset time period, or drags the control 02 and then releases it, as shown in interface B in fig. 2, the terminal device may invoke camera B and generate a floating window, in which the preview image B collected by camera B is displayed.
Fig. 3 is a schematic diagram of another interface for invoking multiple cameras according to an embodiment of the present application. After receiving the operation of clicking the camera application by the user, the terminal device may display a preview interface shown as an interface a in fig. 3, where a preview image a collected by the camera a is displayed.
The preview interface further includes a control 07, and when the user clicks or long presses the control 07, as shown in the interface B in fig. 3, the terminal device may call the camera B, and generate a floating window, where the preview image B collected by the camera B is displayed.
In addition to the above two ways of calling up the camera B, after displaying the preview interface of the camera application, the terminal device may prompt the user that a preset position exists at the edge of the screen; the user may slide inward from the preset position to call up the camera B, so as to generate a floating window in which the preview image B collected by the camera B is displayed.
As shown in fig. 2 and 3, a control 03 is included in the floating window. When the user presses the control 03 and drags the floating window, the floating window moves within the background area following the user's finger, but cannot exceed the edge of the background area. The background area is the area in which the preview image A is displayed. When the user releases the control, the layout of the preview image A and the preview image B on the preview interface depends on the position of the finger at the moment of release.
The user can also click the control 03 to open a menu including a button for switching the front/rear camera and a drag handle for changing the zoom. When the user clicks the front/rear camera button, the direction of the camera is switched. When the user drags the drag handle within the preset focal-length range, the cameras facing the same direction are switched in order of increasing focal length.
The layout manner provided by the embodiment of the application includes a stitching layout, a picture-in-picture layout, and a superposition (or fusion) layout.
Fig. 4 is an interface schematic diagram of triggering a stitched layout according to an embodiment of the present application. As shown in the interface A in fig. 4, in response to an operation of dragging the floating window by the user's finger, the terminal device controls the floating window to follow the drag; after the user drags the floating window to the hot zone A and releases it, as shown in the interface B in fig. 4, the preview image A and the preview image B form a stitched layout, with the preview image B above the preview image A.
In fig. 4, a control 03 and a control 05 are displayed in the area B. The user can click the control 05 to cancel the area B, i.e., stop the operation of the camera B, so that only the image captured by the camera A is displayed in the preview interface.
In the area A, a control 04 and a control 06 are displayed. The user can press the control 04 to change the layout of the preview image A and the preview image B, and can click the control 06 to cancel the area A, i.e., stop the operation of the camera A, so that only the image captured by the camera B is displayed in the preview interface.
Fig. 5 is an interface schematic diagram of another way of triggering a stitched layout according to an embodiment of the present application. Similar to the description of fig. 4, as shown in the interface A in fig. 5, the terminal device controls the floating window to follow the user's finger to the hot zone C; after the user releases the floating window in the hot zone C, as shown in the interface B in fig. 5, the preview image A and the preview image B form a stitched layout, with the preview image B below the preview image A.
Fig. 6 is an interface schematic diagram of triggering a picture-in-picture layout according to an embodiment of the present application. As shown in the interface A in fig. 6, the terminal device controls the floating window to follow the user's finger to the hot zone B1; after the user releases the floating window in the hot zone B1, as shown in the interface B in fig. 6, the preview image A and the preview image B form a picture-in-picture layout, in which the preview image B is displayed suspended above the preview image A.
Fig. 7 is an interface schematic diagram of triggering a superimposed layout according to an embodiment of the present application. As shown in the interface A in fig. 7, the terminal device controls the floating window to follow the user's finger to the hot zone B2; after the user releases the floating window in the hot zone B2, as shown in the interface B in fig. 7, the preview image A and the preview image B form a superimposed layout, with the preview image B on the lower layer.
Fig. 8 is an interface schematic diagram of another way of triggering a superimposed layout according to an embodiment of the present application. As shown in the interface A in fig. 8, the terminal device controls the floating window to follow the user's finger to the hot zone B3; after the user releases the floating window in the hot zone B3, as shown in the interface B in fig. 8, the preview image A and the preview image B form a superimposed layout, with the preview image B on the upper layer.
It should be noted that the hot zone designs shown in fig. 4 to 8 are only examples; other hot zone designs are possible depending on factors such as the screen size of the terminal device and the number of cameras called, which are not limited in the embodiment of the present application.
It should also be noted that the hot zones are invisible to the user. When the user releases the floating window, the terminal device detects the release event and determines which hot zone the finger was in at the moment of release, thereby determining the layout of the preview image A and the preview image B.
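As a hedged illustration of this release-time decision, here is a small Kotlin sketch; the zone geometry below is assumed for a 1080x1920 screen and is not taken from the patent:

```kotlin
// Sketch of the release-time hot-zone test; real zones would depend on
// screen size and the number of cameras called.
enum class Layout { STITCH_TOP, STITCH_BOTTOM, PICTURE_IN_PICTURE, OVERLAY_BELOW }

data class Zone(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    operator fun contains(p: Pair<Float, Float>) =
        p.first in left..right && p.second in top..bottom
}

// Invisible hot zones keyed by geometry (all coordinates assumed).
val hotZones: Map<Zone, Layout> = mapOf(
    Zone(0f, 0f, 1080f, 250f) to Layout.STITCH_TOP,              // hot zone A
    Zone(140f, 250f, 940f, 1650f) to Layout.PICTURE_IN_PICTURE,  // hot zone B1
    Zone(0f, 1650f, 1080f, 1920f) to Layout.STITCH_BOTTOM,       // hot zone C
)

// Called when the terminal detects the release event for the floating window.
fun layoutOnRelease(x: Float, y: Float): Layout =
    hotZones.entries.firstOrNull { (x to y) in it.key }?.value
        ?: Layout.PICTURE_IN_PICTURE  // assumed fallback when no zone matches
```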
Fig. 9 is an interface schematic diagram of changing the layout according to an embodiment of the present application. If the user desires to change the layout of the preview image A and the preview image B, for example in the stitched layout shown in the interface A in fig. 9, the user may press the control 04 on the preview image A and drag it. The moment the drag begins, as shown in the interface B in fig. 9, the terminal device displays the preview image A as a floating window that follows the user's finger, and displays the preview image B full screen (not shown in the figure).
After the user releases the floating window in the hot zone B1, as shown in the interface C in fig. 9, the preview image A and the preview image B form a picture-in-picture layout, in which the preview image A is displayed suspended above the preview image B.
Fig. 10 is an interface schematic diagram of changing the position of the floating window according to an embodiment of the present application. In the picture-in-picture layout, the user may change the position of the floating window within the background area. Illustratively, the picture-in-picture layout of the preview image A and the preview image B is shown as the interface A in fig. 10; as shown in the interface B in fig. 10, the user drags any area of the floating window other than the control 03 to move the floating window within the background area.
In the above layout modes, the preview image A and the preview image B may be preview images collected by the same camera or by two different cameras. In addition to displaying preview images collected by its own cameras, the terminal device may also display images from cameras of other devices.
In the embodiment of the present application, the region of the preview interface in which the image acquired by the camera A is displayed may be referred to as the area A, and the region in which the image acquired by the camera B is displayed may be referred to as the area B. In fig. 8, the preview image A is displayed in the area A, and the preview image B is displayed in the area B.
It should be appreciated that immediately after the user opens the camera application, the preview interface displays the preview image A captured by the camera A. When the user triggers the function of calling multiple cameras, the terminal device calls the camera B; in this case, the camera A and the camera B are not the same camera.
Optionally, the terminal device may determine the camera B according to the weights of the different cameras, selecting the camera with the highest weight as the camera B.
In one possible implementation, the weight may be determined as follows: the terminal device records which camera is called each time the user presses the shutter; after long-term use, for example after 1000 shutter presses have been recorded, the terminal device counts the number of shots taken by each camera and configures the weights accordingly, with more frequently used cameras receiving higher weights.
In another possible implementation, the weight may be determined as follows: the terminal device configures different weights for different cameras according to the time and position information when the user uses the camera. For example, when the user is at a scenic spot or an outdoor location, the terminal device configures a higher weight for the telephoto camera; when the user is indoors, the terminal device configures a higher weight for the wide-angle camera.
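The two strategies might be sketched as follows; the camera identifiers, bias values and function names are all assumptions introduced for illustration:

```kotlin
// Strategy 1: weight each camera by its share of recorded shutter presses.
fun weightsByUsage(shotCounts: Map<String, Int>): Map<String, Double> {
    val total = shotCounts.values.sum().coerceAtLeast(1)
    return shotCounts.mapValues { (_, count) -> count.toDouble() / total }
}

// Strategy 2: bias weights by context, e.g. favor the telephoto camera
// outdoors and the wide-angle camera indoors, as described in the text.
fun weightsByContext(cameraIds: List<String>, outdoors: Boolean): Map<String, Double> =
    cameraIds.associateWith { id ->
        when {
            outdoors && id == "tele" -> 2.0
            !outdoors && id == "wide" -> 2.0
            else -> 1.0
        }
    }

// Camera B is the camera with the highest weight.
fun pickCameraB(weights: Map<String, Double>): String? =
    weights.maxByOrNull { it.value }?.key
```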
After the camera A and the camera B are called out, the preview image A collected by the camera A and the preview image B collected by the camera B are displayed in the preview interface. For example, suppose the camera A and the camera B face the same direction (for example, both are rear cameras). The user clicks the control 03 on the preview image B to open the menu bar, and switches among the cameras facing that direction by dragging the drag handle to change the zoom segment. When the drag handle reaches the focal segment of the camera A, the terminal device deactivates the camera B and displays the preview image collected by the camera A in the area B as well; that is, the preview image A displayed in the area A and the preview image B displayed in the area B are then collected by the same camera.
Similarly, the user may perform a pinch gesture in the area B to change the zoom, and the terminal device automatically switches to the appropriate camera according to the resulting field of view.
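A minimal sketch of such zoom-driven switching follows, under the assumption of three rear cameras with the zoom ranges shown (all identifiers and values are illustrative):

```kotlin
// Sketch: after a pinch gesture, switch to the same-facing camera whose
// native zoom range covers the requested ratio.
data class Cam(val id: String, val zoom: ClosedFloatingPointRange<Double>)

val rearCams = listOf(
    Cam("ultrawide", 0.5..0.99),
    Cam("main", 1.0..2.99),
    Cam("tele", 3.0..10.0),
)

fun cameraForZoom(ratio: Double): Cam =
    rearCams.firstOrNull { ratio in it.zoom } ?: rearCams.last()
```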
The layouts of two preview images described above in connection with fig. 4 to 10 are applicable to small-screen terminals; for large-screen terminals, more than two preview images may also be displayed.
Fig. 11 to 15 are interface diagrams of layout modes of a large-screen terminal according to an embodiment of the present application.
Fig. 11 and 12 show the layout of three preview images, in which the layout of preview image a, preview image B, and preview image C is a stitched layout.
Fig. 13 shows the layout of four preview images, in which the layout of preview image a, preview image B, preview image C, and preview image D is a stitched layout.
For large-screen terminals, multiple preview images may also form picture-in-picture and superimposed layouts. Fig. 14 shows a layout of three preview images, in which the preview image A and the preview image B form a stitched layout, and the preview image C is suspended above the preview image B to form a picture-in-picture layout with it. Of course, the preview image C may also be suspended above the preview image A, or above both the preview image A and the preview image B (i.e., suspended at the junction of the area A and the area B), which is not limited in the embodiment of the present application.
Fig. 15 shows a layout of three preview images, in which the layout of preview image a and preview image B is a superimposed layout, and preview image C is suspended above the superimposed images of preview image a and preview image B to form a picture-in-picture layout.
A superimposed layout of two or more preview images is similar to the interface B in fig. 8 and is not described here again.
For smaller-screen terminals, provided the display space is sufficient, the layouts of multiple preview images shown in fig. 11 to 15 may also be adopted.
When the user calls out multiple cameras to shoot, the terminal device collects images of the target object through the cameras, and multiple preview images are displayed on the preview interface of the camera application in one of the layouts shown in fig. 4 to 15. In response to a shooting operation by the user, the terminal device may render the images collected by the multiple cameras as a whole and save them as one image file or one video file (mode 3 above). The terminal device may also shoot with the cameras separately and save the results as multiple image files or video files (mode 1 above), or shoot with the cameras simultaneously and save the results as multiple image files or video files (mode 2 above). Whether the result is stored as one file or several files is at the discretion of the user. For mode 1, the terminal device can align the time codes of the two video files when storing them separately, so as to meet the requirements of multi-camera preview and editing.
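To make the mode 1 save path concrete, here is a minimal Kotlin sketch of saving two recordings as separate files with aligned time codes; the data types, file naming and the timecode field are assumptions, since the patent does not define an API:

```kotlin
// Two independent video files whose time codes share the same start instant,
// so the clips can later be previewed and cut together like multi-camera footage.
data class RecordedStream(val cameraId: String, val frames: List<ByteArray>)

fun saveSeparately(a: RecordedStream, b: RecordedStream, sessionStartMs: Long): List<String> =
    listOf(a, b).map { stream ->
        // Both files receive the identical start time code.
        writeVideoFile("VID_${stream.cameraId}.mp4", stream, timecodeMs = sessionStartMs)
    }

// Stub standing in for the real encoder; returns the file name it "wrote".
fun writeVideoFile(name: String, stream: RecordedStream, timecodeMs: Long): String = name
```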
The following first describes the human-computer interaction for a user-specified shooting mode under the stitched layout, with reference to the accompanying drawings.
Fig. 16 is an interface schematic diagram of a shutter key layout under a stitched layout according to an embodiment of the present application. As shown in fig. 16, the preview image A captured by the camera A and the preview image B captured by the camera B are displayed in the preview interface of the camera application. The camera A and the camera B may be the same camera or two different cameras.
It can be seen that, in the preview interface, control 01 and control 08 are included, control 01 is an existing shutter key of the preview interface of the camera application, and control 08 is a new shutter key and can move within the screen range. Under the layout of the control 08 as shown in fig. 16, the user can customize the shooting mode of the multiple cameras by changing the area in which the control 08 is located.
The scheme of adding the control 08 to the preview interface to realize the custom shooting mode may be referred to as scheme one hereinafter.
Fig. 17 is an interface schematic diagram of time-sharing image capture under a stitched layout according to an embodiment of the present application. As shown in the interface A in fig. 17, the preview image A and the preview image B form a stitched layout, and the control 08 is displayed in the area A. The user may click the control 08 to shoot an image of the area A, i.e., the preview image A collected by the camera A is encoded as an image file, recorded as a first image file. If the user desires to capture an image collected by the camera B, the user may drag the control 08 into the area B. As shown in the interface B in fig. 17, in response to the drag operation, the terminal device controls the control 08 to follow the user's finger; after the user drags the control 08 into the area B and releases it, the terminal device displays the control 08 at the release position, and the user can click the control 08 to shoot an image of the area B, i.e., the preview image B collected by the camera B is encoded as an image file, recorded as a second image file.
The shooting mode described with respect to fig. 17 is mode 1: and respectively shooting through the two cameras and storing the two shooting results, wherein the first image file and the second image file are the two shooting results obtained in a time-sharing way.
The manner in which mode 2 is implemented in a stitched layout is described below in connection with fig. 18.
Fig. 18 is an interface schematic diagram of capturing two images simultaneously under a stitched layout according to an embodiment of the present application. As shown in the interface A in fig. 18, the preview image A and the preview image B form a stitched layout, and the control 08 is displayed in the area A. If the user desires to capture the image collected by the camera A and the image collected by the camera B at the same time, as shown in the interface B in fig. 18, the user may drag the control 08 to the junction of the area A and the area B and then release it. When detecting that the user has dragged the control 08 to the junction, the terminal device adsorbs the control 08 onto the junction for display. In response to the user clicking the control 08 at the junction, the terminal device shoots through the camera A and the camera B simultaneously to obtain two image files, recorded as a third image file and a fourth image file.
For example, when the user drags the control 08 near the intersection of the area a and the area B, the intersection may present an adsorption animation effect to prompt the user to move the control 08 to the intersection.
Illustratively, when the user drags control 08 to the intersection of region A and region B, the intersection may be highlighted to alert the user that control 08 is currently at the intersection of region A and region B.
Similarly, when the user moves control 08 to region a, region a may be highlighted to prompt the user that control 08 is currently in region a; when the user moves control 08 to region B, region B may be highlighted to prompt the user that control 08 is currently in region B.
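A small Kotlin sketch of this adsorb-and-dispatch behavior follows; the snap distance and the camera identifiers are assumptions:

```kotlin
import kotlin.math.abs

const val SNAP_DISTANCE_PX = 48f  // assumed adsorption threshold

// junctionY is the y coordinate of the boundary between region A and region B.
fun restingYAfterDrop(dropY: Float, junctionY: Float): Float =
    if (abs(dropY - junctionY) <= SNAP_DISTANCE_PX) junctionY else dropY

// Which cameras a tap on control 08 drives, given where the control rests.
fun camerasForTap(restY: Float, junctionY: Float): List<String> = when {
    restY == junctionY -> listOf("A", "B")  // at the junction: both shoot at once
    restY < junctionY -> listOf("A")        // inside region A
    else -> listOf("B")                     // inside region B
}
```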
The shooting mode described with respect to fig. 18 is mode 2: and shooting by two cameras simultaneously to obtain two shooting results, wherein the third image file and the fourth image file are the two shooting results obtained simultaneously.
The manner in which mode 3 is implemented under the stitched layout is described below in connection with fig. 19.
Fig. 19 is an interface schematic diagram of capturing one image at the same time under a stitching layout according to an embodiment of the present application. As shown in fig. 19, the preview image a and the preview image B are laid out in a stitched layout. If the user desires to render the image acquired by the camera A and the image acquired by the camera B into one path of image for storage, the user can click the control 01. And in response to the operation of clicking the control 01 by the user, the terminal equipment shoots through the camera A and the camera B simultaneously to obtain an image file, and the image file is recorded as a fifth image file.
The shooting mode described with respect to fig. 19 is mode 3: and shooting through the two cameras simultaneously to obtain a shooting result, wherein the fifth image file is the shooting result obtained simultaneously.
The human-computer interaction for obtaining image files in the three photographing modes has been described above with reference to fig. 17 to 19. The human-computer interaction for obtaining video files in the three modes is described below with reference to fig. 20 to 22.
Fig. 20 is an interface schematic diagram of time-sharing video capture under a stitched layout according to an embodiment of the present application. As shown in the interface A in fig. 20, the preview image A and the preview image B form a stitched layout, and the control 08 is displayed in the area A. If the user desires to record a video of the image collected by the camera A, the user can press the control 08 in the area A; when the duration of the press exceeds the preset duration, the video recording function of the terminal device is triggered.
In response to the press of the control 08 exceeding the preset duration, as shown in the interface B in fig. 20, the terminal device displays a control 09 in the area A, and the camera A starts video recording, the control 09 being used to stop recording. In response to the user clicking the control 09, the terminal device stops recording and encodes the video shot by the camera A as a video file, recorded as a first video file.
Optionally, after the user clicks control 09 to stop video recording, control 09 is not displayed in the interface. When the user presses control 08 for a long time to trigger the video recording function, control 09 appears again in the interface.
In the process that the terminal equipment records the video through the camera A, a user can still shoot an image acquired by the camera A by clicking the control 08 in the area A to obtain an image file; or the user may drag the control 08 to the area B, and shoot the image acquired by the camera B by clicking the control 08 in the area B, so as to obtain an image file.
If the user desires to record a video of the image collected by the camera B, the user can drag the control 08 into the area B and long-press it there for more than the preset duration, thereby triggering the video recording function of the terminal device. Similarly, in response to the press exceeding the preset duration, the terminal device displays the control 09 in the area B and the camera B starts video recording. In response to the user clicking the control 09, the terminal device stops recording and encodes the video shot by the camera B as a video file, recorded as a second video file.
It should be noted that the user may trigger recording through the camera B either while the terminal device is still recording through the camera A, or after the recording through the camera A has finished.
In connection with the man-machine interaction described in fig. 20, the terminal device may implement mode 1, i.e. two video files are obtained by time sharing of two cameras, where the first video file includes the image collected by camera a and the second video file includes the image collected by camera B.
Fig. 21 is an interface schematic diagram of capturing two videos simultaneously under a stitched layout according to an embodiment of the present application. Similar to the simultaneous image capture described with respect to fig. 18, if the user desires to record the image collected by the camera A and the image collected by the camera B at the same time as two videos, as shown in the interface A in fig. 21, the user may drag the control 08 toward the junction of the area A and the area B. When the user releases the control 08 at the junction, the terminal device adsorbs the control 08 onto the junction for display.
As shown in the interface B in fig. 21, the user may press the control 08 for a long time, and when the duration of pressing the control 08 for a long time exceeds the preset duration, the video recording function of the terminal device is triggered. As shown in interface C in fig. 21, the terminal device displays a control 09 at the junction of the area a and the area B, and the camera a and the camera B start to record video simultaneously. In response to the user clicking the control 09, the terminal device stops video recording, encodes the video shot by the camera a into a video file, marks the video as a third video file, and encodes the video shot by the camera B into a video file, marks the video as a fourth video file.
The terminal device may implement mode 2 in conjunction with the man-machine interaction described in fig. 21, i.e. two video files are obtained simultaneously by two cameras, the third video file includes the image collected by camera a, and the fourth video file includes the image collected by camera B.
Fig. 22 is an interface schematic diagram of capturing one video simultaneously under a stitched layout according to an embodiment of the present application. Similar to the simultaneous single-image capture described with respect to fig. 19, if the user desires to record the images collected by the camera A and the camera B at the same time as one video, as shown in the interface A in fig. 22, the user can press the control 01; when the duration of the press exceeds the preset duration, the video recording function of the terminal device is triggered.
As shown in the interface B in fig. 22, during recording the control 01 displays a recording icon (the circular icon in the control 01 changes to a square icon), the camera A and the camera B record simultaneously, and the user can stop recording by clicking the control 01. In response to the user clicking the control 01 in the interface B in fig. 22, the terminal device stops recording, renders the video shot by the camera A and the video shot by the camera B into one video, and encodes it as one video file, recorded as a fifth video file. After recording stops, the control 01 returns to the style shown in the interface A in fig. 22 (the square icon reverts to the circular icon).
In connection with the man-machine interaction described in fig. 22, the terminal device may implement mode 3, that is, one video file is obtained by two cameras at the same time, and the fifth video file is a video file obtained by combining the video shot by the camera a and the video shot by the camera B into a whole.
Fig. 23 is an interface schematic diagram of another way of capturing one video simultaneously under a stitched layout according to an embodiment of the present application. As shown in the interface A in fig. 23, if the user desires to record the images collected by the camera A and the camera B at the same time as one video, the user can press the control 01 and drag it toward the control 02; when the user releases it, the video recording function of the terminal device is triggered. As shown in the interface B in fig. 23, in response to the user releasing the control 01 after dragging it, the terminal device starts recording. In response to the user clicking the control 01 in the interface B in fig. 23, the terminal device stops recording, renders the video shot by the camera A and the video shot by the camera B into one video, and encodes it as one video file, recorded as a sixth video file.
The man-machine interaction described in connection with fig. 23 may implement mode 3, that is, the terminal device obtains one video file through two cameras at the same time, and the sixth video file is a video file obtained by combining the video shot by the camera a and the video shot by the camera B into a whole.
In fig. 16 to 23, the layout of the preview image A and the preview image B is a stitched layout, under which the terminal device may provide the user with the control 08 to customize the shooting mode. Similarly, when the layout of the preview image A and the preview image B is a picture-in-picture layout, the terminal device may also provide the control 08 under the picture-in-picture layout for the user to customize the shooting mode.
Human-computer interaction for a user-specified shooting mode under the picture-in-picture layout is described below with reference to the accompanying drawings.
Fig. 24 is an interface schematic diagram of a shutter key layout under a picture-in-picture layout according to an embodiment of the present application. As shown in fig. 24, the preview image A captured by the camera A and the preview image B captured by the camera B are displayed in the preview interface of the camera application. The camera A and the camera B may be the same camera or two different cameras. Under the picture-in-picture layout, the user can customize the shooting mode of the multiple cameras by changing the area in which the control 08 is located.
Fig. 25 is an interface schematic diagram of time-sharing image capture under a picture-in-picture layout according to an embodiment of the present application. As shown in the interface A in fig. 25, the preview image A and the preview image B form a picture-in-picture layout, and the control 08 is displayed in the area A. Similar to the description of fig. 17, the user may click the control 08 in the area A to capture an image, i.e., the preview image A collected by the camera A is encoded as an image file, recorded as a sixth image file.
If the user desires to capture an image captured by camera B, the user may drag control 08 into region B. As shown in the interface B in fig. 25, the user may click on the control 08 to capture an image in the area B, that is, encode the preview image B acquired by the camera B into an image file, and record the image file as a seventh image file.
The shooting mode described in connection with fig. 25 is mode 1: and respectively shooting through the two cameras and storing the shooting results as two shooting results, wherein the sixth image file and the seventh image file are the two shooting results obtained in a time-sharing way.
The manner in which mode 2 is implemented under the picture-in-picture layout is described below in connection with fig. 26.
Fig. 26 is an interface schematic diagram of capturing two images simultaneously under a picture-in-picture layout according to an embodiment of the present application. As shown in fig. 26, the preview image A and the preview image B form a picture-in-picture layout. The preview interface includes the control 01, which can move within a preset range. As shown in the interface A in fig. 26, the user may drag the control 01 upward from its initial position, and in response, the terminal device controls the control 01 to move upward. When the terminal device detects that the user has released the control 01, the photographing function is triggered, and the terminal device shoots through the camera A and the camera B simultaneously to obtain two image files, recorded as an eighth image file and a ninth image file. After the release triggers the photographing function, the terminal device controls the control 01 to return to its initial position, as shown in the interface C in fig. 26.
The shooting mode described in connection with fig. 26 is mode 2: shooting through the two cameras at the same time and storing the results as two shooting results, the eighth image file and the ninth image file being the two shooting results obtained simultaneously.
In another way of implementing mode 2 under the picture-in-picture layout, the user can move the control 08 to the junction of the area A and the area B; after moving there, the control 08 is adsorbed onto the junction for display and moves along with the floating window. In response to the user clicking the control 08 at the junction, the terminal device shoots through the camera A and the camera B simultaneously to obtain two image files. When the user desires to shoot the image collected by the camera A or the camera B independently, the control 08 can be moved away from the junction.
The manner in which mode 3 is implemented under the picture-in-picture layout is described below in connection with fig. 27.
Fig. 27 is an interface schematic diagram of capturing one image simultaneously under a picture-in-picture layout according to an embodiment of the present application. As shown in fig. 27, the preview image A and the preview image B form a picture-in-picture layout. Similar to the description of fig. 19, if the user desires to render the image collected by the camera A and the image collected by the camera B into one image for saving, the user can click the control 01. In response to the user clicking the control 01, the terminal device shoots through the camera A and the camera B simultaneously to obtain one image file, recorded as a tenth image file.
The shooting mode described in connection with fig. 27 is mode 3: and shooting through the two cameras simultaneously to obtain a shooting result, wherein the tenth image file is the shooting result obtained simultaneously.
The human-computer interaction for obtaining image files in the three photographing modes has been described above with reference to fig. 25 to 27. The human-computer interaction for obtaining video files in the three modes is described below with reference to fig. 28 to 30.
Fig. 28 is an interface schematic diagram of time-sharing video capture under a picture-in-picture layout according to an embodiment of the present application. Similar to the description of fig. 20, as shown in the interface A in fig. 28, the preview image A and the preview image B form a picture-in-picture layout. Illustratively, the control 08 is located in the area B; the user may press the control 08 in the area B where the video is to be recorded, triggering the video recording function of the terminal device.
As shown in the interface B in fig. 28, when the duration of pressing the control 08 by the user exceeds the preset duration, the terminal device displays the control 09 in the area B, and starts video recording on the image collected by the camera B, where the control 09 is used to stop video recording. In response to the user clicking the control 09, the terminal device stops video recording and encodes the video shot by the camera B into a video file.
If the user desires to record a video of the image collected by the camera A, the user can drag the control 08 into the area A and long-press it there for more than the preset duration, thereby triggering the video recording function of the terminal device. After recording stops, a video file shot by the camera A is obtained.
It should be noted that, because the area B in which the preview image is displayed is limited in size, it may not accommodate many controls. For the picture-in-picture layout, therefore, when the control 09 is displayed in the area B, the terminal device may move the control 08 to a position where it does not block the control 09, or may display the control 09 just outside the area B, so as to reduce occlusion of the preview image displayed in the area B.
Through the man-machine interaction described with respect to fig. 28, the terminal device can realize mode 1, i.e. two video files are obtained by time sharing through two cameras.
Fig. 29 is an interface schematic diagram of capturing two videos simultaneously under a picture-in-picture layout according to an embodiment of the present application. As shown in the interface A in fig. 29, the preview image A and the preview image B form a picture-in-picture layout. The preview interface includes the control 01, which can move within a preset range. The user can drag the control 01 upward from its initial position, and in response the terminal device controls the control 01 to move upward. As shown in the interface B in fig. 29, if the user desires to shoot two videos through the camera A and the camera B at the same time, the user may stop at a certain position after dragging the control 01; if the user keeps pressing the control 01 at that position for longer than the preset duration, the video recording function of the terminal device is triggered and the terminal device records through the camera A and the camera B simultaneously. As shown in the interface C in fig. 29, after recording starts, the user can release the control 01 and the terminal device controls it to return to its initial position.
Illustratively, control 01 displays an icon of video recording (the circular icon in control 01 changes to a square icon) after the terminal device begins recording video. If the user wants to end video recording, the user can click on the control 01, and the terminal equipment stops video recording in response to the operation of clicking on the control 01 by the user, and simultaneously obtains a video file obtained by shooting by the camera A and a video file obtained by shooting by the camera B. After stopping the video recording, the style of control 01 is restored to that of control 01 as shown in the a interface in fig. 29 (square icons in control 01 are restored to circular icons).
Through the man-machine interaction described with respect to fig. 29, the terminal device can realize mode 2, i.e. two video files are obtained simultaneously through two cameras.
Fig. 30 is an interface schematic diagram of capturing one video simultaneously under a picture-in-picture layout according to an embodiment of the present application. Similar to the description of fig. 22, if the user desires to record the images collected by the camera A and the camera B at the same time as one video, the user can press the control 01; when the duration of the press exceeds the preset duration, the video recording function of the terminal device is triggered. After the user clicks the control 01 to stop recording, the terminal device renders the video shot by the camera A and the video shot by the camera B into one video and encodes it as one video file. For details, refer to the description of fig. 22, which is not repeated here.
In fig. 24 to 30, the preview image A and the preview image B are laid out as a picture-in-picture, under which the terminal device may provide the user with the control 08 to customize the shooting mode. Similarly, when the layout of the preview image A and the preview image B is a superimposed layout, the terminal device may also provide the control 08 under the superimposed layout for the user to customize the shooting mode.
It should be understood that, in the superimposed layout, the terminal device superimposes the image acquired by the camera A and the image acquired by the camera B into one superimposed image; that is, the superimposed image is what is presented to the user in the preview interface, similar to an image obtained by double-exposing two images. During shooting, the terminal device can decide how to composite the images acquired by the two cameras according to a blend mode selected by the user or by the system.
In the embodiment of the application, the terminal device does not need to shoot two images separately and then perform double exposure; instead, it presents the superimposed image with the double-exposure effect to the user in the preview interface in the superimposed layout, and the user obtains a superimposed image with the double-exposure effect in a single shot, making this way of obtaining a superimposed image more convenient and quicker.
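As a hedged sketch of one possible blend, the snippet below averages the RGB channels of two frames; the patent leaves the actual blend mode to the user or the system, so this particular function is purely illustrative:

```kotlin
// Composite one superimposed frame from two same-sized ARGB frames by
// averaging each color channel (a simple stand-in for a real blend mode).
fun blendFrames(a: IntArray, b: IntArray): IntArray =
    IntArray(minOf(a.size, b.size)) { i ->
        val (ra, ga, ba) = Triple((a[i] shr 16) and 0xFF, (a[i] shr 8) and 0xFF, a[i] and 0xFF)
        val (rb, gb, bb) = Triple((b[i] shr 16) and 0xFF, (b[i] shr 8) and 0xFF, b[i] and 0xFF)
        ((ra + rb) / 2 shl 16) or ((ga + gb) / 2 shl 8) or ((ba + bb) / 2)
    }
```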
Human-computer interaction operation of a user-specified photographing mode in a superimposed layout is described below with reference to the accompanying drawings.
Fig. 31 is an interface schematic diagram of capturing two images simultaneously under a superimposed layout according to an embodiment of the present application. As shown in fig. 31, the preview image A and the preview image B form a superimposed layout. The preview interface includes the control 08, which can move within the preview interface. If the user desires to shoot two images at the same time, the user can click the control 08; in response, the terminal device shoots through the camera A and the camera B simultaneously to obtain two image files.
The manner of capturing two videos simultaneously under the superimposed layout is similar to the manner of capturing two images simultaneously shown in fig. 31, the difference being that pressing the control 08 for longer than the preset duration triggers the video recording function of the terminal device.
Fig. 32 is an interface schematic diagram of capturing one image simultaneously under a superimposed layout according to an embodiment of the present application. As shown in fig. 32, the preview image A and the preview image B form a superimposed layout, and the preview interface includes the control 01. If the user desires to obtain a superimposed image in which the image acquired by the camera A and the image acquired by the camera B are superimposed, the user can click the control 01; in response, the terminal device composites the images acquired by the two cameras to obtain one image file.
The manner of capturing one video simultaneously under the superimposed layout is similar to the manner of capturing one image simultaneously, the difference being that pressing the control 01 for longer than the preset duration triggers the video recording function of the terminal device; after the user stops recording, the terminal device renders the image acquired by the camera A and the image acquired by the camera B into one video and encodes it as one video file.
In the description above with reference to fig. 16 to 32, the terminal device may add the control 08 to the preview interface. When multiple preview images are presented in a stitched layout or a picture-in-picture layout, the user may move the control 08 to the area corresponding to the preview image to be shot, so as to capture an image or a video. When multiple preview images are presented in a superimposed layout, since the preview images are superimposed together, the terminal device cannot distinguish which camera the user intends to shoot with after the user clicks the control 08, and therefore captures multiple image files or video files through the multiple cameras simultaneously.
For the large-screen terminals shown in fig. 11 to 15, the user may likewise move the control 08 to a target area to shoot in the area in which the control 08 is located; this approach is not limited by the layout of the multiple preview images on the large-screen terminal.
Another shutter key layout will be described with reference to the drawings. In this way of layout of the shutter key, the terminal device may add a control 10 to the preview interface, and the user may customize the shooting mode by moving the control 10.
The scheme of adding the control 10 to the preview interface to implement the custom shooting mode may be referred to as scheme two hereinafter.
First, the human-computer interaction for a user-specified shooting mode under the stitched layout is introduced.
Fig. 33 is an interface schematic diagram of another shutter key layout under a stitched layout according to an embodiment of the present application. As shown in fig. 33, the preview image A captured by the camera A and the preview image B captured by the camera B are displayed in the preview interface of the camera application. The camera A and the camera B may be the same camera or two different cameras.
It can be seen that the preview interface includes the control 01, the control 10 and a virtual track 11. The control 01 is the existing shutter key of the preview interface of the camera application, and the control 10 is a newly added shutter key whose initial position is at the junction of the area A and the area B; the initial position of the control 10 may therefore also be referred to as the intersection position. The area A displays the image acquired by the camera A, and the area B displays the image acquired by the camera B. With the control 10 laid out as shown in fig. 33, the user can customize the shooting mode of the multiple cameras by changing the area in which the control 10 is located.
It should be noted that the user may move the control 10 along the virtual track 11; when the user releases the control 10 after moving it, the control 10 automatically returns to the junction of the area A and the area B. That is, when the control 10 is not being manipulated, it always rests at the intersection position.
Illustratively, the virtual track 11 may be semi-transparent. The virtual track 11 and the control 10 may be fixedly displayed as a whole at any position along the junction of the area A and the area B; alternatively, the user may drag the control 10 to move the control 10 and the virtual track 11 as a whole left and right along the junction, the control 10 remaining adsorbed to the junction throughout the movement.
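A compact sketch of this spring-back behavior follows; the coordinates and camera identifiers are assumptions:

```kotlin
// Control 10 slides along the virtual track but springs back to the
// junction whenever it is released.
class TrackShutter(
    private val junctionY: Float,
    private val trackTop: Float,
    private val trackBottom: Float,
) {
    var y: Float = junctionY   // control 10 starts at the intersection position
        private set

    fun drag(toY: Float) {
        y = toY.coerceIn(trackTop, trackBottom)  // confined to the virtual track
    }

    // Returns which cameras the release drives, then springs back.
    fun release(): List<String> {
        val cameras = when {
            y < junctionY -> listOf("A")   // released inside region A
            y > junctionY -> listOf("B")   // released inside region B
            else -> listOf("A", "B")       // clicked at the junction itself
        }
        y = junctionY
        return cameras
    }
}
```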
Fig. 34 is an interface schematic diagram of time-sharing image capture under another stitched layout according to an embodiment of the present application. As shown in the interface A in fig. 34, the preview image A and the preview image B form a stitched layout, and the control 10 is displayed at the junction of the area A and the area B. If the user desires to capture an image collected by the camera A, the user may drag the control 10 upward along the virtual track 11. As shown in the interface B in fig. 34, after the control 10 has moved into the area A, the user can release it. In response to the user releasing the control 10 in the area A, the terminal device shoots through the camera A to obtain an image file. After the release, as shown in the interface C in fig. 34, the terminal device controls the control 10 to return to the intersection position.
If the user desires to capture an image collected by the camera B, the user may drag the control 10 downward along the virtual track 11, as shown in the interface D in fig. 34. As shown in the interface E in fig. 34, after the control 10 has moved into the area B, the user can release it. In response to the user releasing the control 10 in the area B, the terminal device shoots through the camera B to obtain an image file. After the release, as shown in the interface F in fig. 34, the terminal device controls the control 10 to return to the intersection position.
The shooting mode described with respect to fig. 34 is mode 1: and respectively shooting through the two cameras and storing the shooting results as two shooting results.
In the shutter key layout shown in fig. 33, if the user desires to shoot the image acquired by the camera A and the image acquired by the camera B at the same time as two images, the user can click the control 10 at the intersection position. In response to the user clicking the control 10, the terminal device shoots through the camera A and the camera B simultaneously, and two image files are obtained.
In the shutter key layout shown in fig. 33, if the user desires to shoot the image acquired by the camera A and the image acquired by the camera B at the same time as one image, the user can click the control 01. In response to the user clicking the control 01, the terminal device shoots through the camera A and the camera B simultaneously, and one image file is obtained.
In the shutter key layout shown in fig. 33, the manner of capturing video is similar to that described with respect to fig. 20 to 23: the user needs to press the control 10 for longer than the preset duration to trigger the video recording function of the terminal device.
Fig. 35 is an interface schematic diagram of time-sharing video capture under another stitched layout according to an embodiment of the present application. As shown in the interface A in fig. 35, if the user desires to record a video of the image collected by the camera A, the user may drag the control 10 toward the top end of the virtual track 11 (the end in the area A). As shown in the interface B in fig. 35, when the user moves the control 10 to the top end of the virtual track 11 and keeps pressing it for longer than the preset duration, the video recording function of the terminal device is triggered and the terminal device records through the camera A.
After the video recording function is triggered, as shown in the interface C in fig. 35, the terminal device controls the control 10 to return to the intersection position and displays the control 09 in the area A, the control 09 being used to stop recording. In response to the user clicking the control 09, the terminal device stops recording and encodes the video shot by the camera A as a video file.
Similarly, as shown in the interface D in fig. 35, if the user desires to record a video of the image collected by the camera B, the user can drag the control 10 toward the bottom end of the virtual track 11 (the end in the area B). As shown in the interface E in fig. 35, when the user moves the control 10 to the bottom end of the virtual track 11 and keeps pressing it for longer than the preset duration, the video recording function of the terminal device is triggered and the terminal device records through the camera B. After the video recording function is triggered, as shown in the interface F in fig. 35, the terminal device controls the control 10 to return to the intersection position and displays the control 09 in the area B, the control 09 being used to stop recording. In response to the user clicking the control 09, the terminal device stops recording and encodes the video shot by the camera B as a video file.
In the shutter key layout shown in fig. 33, if the user desires to record the image collected by the camera A and the image collected by the camera B at the same time as two videos, the user can press the control 10 at the intersection position. In response to the press of the control 10 exceeding the preset duration, the terminal device records through the camera A and the camera B simultaneously, and two video files are obtained after recording stops.
In the shutter key layout shown in fig. 33, if the user desires to record the images collected by the camera A and the camera B at the same time as one video, the user can press the control 01. In response to the press of the control 01 exceeding the preset duration, the terminal device shoots through the camera A and the camera B simultaneously, and one video file is obtained after recording stops.
In the above description with reference to fig. 33 to 35, the terminal device may add a control 10 in the preview interface, and in the case that a plurality of preview images are presented in a stitched layout, the user may move the control 10 to an area corresponding to the preview image desired to be photographed to implement image photographing or video photographing.
For the large-screen terminals shown in fig. 11 to 15, the initial position of the control 10 is at a common junction of the multiple areas (the areas in which the multiple preview images are displayed). The terminal device may determine the number of virtual tracks along which the control 10 can move according to the number and positions of the preview images.
Fig. 36 is an interface schematic diagram of a shutter key layout for a large-screen terminal according to an embodiment of the present application. As shown in fig. 36, the preview interface of the terminal device includes the preview image A collected by the camera A, the preview image B collected by the camera B and the preview image C collected by the camera C, the three forming a stitched layout. The control 10 is located at the junction of the area A, the area B and the area C, where the area A displays the image acquired by the camera A, the area B displays the image acquired by the camera B, and the area C displays the image acquired by the camera C. A virtual track 11 extends in the direction of each preview image; the user can move the control 10 along a virtual track 11 to a target area and shoot through the camera corresponding to that area, thereby shooting the image of the target area independently. The user can also click the control 10 at the junction of the areas to shoot through the multiple cameras simultaneously and obtain multiple image files, or long-press the control 10 at the junction to shoot through the multiple cameras simultaneously and obtain multiple video files.
For scheme one or scheme two, the terminal device may provide the function of the new shutter key in the camera application; when the user enables this function and calls multiple cameras to shoot, the terminal device automatically displays the control 08 or the control 10 on the preview interface. The initial position of the control 10 is at the junction of the multiple areas, and the initial position of the control 08 may be determined according to the user's last use.
Next, another shutter key layout is described with reference to the drawings. In this layout, the terminal device does not add a new control; instead, the control 01 is movable within a preset range, and the user customizes the shooting mode by moving the control 01.
The scheme of moving the control 01 in the preview interface to implement the custom shooting mode may be referred to as scheme three hereinafter.
First, the human-computer interaction flow for a user-specified shooting mode under the stitched layout is introduced.
Fig. 37 is an interface schematic diagram of time-sharing image capture under yet another stitched layout according to an embodiment of the present application. The control 01 in fig. 37 can move within the area enclosed by the dashed box (invisible to the user), which the embodiment of the present application calls the movable area. When detecting an operation of the user dragging the control 01, the terminal device may display a cursor 12 on the screen, and the position of the cursor 12 follows the movement of the control 01. The area enclosed by the dashed box can thus be understood as a virtual touch pad: the position of the control 01 on the touch pad maps to the position of the cursor 12 on the whole screen, so the user can move the cursor on the screen by moving the control 01.
As shown in interface a in fig. 37, the movable area of the control 01 contains a hot zone D, a hot zone E and a hot zone F. When the user drags the control 01 to the hot zone D, the terminal device detects that the user's finger is in the hot zone D and correspondingly displays the cursor 12 in the area A; when the user drags the control 01 to the hot zone F, the terminal device detects that the user's finger is in the hot zone F and correspondingly displays the cursor 12 in the area B; when the user drags the control 01 to the hot zone E, the terminal device detects that the user's finger is in the hot zone E and correspondingly displays the cursor 12 at the junction of the area A and the area B.
It should be understood that the hot zone D, the hot zone E and the hot zone F are not visible to the user. The terminal device detects the hot zone in which the user's finger is located, so that the shooting mode is determined by the hot zone in which the finger is located when the user releases the control.
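The virtual touch pad described above amounts to a linear mapping from the movable area onto the whole screen, plus a hit test against the hot zones. The following Kotlin sketch shows one minimal way to express this, assuming rectangular hot zones; the Rect helper, the zone names and all coordinate values are illustrative assumptions rather than details from the embodiments:

```kotlin
// Axis-aligned rectangle with a containment test for a finger position.
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    operator fun contains(p: Pair<Float, Float>): Boolean =
        p.first in left..right && p.second in top..bottom
}

// Map the control 01's position inside the movable area (the "touch pad")
// to a cursor 12 position on the whole screen, both in pixels.
fun cursorFor(control: Pair<Float, Float>, pad: Rect, screen: Rect): Pair<Float, Float> {
    val u = (control.first - pad.left) / (pad.right - pad.left)
    val v = (control.second - pad.top) / (pad.bottom - pad.top)
    return Pair(
        screen.left + u * (screen.right - screen.left),
        screen.top + v * (screen.bottom - screen.top)
    )
}

// Return the name of the hot zone the finger is in, or null if none.
fun zoneAt(finger: Pair<Float, Float>, zones: Map<String, Rect>): String? =
    zones.entries.firstOrNull { finger in it.value }?.key
```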
As shown in interface a of fig. 37, if the user desires to capture the image collected by the camera A, the user may move the control 01 so that the cursor 12 is in the area A. When the cursor 12 is in the area A, the user can trigger shooting by releasing the control 01. When detecting that the user releases the control 01 in the hot zone D, the terminal device shoots through the camera A to obtain an image file.
After the user releases the control, as shown in interface b in fig. 37, the control 01 returns to its initial position and the cursor 12 disappears from the screen.
Fig. 38 is an interface schematic diagram of time-sharing captured images under still another stitched layout according to an embodiment of the present application. As shown in interface a of fig. 38, if the user desires to capture the image collected by the camera B, the user may move the control 01 so that the cursor 12 is in the area B. When the cursor 12 is in the area B, the user can trigger shooting by releasing the control 01. When detecting that the user releases the control 01 in the hot zone F, the terminal device shoots through the camera B to obtain an image file.
After the user releases the control, as shown in interface b in fig. 38, the control 01 returns to its initial position and the cursor 12 disappears from the screen.
Fig. 39 is an interface schematic diagram of two images captured simultaneously under still another stitched layout according to an embodiment of the present application. As shown in interface a in fig. 39, if the user desires to capture the image collected by the camera A and the image collected by the camera B at the same time, the user may move the control 01 so that the cursor 12 is displayed at the junction of the area A and the area B. When the cursor 12 is at the junction of the area A and the area B, the user can trigger shooting by releasing the control 01. When detecting that the user releases the control 01 in the hot zone E, the terminal device shoots through the camera A and the camera B simultaneously to obtain two image files. As shown in interface b in fig. 39, after the user releases the control 01 to trigger the photographing function of the terminal device, the terminal device controls the control 01 to return to its initial position.
If the user desires to shoot through the camera A and the camera B simultaneously and obtain a single combined image, the user can click the control 01 at its initial position. In response to the user clicking the control 01 at its initial position, the terminal device shoots through the camera A and the camera B simultaneously to obtain one image file, which combines the image collected by the camera A and the image collected by the camera B.
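Taken together, figs. 37 to 39 define a dispatch from the release position to a shooting mode: hot zone D or hot zone F selects a single camera, hot zone E selects both cameras as two separate files, and a click at the initial position yields one combined file. A hedged Kotlin sketch of this dispatch follows; the ShotMode hierarchy and the zone labels are assumptions for the example:

```kotlin
sealed interface ShotMode
data class Single(val camera: String) : ShotMode          // one file from one camera
data class Separate(val cameras: List<String>) : ShotMode // one file per camera
data class Combined(val cameras: List<String>) : ShotMode // cameras merged into one file

// Release after a drag: the hot zone under the finger picks the mode.
fun onRelease(zone: String?): ShotMode? = when (zone) {
    "D" -> Single("A")                // cursor was in area A
    "F" -> Single("B")                // cursor was in area B
    "E" -> Separate(listOf("A", "B")) // cursor at the junction: two files
    else -> null                      // released outside every hot zone: no shot
}

// A click at the initial position, with no drag, yields one combined image.
fun onClickAtInitialPosition(): ShotMode = Combined(listOf("A", "B"))
```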
Under the picture-in-picture layout, the user can also customize the shooting mode through the control 01.
Fig. 40 is an interface schematic diagram of time-sharing captured images under another picture-in-picture layout provided in an embodiment of the present application. For the specific human-machine interaction operations, refer to the descriptions of fig. 37 and fig. 38, which are not repeated here.
Fig. 41 is an interface schematic diagram of two images captured simultaneously under another picture-in-picture layout provided in an embodiment of the present application. For the specific human-machine interaction operations, refer to the description of fig. 39, which is not repeated here.
Fig. 42 is an interface schematic diagram of two images captured simultaneously under another superimposed layout according to an embodiment of the present application. As shown in fig. 42, the preview image A and the preview image B are arranged in a superimposed layout. The preview interface includes a control 01, which can move within a preset range. As shown in interface a in fig. 42, if the user desires to capture two images at the same time, the user may drag the control 01 upward from its initial position, and in response, the terminal device controls the control 01 to move upward. When detecting that the user releases the control 01, the terminal device triggers its photographing function and shoots through the camera A and the camera B simultaneously to obtain two image files. As shown in interface b in fig. 42, after the user releases the control to trigger the photographing function, the terminal device may control the control 01 to return to its initial position.
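Under the superimposed layout the gesture reduces to an upward drag followed by a release. Below is a minimal sketch, assuming a fixed pixel threshold for how far the control must travel before the release triggers capture; the threshold value and class name are illustrative:

```kotlin
// Tracks a single vertical drag of the control 01 and fires the capture
// callback only if the control moved up by at least thresholdPx on release.
class SwipeUpShutter(private val thresholdPx: Float, private val capture: () -> Unit) {
    private var startY = 0f
    fun onDragStart(y: Float) { startY = y }
    fun onDragEnd(y: Float) {
        if (startY - y >= thresholdPx) capture()
        // In either case the UI would animate the control back to its initial position.
    }
}

fun main() {
    val shutter = SwipeUpShutter(120f) { println("shoot through camera A and camera B") }
    shutter.onDragStart(800f)
    shutter.onDragEnd(600f) // moved 200 px upward, so capture fires
}
```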
Under the shutter key layout described in fig. 37, video is captured in a manner similar to that described above for the other shutter key layouts.
Fig. 43 is an interface schematic diagram of time-sharing captured videos under still another stitched layout according to an embodiment of the present application. As shown in interface a of fig. 43, if the user desires to record the image collected by the camera B, the user may move the control 01 so that the cursor 12 is displayed in the area B. When the cursor 12 is displayed in the area B, the user may stop moving the control 01 and press it until the duration of the press exceeds the preset duration, thereby triggering the video recording function of the terminal device. In response to the duration of the user's press on the control 01 exceeding the preset duration, the terminal device records video through the camera B.
After the video recording function is triggered, as shown in interface b in fig. 43, the terminal device controls the control 01 to return to its initial position and displays a control 09 in the area B, where the control 09 is used to stop video recording. In response to the user clicking the control 09, the terminal device stops video recording and encodes the video shot by the camera B into a video file.
Similarly, if the user desires to record the image collected by the camera A, the user may move the control 01 so that the cursor 12 is displayed in the area A, stop moving the control 01, and press it until the duration of the press exceeds the preset duration, thereby triggering the video recording function of the terminal device. The terminal device starts recording through the camera A and, after recording stops, encodes the video shot by the camera A into a video file.
Similarly, if the user desires to record the images collected by the camera A and the camera B simultaneously and obtain two videos, the user may move the control 01 so that the cursor 12 is displayed at the junction of the area A and the area B, stop moving the control 01, and press it until the duration of the press exceeds the preset duration, thereby triggering the video recording function of the terminal device. The terminal device starts recording through the camera A and the camera B and, after recording stops, encodes the video shot by the camera A into one video file and the video shot by the camera B into another, obtaining two video files.
If the user instead desires to record through the camera A and the camera B simultaneously and obtain a single combined video, the user can press the control 01 at its initial position until the duration of the press exceeds the preset duration, thereby triggering the video recording function of the terminal device. The terminal device starts recording through the camera A and the camera B and, after recording stops, encodes the video shot by the camera A and the video shot by the camera B into one video file.
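Across these layouts, the photo-versus-video distinction hinges on the press duration. A small sketch of that rule follows, assuming a 500 ms threshold (the embodiments only say "preset duration"); note that in the described flow, recording starts as soon as the threshold elapses while the control is still pressed, rather than on release:

```kotlin
// Assumed threshold; the patent text leaves the preset duration unspecified.
const val LONG_PRESS_MS = 500L

enum class ShutterAction { TAKE_PHOTO, START_RECORDING }

// Called while the control is held: once the hold time reaches the preset
// duration, recording starts; a shorter press that ends sooner takes a photo.
fun classifyPress(heldForMs: Long): ShutterAction =
    if (heldForMs >= LONG_PRESS_MS) ShutterAction.START_RECORDING
    else ShutterAction.TAKE_PHOTO
```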
It should be understood that, under the shutter key layout shown in fig. 37, if the preview image A and the preview image B are arranged in a picture-in-picture layout, video is recorded in a manner similar to that under the stitched layout, which is not repeated here.
The second and third schemes have in common that, once the user moves the shutter key (the control 10 or the control 01), the photographing or video recording function of the terminal device starts immediately after the user stops moving and releases the control. However, the user may touch the control accidentally while moving it, or may change his or her mind mid-operation and wish to cancel shooting.
For the second scheme, if the terminal device detects that the user moves the control 10 from its initial position along the virtual track 11 and then moves it back to the initial position (i.e., the junction of the multiple areas displaying the preview images), the terminal device determines that shooting is cancelled.
For the third scheme, if the terminal device detects that the user moves the control 01 from its initial position and then moves it back to the initial position, the terminal device determines that shooting is cancelled.
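For both schemes, the cancellation rule reduces to comparing the release position with the initial position. A hedged sketch follows, assuming a small pixel tolerance around the initial position (the 24 px default is an assumption, not a value from the embodiments):

```kotlin
import kotlin.math.hypot

// Shooting is cancelled when the shutter control is released back at (or
// sufficiently near) the position it started from.
fun isCancelled(
    releaseX: Float, releaseY: Float,
    initialX: Float, initialY: Float,
    tolerancePx: Float = 24f
): Boolean =
    hypot((releaseX - initialX).toDouble(), (releaseY - initialY).toDouble()) <=
        tolerancePx.toDouble()
```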
In the first scheme, moving the control 08 is decoupled from the shooting operation, so accidental operations and mid-operation cancellation do not arise.
It should be understood that the sequence numbers of the above processes do not mean the order of execution, and the execution order of the processes should be determined by the functions and internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The terminal photographing method according to the embodiment of the present application is described in detail above with reference to fig. 1 to 43, and the terminal device according to the embodiment of the present application will be described in detail below with reference to fig. 44.
Fig. 44 is another schematic block diagram of a terminal device 100 provided in an embodiment of the present application. The terminal device 100 shown in fig. 44 includes a display module 41 and a processing module 42.
In one embodiment, N cameras of the terminal device are enabled, and the display module 41 is configured to: display a first preview interface of the camera application, where the first preview interface includes M areas and a movable first control, each of the M areas corresponds to one camera, each area displays the image collected by the corresponding camera, N is an integer greater than or equal to 1, and M is an integer greater than or equal to 2. The processing module 42 is configured to: in response to a first operation of the user on the first control in a first area, shoot through a first camera of the N cameras to obtain a first file, where the first area corresponds to the first camera and is one of the M areas; in response to a drag operation of the user on the first control, control the first control to move to a second area, where the second area is one of the M areas; and in response to a second operation of the user on the first control in the second area, shoot through a second camera of the N cameras to obtain a second file, where the second area corresponds to the second camera.
Optionally, the processing module 42 is configured to: in response to the drag operation of the user on the first control, control the first control to move from the first area to the second area.
Optionally, if the first operation includes a click operation, the first file is an image file; or if the first operation comprises a long press operation, the first file is a video file.
Optionally, if the second operation includes a click operation, the second file is an image file; or if the second operation comprises a long press operation, the second file is a video file.
Optionally, the layout of the images displayed by the M areas includes a stitched layout or a picture-in-picture layout. The processing module 42 is configured to: in response to the drag operation of the user on the first control, control the first control to move from a first juncture position to the first area, where the first juncture position is located at the juncture of the M areas.
Optionally, the processing module 42 is configured to: control the first control to move from the first area to the first juncture position; in response to the drag operation of the user on the first control, control the first control to move from the first juncture position to the second area; and control the first control to move from the second area to the first juncture position.
Optionally, if the first operation includes a release operation, the first file is an image file; or, if the first operation includes a long-press operation, the first file is a video file.
Optionally, if the second operation includes a release operation, the second file is an image file; or, if the second operation includes a long-press operation, the second file is a video file.
Optionally, the processing module 42 is configured to: in response to a long-press operation of the user on the first control in the first area, shoot a video through the first camera and display a second control on the first preview interface, where the second control is used to stop video recording; and in response to the user clicking the second control, stop video recording to obtain a video file and control the second control to disappear from the first preview interface.
Optionally, the second control and the first control are displayed in the same one of the M areas.
Optionally, the processing module 42 is configured to: in response to an operation of the user on the first control at a second juncture position, shoot through the N cameras to obtain M files, where the second juncture position is located at the juncture of the M areas.
Optionally, the processing module 42 is configured to: in response to the drag operation of the user on the first control, control the first control to move to the second juncture position; and when detecting that the first control has moved to the juncture of the M areas, control the first control to snap (be adsorbed) to the juncture of the M areas.
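The adsorption behavior can be sketched as a snap-to-target check at the end of a drag; the snap radius and the function name below are assumptions for the example:

```kotlin
import kotlin.math.hypot

// If a drag ends within snapRadiusPx of the juncture of the M areas, the
// control lands exactly on the juncture; otherwise it stays where released.
fun snapToJuncture(
    x: Float, y: Float,
    junctureX: Float, junctureY: Float,
    snapRadiusPx: Float = 48f
): Pair<Float, Float> {
    val d = hypot((x - junctureX).toDouble(), (y - junctureY).toDouble())
    return if (d <= snapRadiusPx.toDouble()) Pair(junctureX, junctureY) else Pair(x, y)
}
```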
Optionally, the processing module 42 is configured to: in response to an operation of the user opening the camera application, display a second preview interface, where the second preview interface includes a third control and a first image collected by a third camera of the N cameras; in response to a selection operation of the user on the third control in the second preview interface, display a third preview interface, where the third preview interface includes the first image and a second image, the second image floats above the first image, and the second image is collected by a fourth camera of the N cameras; in response to a drag operation of the user on the second image, control the second image to move on the third preview interface; in response to the user releasing the second image, determine the position of the finger when the user releases the second image; and determine the area displaying the first image and the area displaying the second image according to the position of the finger.
Optionally, the processing module 42 is configured to: determine a layout mode of the first image and the second image according to the position of the finger; and determine the area displaying the first image and the area displaying the second image according to the layout mode.
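The embodiments do not spell out how the finger position maps to a layout mode. One plausible rule, shown purely as an assumption for illustration, is that dropping the second image near a screen edge selects the stitched layout while dropping it elsewhere keeps picture-in-picture:

```kotlin
enum class LayoutMode { STITCHED, PICTURE_IN_PICTURE }

// Hypothetical rule: a release within edgeFraction of any screen border is
// read as "dock the second image", i.e. switch to a stitched layout.
fun layoutFor(
    fingerX: Float, fingerY: Float,
    screenW: Float, screenH: Float,
    edgeFraction: Float = 0.15f
): LayoutMode {
    val nearEdge = fingerX < screenW * edgeFraction ||
                   fingerX > screenW * (1 - edgeFraction) ||
                   fingerY < screenH * edgeFraction ||
                   fingerY > screenH * (1 - edgeFraction)
    return if (nearEdge) LayoutMode.STITCHED else LayoutMode.PICTURE_IN_PICTURE
}
```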
In another embodiment, the display module 41 is configured to: display a fourth preview interface of the camera application, where the fourth preview interface includes M areas and a movable first control, each of the M areas corresponds to one camera, each area displays the image collected by the corresponding camera, N is an integer greater than or equal to 1, M is an integer greater than or equal to 2, and the images displayed by the M areas are arranged in a superimposed layout. The processing module 42 is configured to: in response to a click operation of the user on the first control in a third area, shoot through the N cameras to obtain M image files, where the third area is one of the M areas; or, in response to a long-press operation of the user on the first control in the third area, shoot through the N cameras to obtain M video files.
The functions of the terminal device 100 may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above. It should be understood that the terminal device 100 here is embodied in the form of functional modules. The term module here may refer to an application-specific integrated circuit (ASIC), an electronic circuit, a processor (e.g., a shared, dedicated, or group processor) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. In the embodiment of the present application, the terminal device 100 may also be a chip or a chip system, for example a system on chip (SoC).
The application also provides a computer readable storage medium, in which computer executable instructions are stored, where the computer executable instructions, when executed by a processor, can implement a method executed by a terminal device in any of the above method embodiments.
Embodiments of the present application also provide a computer program product, which includes a computer program, where the computer program when executed by a processor may implement a method performed by a terminal device in any of the above method embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system, apparatus and module may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely specific embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art may easily think about changes or substitutions within the technical scope of the embodiments of the present application, and the changes or substitutions are intended to be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (19)

1. A terminal photographing method, applied to a terminal device, N cameras of the terminal device being enabled, the method comprising:
displaying a first preview interface of a camera application, wherein the first preview interface comprises M areas and a movable first control, each area in the M areas corresponds to one camera, each area displays an image acquired by the corresponding camera, N is an integer greater than or equal to 1, and M is an integer greater than or equal to 2;
responding to a first operation of a user on the first control in a first area, shooting by a first camera in the N cameras to obtain a first file, wherein the first area corresponds to the first camera, and the first area is one of the M areas;
responding to the dragging operation of a user on the first control, and controlling the first control to move to a second area, wherein the second area is one of the M areas;
and responding to a second operation of the user on the first control in the second area, shooting by a second camera in the N cameras to obtain a second file, wherein the second area corresponds to the second camera.
2. The method of claim 1, wherein the layout of the images displayed by the M areas comprises a stitched layout or a picture-in-picture layout;
wherein the responding to the dragging operation of the user on the first control and controlling the first control to move to a second area comprises:
and responding to the dragging operation of the user on the first control, and controlling the first control to move from the first area to the second area.
3. The method of claim 2, wherein,
if the first operation comprises a clicking operation, the first file is an image file; or,
if the first operation comprises a long-press operation, the first file is a video file.
4. A method according to claim 2 or 3, characterized in that,
if the second operation comprises a clicking operation, the second file is an image file; or,
if the second operation comprises a long-press operation, the second file is a video file.
5. The method of claim 1, wherein the layout of the images displayed by the M areas comprises a stitched layout or a picture-in-picture layout;
before the first file is obtained by shooting through the first camera in the N cameras in response to the first operation of the user on the first control in the first area, the method further comprises:
and responding to the dragging operation of the user on the first control, controlling the first control to move from a first juncture position to the first area, wherein the first juncture position is positioned at the juncture of the M areas.
6. The method of claim 5, wherein after the capturing of the first file by the first camera of the N cameras in response to the first operation of the user on the first control in the first area, the method further comprises:
controlling the first control to move from the first area to the first juncture position;
wherein the responding to the dragging operation of the user on the first control and controlling the first control to move to a second area comprises:
responding to the dragging operation of a user on the first control, and controlling the first control to move from the first juncture to the second area;
after the second file is obtained by shooting through a second camera of the N cameras in response to the second operation of the user on the first control in the second area, the method further comprises:
and controlling the first control to move from the second area to the first juncture.
7. The method according to claim 5 or 6, wherein,
if the first operation comprises a release operation, the first file is an image file; or,
if the first operation comprises a long-press operation, the first file is a video file.
8. The method according to any one of claims 5 to 7, wherein,
if the second operation comprises a release operation, the second file is an image file; or,
if the second operation comprises a long-press operation, the second file is a video file.
9. The method of any one of claims 1 to 8, wherein the capturing, by a first camera of the N cameras, a first file in response to a first operation of the first control by a user in a first area, comprises:
responding to long-press operation of a user on the first control in the first area, shooting a video through the first camera, and displaying a second control on the first preview interface, wherein the second control is used for stopping video recording;
and responding to the operation of clicking the second control by the user, stopping video recording to obtain a video file, wherein the second control disappears from the first preview interface.
10. The method of claim 9, wherein the second control and the first control are displayed in the same one of the M areas.
11. The method according to claim 1, wherein the method further comprises:
and responding to the operation of a user on the first control at a second juncture position, and shooting by the N cameras to obtain M files, wherein the second juncture position is positioned at the juncture of the M areas.
12. The method of claim 11, wherein before the M files are obtained by shooting through the N cameras in response to the operation on the first control at the second juncture position, the method further comprises:
responding to the dragging operation of a user on the first control, and controlling the first control to move to the second juncture position;
when the first control is detected to move to the junction of the M areas, the first control is controlled to be adsorbed to the junction of the M areas.
13. The method of any one of claims 1 to 12, wherein before the displaying of the first preview interface of the camera application, the method further comprises:
responding to the operation of opening the camera application by a user, displaying a second preview interface, wherein the second preview interface comprises a third control and a first image acquired by a third camera of the N cameras;
responding to the selection operation of a user on the third control on the second preview interface, displaying a third preview interface, wherein the third preview interface comprises the first image and a second image, the second image is suspended above the first image, and the second image is acquired by a fourth camera in the N cameras;
responding to the dragging operation of the user on the second image, and controlling the second image to move on the third preview interface;
responding to the release operation of the user on the second image, determining the position of the finger when the user releases the second image;
and determining an area for displaying the first image and an area for displaying the second image according to the position of the finger.
14. The method of claim 13, wherein determining the area in which the first image is displayed and the area in which the second image is displayed based on the position of the finger comprises:
determining a layout mode of the first image and the second image according to the position of the finger;
and determining the area for displaying the first image and the area for displaying the second image according to the layout mode.
15. A terminal photographing method, applied to a terminal device, N cameras of the terminal device being enabled, the method comprising:
displaying a fourth preview interface of the camera application, wherein the fourth preview interface comprises M areas and a movable first control, each area in the M areas corresponds to one camera, each area displays images acquired by the corresponding camera, N is an integer greater than or equal to 1, M is an integer greater than or equal to 2, and the layout mode of the images displayed by the M areas is superposition layout;
responding to a clicking operation of a user on the first control in a third area, and shooting by the N cameras to obtain M image files, wherein the third area is one of the M areas; or,
and responding to the long-press operation of the user on the first control in the third area, and shooting through the N cameras to obtain M video files.
16. A terminal device, comprising means for performing the method according to any one of claims 1 to 14, or comprising means for performing the method according to claim 15.
17. A terminal device, comprising: a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor is configured to invoke and execute the computer program to cause the terminal device to perform the method of any of claims 1 to 14 or to cause the terminal device to perform the method of claim 15.
18. A computer readable storage medium storing a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 14 or causes the computer to perform the method of claim 15.
19. A computer program product comprising computer program code embodied therein, which when run on a computer causes the computer to carry out the method according to any one of claims 1 to 14 or causes the computer to carry out the method according to claim 15.
CN202210829336.3A 2022-07-15 2022-07-15 Terminal shooting method and terminal equipment Active CN116095464B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202311705028.0A CN117729419A (en) 2022-07-15 2022-07-15 Terminal shooting method and terminal equipment
CN202210829336.3A CN116095464B (en) 2022-07-15 2022-07-15 Terminal shooting method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210829336.3A CN116095464B (en) 2022-07-15 2022-07-15 Terminal shooting method and terminal equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311705028.0A Division CN117729419A (en) 2022-07-15 2022-07-15 Terminal shooting method and terminal equipment

Publications (2)

Publication Number Publication Date
CN116095464A (en) 2023-05-09
CN116095464B CN116095464B (en) 2023-10-31

Family

ID=86187473

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210829336.3A Active CN116095464B (en) 2022-07-15 2022-07-15 Terminal shooting method and terminal equipment
CN202311705028.0A Pending CN117729419A (en) 2022-07-15 2022-07-15 Terminal shooting method and terminal equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311705028.0A Pending CN117729419A (en) 2022-07-15 2022-07-15 Terminal shooting method and terminal equipment

Country Status (1)

Country Link
CN (2) CN116095464B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210014413A1 (en) * 2018-03-15 2021-01-14 Vivo Mobile Communication Co.,Ltd. Photographing method and mobile terminal
CN112511751A (en) * 2020-12-04 2021-03-16 维沃移动通信(杭州)有限公司 Shooting method and device, electronic equipment and readable storage medium
CN112954196A (en) * 2021-01-27 2021-06-11 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN112954218A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
CN113542581A (en) * 2020-04-22 2021-10-22 华为技术有限公司 View finding method of multi-channel video, graphical user interface and electronic equipment

Also Published As

Publication number Publication date
CN117729419A (en) 2024-03-19
CN116095464B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN110072070B (en) Multi-channel video recording method, equipment and medium
US11785329B2 (en) Camera switching method for terminal, and terminal
WO2021093793A1 (en) Capturing method and electronic device
US11669242B2 (en) Screenshot method and electronic device
US20230046708A1 (en) Application Interface Interaction Method, Electronic Device, and Computer-Readable Storage Medium
EP4064684A1 (en) Method for photography in long-focal-length scenario, and terminal
CN110602315B (en) Electronic device with foldable screen, display method and computer-readable storage medium
WO2020029306A1 (en) Image capture method and electronic device
WO2022100610A1 (en) Screen projection method and apparatus, and electronic device and computer-readable storage medium
EP3893495A1 (en) Method for selecting images based on continuous shooting and electronic device
WO2020078273A1 (en) Photographing method, and electronic device
CN110559645B (en) Application operation method and electronic equipment
CN113824878A (en) Shooting control method based on foldable screen and electronic equipment
CN113596319A (en) Picture-in-picture based image processing method, apparatus, storage medium, and program product
EP4199499A1 (en) Image capture method, graphical user interface, and electronic device
CN114500901A (en) Double-scene video recording method and device and electronic equipment
CN112449101A (en) Shooting method and electronic equipment
CN116095464B (en) Terminal shooting method and terminal equipment
WO2022062985A1 (en) Method and apparatus for adding special effect in video, and terminal device
CN114115617B (en) Display method applied to electronic equipment and electronic equipment
CN116709018B (en) Zoom bar segmentation method and electronic equipment
WO2023071497A1 (en) Photographing parameter adjusting method, electronic device, and storage medium
US20240045586A1 (en) Method for Enabling Function in Application and Apparatus
CN117714849A (en) Image shooting method and related equipment
CN117750186A (en) Camera function control method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant