CN114745508B - Shooting method, terminal equipment and storage medium - Google Patents
- Publication number
- CN114745508B (application CN202210660829A)
- Authority
- CN
- China
- Prior art keywords
- picture
- camera
- shooting
- instruction
- photographing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Abstract
Embodiments of the present application provide a shooting method, a terminal device, and a storage medium. The method comprises: receiving a first shooting instruction, shooting a first picture, and displaying the first picture in a display interface; receiving a second shooting instruction, and shooting a second picture; and merging the first picture and the second picture into a third picture according to the relative positions of the first picture and the second picture. In the embodiments of the present application, when a photo containing multiple pictures is to be taken, for example in a multi-shot photographing mode, each picture is shot independently, which makes it easier for the user to find the fit points between different pictures and improves the user experience.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a shooting method, a terminal device, and a storage medium.
Background
To improve user experience, terminal devices such as mobile phones and tablet computers are equipped with multiple cameras and provide a multi-camera mode. In the multi-shot mode, the terminal device can collect multiple pictures through multiple cameras and combine the collected pictures into one photo. Based on this feature of the multi-shot mode, the user can take creative photos. For example, in a multi-shot mode, a river picture is acquired by one camera and the user's head portrait is acquired by another camera; by aiming the user's mouth at the source of the river, a "mouth like a cascading river" picture (from the Chinese idiom "kou ruo xuan he") is generated.
However, in practical applications, the user cannot accurately find the fit point between the pictures acquired by different cameras (for example, in the "mouth like a cascading river" picture, the user's mouth in the head-portrait picture cannot be aligned with the source of the river in the river picture), so the shooting effect of such creative photos is poor.
Disclosure of Invention
The present application provides a shooting method, a terminal device, and a storage medium, which help solve the prior-art problem that, in a multi-shot mode, the shooting effect of creative photos is poor because the user cannot accurately find the fit points between the pictures acquired by different cameras.
In a first aspect, an embodiment of the present application provides a photographing method, including:
receiving a first shooting instruction, shooting a first picture, and displaying the first picture in a display interface;
receiving a second shooting instruction, and shooting a second picture;
and merging the first picture and the second picture into a third picture according to the relative positions of the first picture and the second picture.
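For illustration only, a minimal Kotlin sketch of this three-step flow follows; the Picture and Camera types and the display/merge callbacks are hypothetical placeholders introduced here, not structures defined by the patent:

```kotlin
// Hypothetical sketch of the claimed flow; Picture, Camera and the callbacks
// are illustrative placeholders, not APIs defined by the patent.
data class Picture(val pixels: ByteArray)

interface Camera {
    fun takePicture(): Picture
}

class TwoStepCapture(private val first: Camera, private val second: Camera) {
    private var firstPicture: Picture? = null

    // First shooting instruction: shoot the first picture and display it.
    fun onFirstInstruction(display: (Picture) -> Unit) {
        firstPicture = first.takePicture().also(display)
    }

    // Second shooting instruction: shoot the second picture, then merge the
    // two pictures according to their relative positions in the display
    // interface to obtain the third picture.
    fun onSecondInstruction(merge: (Picture, Picture) -> Picture): Picture {
        val f = checkNotNull(firstPicture) { "first picture not taken yet" }
        return merge(f, second.takePicture())
    }
}
```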
In one possible implementation manner, after the receiving the first shooting instruction, shooting a first picture, and displaying the first picture in a display interface, the method further includes:
And receiving a first picture adjustment instruction, and adjusting the first picture in the display interface.
In one possible implementation manner, the receiving the first picture adjustment instruction, adjusting the first picture in the display interface, includes:
and receiving a first picture adjustment instruction, and adjusting the display position, the display angle and/or the picture size of the first picture in the display interface.
In one possible implementation, the first picture and the second picture are pictures taken with different cameras.
In one possible implementation manner, before the receiving the first shooting instruction, shooting a first picture, and displaying the first picture in a display interface, the method further includes:
and starting a multi-shot shooting mode, and respectively displaying a first preview picture acquired by the first camera and a second preview picture acquired by the second camera in a display interface.
In one possible implementation manner, the receiving the first shooting instruction, shooting the first picture includes:
receiving a first shooting instruction;
according to a preset first shooting priority, determining to take the picture by the first camera preferentially;
And shooting the first picture through the first camera.
In one possible implementation manner, the receiving the second shooting instruction, shooting the second picture includes:
and receiving a second shooting instruction, and shooting the second picture through the second camera.
In one possible implementation manner, the receiving the first shooting instruction, shooting the first picture includes:
receiving a first shooting instruction;
according to a preset second shooting priority, determining to take the picture by the second camera preferentially;
and shooting the first picture through the second camera.
In one possible implementation manner, the receiving the second shooting instruction, shooting the second picture includes:
and receiving a second shooting instruction, and shooting the second picture through the first camera.
In one possible implementation, the multi-shot photographing mode includes any one or a combination of the following modes:
a front dual photographing mode, in which the first camera and the second camera are both front cameras, and the pictures acquired by the two cameras do not overlap;
a rear dual photographing mode, in which the first camera and the second camera are both rear cameras, and the pictures acquired by the two cameras do not overlap;
a front-rear dual photographing mode, in which one of the first camera and the second camera is a front camera and the other is a rear camera, and the pictures acquired by the two cameras do not overlap;
a front picture-in-picture photographing mode, in which the first camera and the second camera are both front cameras, and the pictures acquired by the two cameras overlap;
a rear picture-in-picture photographing mode, in which the first camera and the second camera are both rear cameras, and the pictures acquired by the two cameras overlap;
a front-rear picture-in-picture photographing mode, in which one of the first camera and the second camera is a front camera and the other is a rear camera, and the pictures acquired by the two cameras overlap.
In one possible implementation manner, after the receiving the first shooting instruction, shooting a first picture, and displaying the first picture in a display interface, the method further includes:
and receiving a first re-shooting instruction, re-shooting the first picture, and displaying the re-shot first picture in a display interface.
In one possible implementation manner, after the receiving the second photographing instruction and photographing the second picture, the method further includes:
and receiving a second re-shooting instruction, and re-shooting the second picture.
In a second aspect, an embodiment of the present application provides a terminal device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the terminal device to perform the method according to any of the first aspects.
In a third aspect, an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium includes a stored program, where when the program runs, the program controls a device in which the computer readable storage medium is located to execute the method of any one of the first aspects.
In the embodiments of the present application, when a photo containing multiple pictures is to be taken, for example in a multi-shot photographing mode, each picture is shot independently, which makes it easier for the user to find the fit points between different pictures and improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a terminal device according to an embodiment of the present application;
fig. 2A is a schematic diagram of a shooting scene in the front-rear dual photographing mode according to an embodiment of the present application;
fig. 2B is a schematic diagram of a shooting scene in the front-rear picture-in-picture mode according to an embodiment of the present application;
fig. 2C is a schematic diagram of a shooting scene in the rear picture-in-picture mode according to an embodiment of the present application;
fig. 3A is a schematic view of an application scenario in a multi-shot photographing mode according to an embodiment of the present application;
fig. 3B is a schematic view of an application scenario in another multi-shot photographing mode according to an embodiment of the present application;
fig. 3C is a schematic view of an application scenario in another multi-shot photographing mode according to an embodiment of the present application;
fig. 4 is a schematic flow chart of a shooting method according to an embodiment of the present application;
fig. 5 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 6 is a schematic diagram of another application scenario provided in an embodiment of the present application;
fig. 7 is a flowchart of another photographing method according to an embodiment of the present application;
fig. 8 is a schematic diagram of another application scenario provided in an embodiment of the present application;
fig. 9 is a flowchart of another photographing method according to an embodiment of the present application;
fig. 10 is a schematic diagram of another application scenario provided in an embodiment of the present application;
fig. 11 is a flowchart of another photographing method according to an embodiment of the present application;
fig. 12 is a schematic diagram of another application scenario provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solution of the present application, the following detailed description of the embodiments of the present application refers to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Referring to fig. 1, a schematic diagram of a terminal device is provided in an embodiment of the present application. In fig. 1, a mobile phone 100 is taken as an example of the terminal device. Fig. 1 shows a front view and a rear view of the mobile phone 100: two front cameras 111 and 112 are disposed on the front side of the mobile phone 100, and four rear cameras 121, 122, 123 and 124 are disposed on the rear side. With the plurality of configured cameras, multiple shooting modes can be provided for the user, and the user can select the shooting mode that suits the shooting scene, improving the user experience.
It should be understood that the illustration in fig. 1 is only exemplary and should not be taken as limiting the scope of the application. For example, the number and locations of the cameras may differ between mobile phones. In addition, besides a mobile phone, the terminal device according to the embodiments of the present application may be a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable terminal device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted device, a smart car, a smart speaker, a robot, smart glasses, a smart television, or the like.
It should be noted that, in some possible implementations, the terminal device may also be referred to as an electronic device, a User Equipment (UE), or the like, which is not limited by the embodiment of the present application.
In some possible implementations, the shooting modes involved by the terminal device may include a single-pass shooting mode (also referred to herein as a "single shooting mode" for short) and a multi-pass shooting mode (also referred to herein as a "multi-shooting mode" for short).
In the single-shot mode, one camera is used for shooting, and the single picture stream acquired by that camera is displayed and/or encoded; examples include the front single-shot mode and the rear single-shot mode. In the multi-shot mode, two or more cameras are used for shooting, and the two or more picture streams they acquire are displayed and/or encoded; examples include the front dual mode, the rear dual mode, the front-rear dual mode, the front picture-in-picture mode, the rear picture-in-picture mode, and the front-rear picture-in-picture mode.
Specifically, in the front single-shot mode, a front camera is used for shooting; in the rear single-shot mode, a rear camera is used for shooting. In the front dual mode, two front cameras are used for shooting, and the pictures they acquire do not overlap; in the rear dual mode, two rear cameras are used for shooting, and the pictures they acquire do not overlap; in the front-rear dual mode, a front camera and a rear camera are used for shooting, and the pictures they acquire do not overlap. In the front picture-in-picture mode, two front cameras are used for shooting and the pictures they acquire overlap; specifically, the picture acquired by one front camera is located inside the picture acquired by the other. In the rear picture-in-picture mode, two rear cameras are used for shooting and the pictures they acquire overlap; specifically, the picture acquired by one rear camera is located inside the picture acquired by the other. In the front-rear picture-in-picture mode, a front camera and a rear camera are used for shooting and the pictures they acquire overlap; specifically, the picture acquired by the front camera is located inside the picture acquired by the rear camera, or vice versa.
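For clarity, the modes enumerated above can be summarized in a small data model; this encoding is an assumption introduced here for illustration, not something the patent prescribes:

```kotlin
// Illustrative encoding of the six multi-shot modes described above; the
// names and structure are assumptions, not definitions from the patent.
enum class Facing { FRONT, REAR }

data class MultiShotMode(
    val firstCamera: Facing,
    val secondCamera: Facing,
    val pictureInPicture: Boolean // true: the two pictures overlap (PiP)
)

val FRONT_DUAL      = MultiShotMode(Facing.FRONT, Facing.FRONT, pictureInPicture = false)
val REAR_DUAL       = MultiShotMode(Facing.REAR,  Facing.REAR,  pictureInPicture = false)
val FRONT_REAR_DUAL = MultiShotMode(Facing.FRONT, Facing.REAR,  pictureInPicture = false)
val FRONT_PIP       = MultiShotMode(Facing.FRONT, Facing.FRONT, pictureInPicture = true)
val REAR_PIP        = MultiShotMode(Facing.REAR,  Facing.REAR,  pictureInPicture = true)
val FRONT_REAR_PIP  = MultiShotMode(Facing.FRONT, Facing.REAR,  pictureInPicture = true)
```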
Referring to fig. 2A, a schematic view of a shooting scene in the front-rear dual photographing mode is provided in an embodiment of the present application. In the front-rear dual mode, a front camera collects the foreground picture and a rear camera collects the background picture, and the foreground picture and the background picture do not overlap in the display interface.
Referring to fig. 2B, a schematic view of a shooting scene in the front-rear picture-in-picture mode is provided in an embodiment of the present application. In the front-rear picture-in-picture mode, a front camera collects the foreground picture and a rear camera collects the background picture, and the foreground picture is placed within the background picture.
Referring to fig. 2C, a schematic view of a shooting scene in the rear picture-in-picture mode is provided in an embodiment of the present application. In the rear picture-in-picture mode, one rear camera collects a distant-view picture and another rear camera collects a close-view picture, and the close-view picture is placed within the distant-view picture.
It should be noted that the above shooting modes are only some possible implementations listed in the embodiments of the present application, and those skilled in the art may configure other shooting modes according to actual needs, and the embodiments of the present application are not limited thereto.
It should be noted that the single-shot mode referred to in this application includes a single-shot photographing mode and a single-shot video mode, and the multi-shot mode includes a multi-shot photographing mode and a multi-shot video mode. Specifically, the front dual mode includes a front dual video mode and a front dual photographing mode; the rear dual mode includes a rear dual video mode and a rear dual photographing mode; the front-rear dual mode includes a front-rear dual video mode and a front-rear dual photographing mode; the front picture-in-picture mode includes a front picture-in-picture video mode and a front picture-in-picture photographing mode; the rear picture-in-picture mode includes a rear picture-in-picture video mode and a rear picture-in-picture photographing mode; and the front-rear picture-in-picture mode includes a front-rear picture-in-picture video mode and a front-rear picture-in-picture photographing mode. Hereinafter, the multi-shot photographing mode is described with emphasis.
Referring to fig. 3A-3C, schematic views of an application scenario in the multi-shot photographing mode are provided in an embodiment of the present application. Here the multi-shot photographing mode is the front-rear picture-in-picture photographing mode. Specifically, the foreground picture is collected by a front camera and shows the user's head portrait; the background picture is collected by a rear camera and shows a river. It can be appreciated that, by adjusting the shooting positions and angles of the front camera and the rear camera, the user's mouth can be aligned with the source of the river, and when the user taps to shoot, a "mouth like a cascading river" picture is generated, as shown in fig. 3A.
However, in practical applications, because the user cannot accurately find the fit points between the pictures acquired by different cameras, the shooting effect of the creative photo is poor, resulting in misaligned pictures such as those shown in fig. 3B and 3C.
To address this problem, an embodiment of the present application provides a shooting method in which the multiple pictures of a multi-shot photographing mode are shot independently, making it easier for the user to find the fit points between different pictures and improving the user experience.
Referring to fig. 4, a flowchart of a photographing method according to an embodiment of the present application is shown. The method is applicable to the terminal device shown in fig. 1, and as shown in fig. 4, mainly comprises the following steps.
Step S401: and receiving a first shooting instruction, shooting a first picture, and displaying the first picture in a display interface.
In a specific implementation, the user may first start the multi-shot photographing mode of the terminal device, and a first preview picture acquired by the first camera and a second preview picture acquired by the second camera are respectively displayed in the display interface. It can be understood that the first preview picture is acquired by the first camera in real time and displayed in real time in the display interface (display screen) of the terminal device; likewise, the second preview picture is acquired by the second camera in real time and displayed in real time in the display interface. Of course, in practical applications, the terminal device may be configured to enter the multi-shot photographing mode directly after power-on, without the user having to turn it on separately.
It should be noted that, the first camera according to the embodiment of the present application may be one camera or a group of cameras (for example, the first preview image is acquired by two rear cameras); the second camera according to the embodiment of the present application may be one camera or a group of cameras (for example, the second preview image is acquired by two front cameras), which is not limited in this embodiment of the present application.
In the related art, in the multi-shot photographing mode, after the user triggers a shooting instruction, the multiple cameras shoot simultaneously, and the terminal device directly synthesizes the final picture (i.e., the third picture described below), so the user cannot accurately find the fit points between the pictures acquired by different cameras.
In the embodiment of the application, after the user triggers the first shooting instruction, the terminal equipment shoots a first picture, wherein the first picture is a picture shot by one camera or one group of cameras in a multi-shot shooting mode. That is, when the user triggers the photographing instruction, one picture in the multi-shot photographing mode is photographed first.
In one possible implementation, the user may set a photographing priority for indicating which camera is preferred for photographing before triggering the first photographing instruction. The user sets a first shooting priority before triggering the first shooting instruction, where the first shooting priority is used to instruct to take a picture with the first camera preferentially, and when the terminal device receives the first shooting instruction, a first picture is taken by the first camera. Or, before triggering the first shooting instruction, the user sets a second shooting priority, where the second shooting priority is used to instruct to take the picture with the second camera preferentially, and when the terminal device receives the first shooting instruction, the terminal device takes the first picture with the second camera.
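A minimal sketch of this priority handling follows, reusing the hypothetical Camera placeholder from the sketch above; the flag and method names are assumptions, not patent terminology:

```kotlin
// Hypothetical priority handling: decides which camera answers the first
// shooting instruction and which answers the second.
class ShootingPriority(private val firstCameraFirst: Boolean) {
    /** Camera used for the first shooting instruction. */
    fun cameraForFirstShot(first: Camera, second: Camera): Camera =
        if (firstCameraFirst) first else second

    /** Camera used for the second shooting instruction (the other one). */
    fun cameraForSecondShot(first: Camera, second: Camera): Camera =
        if (firstCameraFirst) second else first
}
```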
Of course, in some possible implementations, a default shooting priority may be configured for the multi-shot photographing mode, without requiring a separate setting by the user; the embodiment of the present application is not limited in this respect. For example, after the user turns on the multi-shot photographing mode, the first camera is used preferentially for shooting by default. For ease of understanding, the following description takes the front-rear picture-in-picture photographing mode as an example.
Referring to fig. 5, a schematic view of an application scenario is provided in an embodiment of the present application. In this scenario, the user takes a photo in the front-rear picture-in-picture photographing mode. Specifically, the user may first start the front-rear picture-in-picture photographing mode, in which the terminal device collects a foreground picture through a front camera and a background picture through a rear camera, and displays a foreground preview picture (the user's head portrait) and a background preview picture (a river) in the display interface, the foreground preview picture being located inside the background preview picture, as shown in 5A in fig. 5.
A rear priority switch is arranged in the display interface, and a user can set shooting priority through the rear priority switch. Specifically, when the rear priority switch is turned off, the front camera is preferentially adopted for shooting; when the rear priority switch is turned on, the rear camera is preferentially adopted for shooting.
In the application scenario, since the rear priority switch is in the off state, the front camera is preferentially adopted for shooting, that is, when the terminal device receives the first shooting instruction, the front camera is used for shooting the first picture.
Specifically, before triggering the first shooting instruction, the user can adjust the shooting picture of the front camera by rotating or moving the terminal equipment so as to adjust the head portrait of the user to a proper angle and position. It should be noted that, since only the first image is captured after the first capturing instruction is triggered, there is no need to consider the influence on the captured image of the rear camera when the captured image of the front camera is adjusted. After the user adjusts the shooting picture of the front camera to a proper position and angle, clicking the shooting control in the state shown as 5B in fig. 5 triggers the first shooting instruction to obtain a first picture, and the first picture is displayed in the display interface, as shown as 5C in fig. 5.
Step S402: and receiving a second shooting instruction and shooting a second picture.
In a specific implementation, after the terminal device completes shooting of the first picture, the user can trigger a second shooting instruction, so as to shoot the second picture. The second picture and the first picture are pictures shot by the terminal equipment by adopting different cameras.
Specifically, when the first picture is a picture taken by the first camera, the second picture is a picture taken by the second camera; when the first picture is a picture shot by the second camera, the second picture is a picture shot by the first camera. The user sets a first shooting priority before triggering the first shooting instruction, where the first shooting priority is used to instruct to take a picture with the first camera preferentially, and when the terminal device receives the first shooting instruction, a first picture is taken by the first camera; and after receiving the second shooting instruction, the terminal equipment shoots a second picture through the second camera. Or, before triggering the first shooting instruction, the user sets a second shooting priority, where the second shooting priority is used to instruct to take a picture with the second camera preferentially, and when the terminal device receives the first shooting instruction, the terminal device takes a first picture with the second camera; and after receiving the second shooting instruction, the terminal equipment shoots a second picture through the first camera.
It can be understood that, after the terminal device completes the shooting of the first picture, the shooting picture corresponding to the first picture is already determined and will not change along with the change of the position or angle of the terminal device. At this time, the user only needs to adjust the shooting picture corresponding to the second picture to a proper position and angle through rotating or moving the terminal equipment, and then triggers the second shooting instruction to finish shooting of the second picture. In addition, because the first picture is displayed in real time in the process of shooting the second picture, the shooting position and the shooting angle of the second picture are convenient for a user to determine, namely, the user can conveniently determine the fit point of the first picture and the second picture.
For ease of understanding, the following description also takes the front-rear picture-in-picture photographing mode shown in fig. 5 as an example.
As shown in fig. 5, when the photographing of the first picture is completed, the first picture is displayed in the display interface, as shown by 5C in fig. 5. At this time, the photographed picture corresponding to the first picture has been determined so as not to change with the change of the position or angle of the terminal device. The user can adjust the shooting picture of the rear camera by rotating or moving the terminal equipment, namely, the river is adjusted to a proper position and angle. After the user adjusts the shooting picture of the rear camera to a proper position and angle, in a state shown as 5D in fig. 5, clicking the "shooting control" triggers a second shooting instruction to obtain a second picture, as shown as 5E in fig. 5.
Step S403: and merging the first picture and the second picture into a third picture according to the relative positions of the first picture and the second picture.
In the embodiment of the present application, the relative position may be understood as the relative position of the first picture and the second picture in the display interface. After the first picture and the second picture are obtained, they are rendered and merged according to their relative positions in the display interface into the third picture. As shown in fig. 5, the first picture and the second picture are combined to obtain the final "mouth like a cascading river" picture, as shown in 5F in fig. 5.
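As a sketch of what this merge could look like, assuming Android's Bitmap and Canvas APIs and taking the on-screen offsets of the two pictures as inputs (the parameterization is an assumption, since the patent only requires merging "according to the relative positions"):

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

// Sketch of the merge step: draw both pictures onto one bitmap at the offsets
// they occupied in the display interface. Offset parameters are assumptions.
fun mergeAtRelativePositions(
    first: Bitmap, firstLeft: Float, firstTop: Float,
    second: Bitmap, secondLeft: Float, secondTop: Float,
    outWidth: Int, outHeight: Int
): Bitmap {
    val third = Bitmap.createBitmap(outWidth, outHeight, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(third)
    // The second picture (e.g. the background river) is drawn first ...
    canvas.drawBitmap(second, secondLeft, secondTop, null)
    // ... then the first picture (e.g. the user's head portrait) on top,
    // as in the picture-in-picture modes.
    canvas.drawBitmap(first, firstLeft, firstTop, null)
    return third
}
```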
It should be noted that, in some possible implementations, the merging of the first picture and the second picture does not need to be triggered by the user, that is, after the user triggers the second shooting instruction, the terminal device automatically generates the third picture after shooting the second picture, so as to reduce user operations and improve user experience.
Of course, in some possible implementations, after the first picture and the second picture are obtained, the user may trigger a picture merging instruction, and when the terminal device receives the picture merging instruction, the first picture and the second picture are merged into a third picture.
In the embodiments of the present application, when a photo containing multiple pictures is to be taken, for example in a multi-shot photographing mode, each picture is shot independently, which makes it easier for the user to find the fit points between different pictures and improves the user experience.
It should be noted that the above embodiments take the multi-shot photographing mode as an example. However, in some possible implementations, the first picture and the second picture may also be pictures taken by the same camera or the same group of cameras, which is not limited by the embodiments of the present application. For example, the terminal device shoots a first picture through a rear camera and displays the first picture in the display interface; then the user clicks the shooting control again to trigger a second shooting instruction, and the terminal device shoots a second picture through the same rear camera, after which the first picture and the second picture are merged into a third picture.
In one possible implementation, the shooting instruction triggered by the user further includes camera information, and according to the camera information, it can be determined which camera is used for shooting. For example, the first shooting instruction includes first camera information, and after receiving the first shooting instruction, the terminal device shoots a first picture through the first camera; and the second shooting instruction comprises second camera information, and after receiving the second shooting instruction, the terminal equipment shoots a second picture through the second camera. Or the first shooting instruction comprises the second camera information, and after receiving the first shooting instruction, the terminal equipment shoots a first picture through the second camera; and after the terminal equipment receives the second shooting instruction, shooting a second picture through the first camera.
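A minimal sketch of such instruction-to-camera dispatch follows, reusing the hypothetical Camera and Picture placeholders from the earlier sketch; the ShootInstruction type and the camera ids are assumptions for illustration:

```kotlin
// Hypothetical dispatch when the shooting instruction itself carries camera
// information (e.g. the foreground/background shooting controls below).
data class ShootInstruction(val cameraId: String)

class InstructionDispatcher(private val cameras: Map<String, Camera>) {
    fun handle(instruction: ShootInstruction): Picture =
        cameras[instruction.cameraId]?.takePicture()
            ?: error("unknown camera: ${instruction.cameraId}")
}
```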
Referring to fig. 6, a schematic view of another application scenario is provided in an embodiment of the present application. In this scenario, the user takes a photo in the front-rear picture-in-picture photographing mode. After the front-rear picture-in-picture photographing mode is started, a foreground preview picture acquired by the front camera and a background preview picture acquired by the rear camera are displayed in the display interface. A foreground shooting control is provided inside the foreground preview picture, and a background shooting control is provided inside the background preview picture.
In the state shown in 6A in fig. 6, when the user clicks the "foreground shooting control", the terminal device receives a foreground shooting instruction and shoots the foreground picture, as shown in 6B in fig. 6; in the state shown in 6B in fig. 6, when the user clicks the "background shooting control", the terminal device receives a background shooting instruction and shoots the background picture, as shown in 6C in fig. 6. Of course, the user may instead click the "background shooting control" first to shoot the background picture and then click the "foreground shooting control" to shoot the foreground picture; the embodiment of the present application does not limit the order.
That is, in the embodiment of the present application, a correspondence between a shooting instruction and a camera is established. It will be appreciated that in this implementation, the photographing priority need not be set, but rather it may be determined which camera to take for photographing based on a photographing instruction triggered by the user.
Referring to fig. 7, a flowchart of another photographing method according to an embodiment of the present application is shown. As shown in fig. 7, the method further includes the following steps between step S401 and step S402 of the method shown in fig. 4.
Step S701: and receiving a first picture adjustment instruction, and adjusting the first picture in the display interface.
In the embodiment of the application, after the shooting of the first picture is completed, in order to facilitate better fit between the first picture and the subsequent second picture, a user can adjust the first picture in the display interface according to actual needs, and specifically, the display position, the display angle, the display size and the like of the first picture can be adjusted.
Referring to fig. 8, a schematic view of another application scenario is provided in an embodiment of the present application. In this scenario, the user takes a photo in the front-rear picture-in-picture photographing mode. When the shooting of the first picture is completed, the first picture (the user's head portrait) is displayed in the display interface, as shown in 8A in fig. 8. It can be appreciated that, with the orientation of the user's head portrait shown in 8A in fig. 8, it would be difficult to create the "mouth like a cascading river" effect in the subsequent steps. At this time, the user may adjust the first picture. Specifically, the first picture is first rotated counterclockwise in the arrow direction shown in 8A in fig. 8 to the angle shown in 8B in fig. 8; then the first picture is dragged in the arrow direction shown in 8B in fig. 8 to the position shown in 8C in fig. 8. The position and angle of the first picture are thus adjusted, so that the first picture and the second picture can be conveniently combined into the "mouth like a cascading river" effect in the subsequent steps.
It should be noted that, in addition to the adjustment of the position and angle of the first picture, other information may be adjusted for the first picture. For example, the size of the first picture is adjusted (the first picture is enlarged or reduced), the resolution of the first picture is adjusted, a filter is added to the first picture, and the like, which is not limited in the embodiment of the present application.
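As an illustration only, the position, angle and size adjustments described above could be accumulated with Android's Matrix class; this sketch is an assumption about one possible implementation, not part of the patent:

```kotlin
import android.graphics.Matrix

// Sketch of the first-picture adjustment: display position, display angle and
// picture size accumulate into one transform that is applied when the picture
// is rendered. The class and method names are assumptions for illustration.
class PictureAdjustment {
    val matrix = Matrix()

    fun adjustAngle(degrees: Float, pivotX: Float, pivotY: Float) {
        matrix.postRotate(degrees, pivotX, pivotY)
    }

    fun adjustPosition(dx: Float, dy: Float) {
        matrix.postTranslate(dx, dy)
    }

    fun adjustSize(factor: Float, pivotX: Float, pivotY: Float) {
        matrix.postScale(factor, factor, pivotX, pivotY)
    }
}
```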
Other contents of the embodiment of the present application may be referred to the description of the embodiment shown in fig. 4, and for brevity, the description is omitted herein.
Referring to fig. 9, a flowchart of another photographing method according to an embodiment of the present application is shown. As shown in fig. 9, the method further comprises the following steps on the basis of the method shown in fig. 4.
Step S901: and receiving a first re-shooting instruction, re-shooting the first picture, and displaying the re-shot first picture in the display interface.
Specifically, after the shooting of the first picture is completed in step S401, if the user is not satisfied with the shooting effect of the first picture, the user may trigger a first re-shooting instruction to re-shoot the first picture, and the re-shot first picture is displayed in the display interface. It can be appreciated that, after the first picture is re-shot, the re-shot first picture is used to merge the third picture in the subsequent steps.
Step S902: and receiving a second re-shooting instruction, and re-shooting a second picture.
Specifically, after the shooting of the second picture is completed in step S402, if the user is not satisfied with the shooting effect of the second picture, the user may trigger a second re-shooting instruction to re-shoot the second picture. It can be appreciated that, after the second picture is re-shot, the re-shot second picture is used to merge the third picture in the subsequent steps.
For ease of understanding, the re-shooting process is described below in connection with a specific application scenario.
Referring to fig. 10, a schematic view of another application scenario is provided in an embodiment of the present application. In this scenario, the user takes a photo in the front-rear picture-in-picture photographing mode. When the shooting of the first picture is completed, the first picture (the user's head portrait) is displayed in the display interface, as shown in 10A in fig. 10. At this time, if the user is not satisfied with the first picture, a return instruction may be triggered by clicking the "return control"; after receiving the return instruction, the terminal device returns to the previous step (the state before the first picture was taken), as shown in 10B in fig. 10. In the state shown in 10B in fig. 10, after the user adjusts the foreground picture to a suitable position and angle, clicking the "shooting control" triggers a first re-shooting instruction; the first picture is re-shot and displayed in the display interface, as shown in 10C in fig. 10. In the state shown in 10C in fig. 10, the user clicks the "shooting control" to trigger the second shooting instruction, and when the terminal device receives the second shooting instruction, the second picture is shot, as shown in 10D in fig. 10. At this time, if the user is not satisfied with the second picture, a return instruction may again be triggered by clicking the "return control"; after receiving the return instruction, the terminal device returns to the previous step (the state before the second picture was taken), as shown in 10E in fig. 10. In the state shown in 10E in fig. 10, after the user adjusts the background picture to a suitable position and angle, clicking the "shooting control" triggers a second re-shooting instruction to re-shoot the second picture, as shown in 10F in fig. 10.
In the embodiment of the present application, a re-shooting function for the first picture and/or the second picture is added, which improves the flexibility of the shooting process and facilitates user operation.
It should be noted that, for brevity, reference may be made to the description of the embodiment shown in fig. 4 for other matters related to the embodiment of the present application, and the description thereof will not be repeated here.
In order to facilitate understanding, the shooting method provided by the embodiment of the application is described below with reference to a specific implementation manner.
Referring to fig. 11, a flowchart of another photographing method according to an embodiment of the present application is shown. The method is applicable to the terminal device shown in fig. 1, and as shown in fig. 11, it mainly includes the following steps.
Step S1101: and starting a front and back picture-in-picture photographing mode.
In the embodiment of the present application, after the user opens the photographing application program of the terminal device (hereinafter referred to simply as the application program), the front-rear picture-in-picture photographing mode can be started so as to shoot in that mode.
Step S1102: the application program opens the front camera and the rear camera simultaneously.
Because both a front camera and a rear camera are needed in the front-rear picture-in-picture photographing mode, after the mode is started the application program opens the front camera and the rear camera simultaneously. Specifically, the application program may issue a request to open the front camera and the rear camera to the hardware abstraction layer (HAL); the HAL issues the request to the kernel, and after kernel processing the front camera and the rear camera are opened.
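For illustration, a plausible Camera2-based realization of opening both cameras is sketched below; the patent itself only describes requests flowing through the HAL to the kernel, so the API choice and the id parameters are assumptions, concurrent-open support depends on the device, and runtime camera-permission handling is omitted:

```kotlin
import android.content.Context
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.os.Handler

// Sketch of opening the front and rear cameras together with the Camera2 API.
fun openBothCameras(context: Context, frontId: String, rearId: String,
                    handler: Handler, onOpened: (CameraDevice) -> Unit) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    for (id in listOf(frontId, rearId)) {
        manager.openCamera(id, object : CameraDevice.StateCallback() {
            override fun onOpened(camera: CameraDevice) = onOpened(camera)
            override fun onDisconnected(camera: CameraDevice) = camera.close()
            override fun onError(camera: CameraDevice, error: Int) = camera.close()
        }, handler)
    }
}
```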
Step S1103: and respectively creating a preview stream and a photographing stream of the front camera, and a preview stream and a photographing stream of the rear camera.
Specifically, after the front camera and the rear camera are opened, each of them outputs two streams: a preview stream and a photographing stream. The preview stream is sent to the display interface for display, and the photographing stream is used to generate the photo file.
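A hedged Camera2 sketch of the two streams per camera follows; the ImageReader-backed photographing stream and the chosen JPEG size are assumptions for illustration, not patent text:

```kotlin
import android.graphics.ImageFormat
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.media.ImageReader
import android.os.Handler
import android.view.Surface

// Sketch of the two streams per camera: the preview stream feeds the display
// surface, and the photographing stream is backed by an ImageReader whose
// JPEG frames become the photo file. The 4000x3000 size is illustrative.
fun createStreams(camera: CameraDevice, previewSurface: Surface, handler: Handler,
                  onReady: (CameraCaptureSession) -> Unit): ImageReader {
    val photoReader = ImageReader.newInstance(4000, 3000, ImageFormat.JPEG, 2)
    camera.createCaptureSession(
        listOf(previewSurface, photoReader.surface),
        object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) = onReady(session)
            override fun onConfigureFailed(session: CameraCaptureSession) = camera.close()
        },
        handler
    )
    return photoReader
}
```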
Step S1104: and receiving a first shooting instruction.
Specifically, the user may click on the "capture control" to trigger the first capture instruction.
Step S1105: and judging whether the rear priority switch is opened or not.
After receiving the first shooting instruction, the terminal device firstly judges whether the rear-mounted priority switch is turned on, and if the rear-mounted priority switch is turned on, the step S11061 is carried out; otherwise, the process advances to step S11062.
Step S11061: and the application program issues a photographing request of the rear camera.
Specifically, if the rear priority switch is turned on, the application program issues a photographing request of the rear camera to the HAL, and the HAL issues the request to the kernel.
Step S11071: and shooting a first picture through the rear camera.
Specifically, the photographing request of the rear camera is processed by the kernel, a photographing frame is captured from the photographing stream, the first picture is generated, and the first picture is returned to the application program and displayed in the display interface.
In addition, after the shooting of the first picture is completed, the rear camera may stop its streams (the preview stream and the photographing stream) and be closed.
Step S11081: And judging whether a return instruction is received.
Specifically, after the shooting of the first picture is completed, the terminal device judges whether a return instruction is received; if a return instruction is received, the flow returns to step S11071 to shoot the first picture again; if no return instruction is received, the flow proceeds to step S11091. It should be noted that, since the rear camera has been stopped and closed at this point, if a return instruction is received, the rear camera needs to be opened again and its streams (preview stream and photographing stream) restarted.
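Under the same Camera2 assumption as the earlier sketches, the shoot-then-stop behaviour of steps S11071-S11081 might look like the following; the function name and the omitted error handling are illustrative only:

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CaptureRequest
import android.hardware.camera2.TotalCaptureResult
import android.os.Handler
import android.view.Surface

// Sketch of the photographing step for one camera: submit a still-capture
// request on the photographing stream, then stop the preview stream and close
// the device once the frame is in. A retake must therefore reopen the camera.
fun shootAndShutDown(camera: CameraDevice, session: CameraCaptureSession,
                     photoSurface: Surface, handler: Handler) {
    val request = camera.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
        .apply { addTarget(photoSurface) }
        .build()
    session.capture(request, object : CameraCaptureSession.CaptureCallback() {
        override fun onCaptureCompleted(session: CameraCaptureSession,
                                        request: CaptureRequest,
                                        result: TotalCaptureResult) {
            session.stopRepeating() // stop the preview stream
            camera.close()          // close the camera after this shot
        }
    }, handler)
}
```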
Step S11091: and receiving a second shooting instruction.
Specifically, the user may click on the "shooting control" again to trigger the second shooting instruction.
Step S11101: and shooting a second picture through the front camera.
Since the rear priority switch is turned on in this branch of the flow, the first picture is taken by the rear camera in step S11071; therefore, in this step, after the second shooting instruction is received, the second picture is taken by the front camera.
In addition, after the shooting of the second picture is completed, the front camera may stop its streams (the preview stream and the photographing stream) and be closed.
Step S11111: And judging whether a return instruction is received.
Specifically, after the shooting of the second picture is completed, the terminal device judges whether a return instruction is received; if a return instruction is received, the flow returns to step S11091 to shoot the second picture again; if no return instruction is received, the flow proceeds to step S1112. It should be noted that, since the front camera has been stopped and closed at this point, if a return instruction is received, the front camera needs to be opened again and its streams (preview stream and photographing stream) restarted.
Step S11062: and the application program transmits a photographing request of the front-end camera.
Specifically, if the rear priority switch is turned off, the application program issues a photographing request of the front camera to the HAL, and the HAL issues the photographing request to the kernel.
Step S11072: and shooting a first picture through the front camera.
Specifically, the photographing request of the front camera is processed by the kernel, a photographing frame is captured from the photographing stream, the first picture is generated, and the first picture is returned to the application program and displayed in the display interface.
In addition, after the shooting of the first picture is completed, the front camera may stop its streams (the preview stream and the photographing stream) and be closed.
Step S11082: And judging whether a return instruction is received.
Specifically, after the shooting of the first picture is completed, the terminal device judges whether a return instruction is received; if a return instruction is received, the flow returns to step S11072 to shoot the first picture again; if no return instruction is received, the flow proceeds to step S11092. It should be noted that, since the front camera has been stopped and closed at this point, if a return instruction is received, the front camera needs to be opened again and its streams (preview stream and photographing stream) restarted.
Step S11092: and receiving a second shooting instruction.
Specifically, the user may click on the "shooting control" again to trigger the second shooting instruction.
Step S11102: and shooting a second picture through the rear camera.
Since the rear priority switch is turned off in this branch of the flow, the first picture is taken by the front camera in step S11072; therefore, in this step, after the second shooting instruction is received, the second picture is taken by the rear camera.
In addition, after the shooting of the second picture is completed, the rear camera may stop its streams (the preview stream and the photographing stream) and be closed.
Step S11112: And judging whether a return instruction is received.
Specifically, after the shooting of the second picture is completed, the terminal device judges whether a return instruction is received; if a return instruction is received, the flow returns to step S11092 to shoot the second picture again; if no return instruction is received, the flow proceeds to step S1112. It should be noted that, since the rear camera has been stopped and closed at this point, if a return instruction is received, the rear camera needs to be opened again and its streams (preview stream and photographing stream) restarted.
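To summarize the two symmetric branches, the following is a hypothetical state machine for the shoot/return flow of fig. 11; the states and the reopenCamera callback are assumptions made here for illustration, standing in for the open/stream logic sketched earlier:

```kotlin
// Hypothetical state machine for the retake flow of fig. 11: each shot stops
// and closes its camera, and a return instruction steps the flow back and
// reopens that camera before the picture can be shot again.
sealed class CaptureState {
    object AwaitingFirstShot : CaptureState()
    object AwaitingSecondShot : CaptureState()
    object Done : CaptureState()
}

class RetakeFlow(private val reopenCamera: (String) -> Unit) {
    var state: CaptureState = CaptureState.AwaitingFirstShot
        private set

    fun onShotTaken() {
        state = when (state) {
            CaptureState.AwaitingFirstShot -> CaptureState.AwaitingSecondShot
            else -> CaptureState.Done
        }
    }

    fun onReturnInstruction(cameraId: String) {
        reopenCamera(cameraId) // the camera was closed after its shot
        state = when (state) {
            CaptureState.Done -> CaptureState.AwaitingSecondShot
            else -> CaptureState.AwaitingFirstShot
        }
    }
}
```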
Step S1112: merging the first picture and the second picture into a third picture.
Specifically, after the first picture and the second picture have been obtained, rendering and merging processing may be performed on them to obtain the third picture.
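For illustration only, one possible implementation of this merging is to composite the two pictures at their relative positions, as in the following Pillow-based sketch. The side-by-side layout is an assumption; the embodiment does not prescribe a specific rendering pipeline, and the file names are placeholders.

```python
# A minimal merging sketch for step S1112, assuming the third picture is a
# side-by-side composite of the first and second pictures.
from PIL import Image

def merge_pictures(first_path: str, second_path: str, out_path: str) -> None:
    first = Image.open(first_path)
    second = Image.open(second_path)

    # Lay the pictures out left/right, mirroring their relative positions
    # in the display interface.
    canvas = Image.new(
        "RGB",
        (first.width + second.width, max(first.height, second.height)),
        "white",
    )
    canvas.paste(first, (0, 0))
    canvas.paste(second, (first.width, 0))
    canvas.save(out_path)  # the third picture

# Example usage (assumes the two shot files exist on disk):
merge_pictures("first.jpg", "second.jpg", "third.jpg")
```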
Referring to fig. 12, another application scenario is schematically provided in an embodiment of the present application. In this scenario, the terminal device shoots in a double-shot photographing mode (which may be a rear-rear double-shot mode, a front-front double-shot mode, or a front-rear double-shot mode). Specifically, the first picture taken in the double-shot photographing mode shows the user forming half of a heart shape with the left hand, and the second picture shows the user forming the other half of the heart shape with the right hand. The first picture and the second picture are merged to generate a complete "heart gesture" ("bixin") picture.
It should be noted that, in addition to the example photographs listed above (such as the "bixin" heart-gesture photograph), the user may shoot other creative interactive scenes using the above method, which is not limited in the embodiments of the present application.
Corresponding to the above embodiments, the present application further provides a terminal device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the terminal device to perform some or all of the steps in the above method embodiments.
Referring to fig. 13, a schematic structural diagram of a terminal device according to an embodiment of the present application is provided. As shown in fig. 13, the terminal device 1300 may include a processor 1310, an external memory interface 1320, an internal memory 1321, a universal serial bus (universal serial bus, USB) interface 1330, a charge management module 1340, a power management module 1341, a battery 1342, an antenna 1, an antenna 2, a mobile communication module 1350, a wireless communication module 1360, an audio module 1370, a speaker 1370A, a receiver 1370B, a microphone 1370C, an earphone interface 1370D, a sensor module 1380, keys 1390, a motor 1391, an indicator 1392, a camera 1393, a display screen 1394, and a subscriber identification module (subscriber identification module, SIM) card interface 1395, etc. The sensor module 1380 may include, among other things, a pressure sensor 1380A, a gyroscope sensor 1380B, a barometric sensor 1380C, a magnetic sensor 1380D, an acceleration sensor 1380E, a distance sensor 1380F, a proximity light sensor 1380G, a fingerprint sensor 1380H, a temperature sensor 1380J, a touch sensor 1380K, an ambient light sensor 1380L, a bone conduction sensor 1380M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the terminal device 1300. In other embodiments of the present application, the terminal device 1300 may include more or fewer components than those illustrated, may combine certain components, may split certain components, or may have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 1310 may include one or more processing units, such as: the processor 1310 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate an operation control signal according to an instruction operation code and a timing signal, so as to control instruction fetching and instruction execution.
A memory may also be provided in the processor 1310 for storing instructions and data. In some embodiments, the memory in the processor 1310 is a cache. The memory may hold instructions or data that the processor 1310 has just used or uses cyclically. If the processor 1310 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated accesses, reduces the waiting time of the processor 1310, and thus improves system efficiency.
In some embodiments, the processor 1310 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 1310 may contain multiple sets of I2C buses. The processor 1310 may be coupled to the touch sensor 1380K, a charger, a flash, the camera 1393, and the like through different I2C bus interfaces.
The I2S interface may be used for audio communication. In some embodiments, the processor 1310 may contain multiple sets of I2S buses. The processor 1310 may be coupled to the audio module 1370 through an I2S bus to enable communication between the processor 1310 and the audio module 1370.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 1370 and the wireless communication module 1360 may be coupled through a PCM bus interface.
The UART interface is a universal serial data bus for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel forms. In some embodiments, a UART interface is typically used to connect the processor 1310 with the wireless communication module 1360.
The MIPI interface may be used to connect processor 1310 to peripheral devices such as display 1394, camera 1393, etc. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 1310 and camera 1393 communicate via a CSI interface, implementing the photographing function of terminal device 1300. The processor 1310 and the display screen 1394 communicate via a DSI interface to realize the display function of the terminal apparatus 1300.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect processor 1310 with camera 1393, display 1394, wireless communication module 1360, audio module 1370, sensor module 1380, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 1330 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 1330 may be used to connect a charger to charge the terminal device 1300, or may be used to transfer data between the terminal device 1300 and a peripheral device.
It should be understood that the interfacing relationship between the modules illustrated in the embodiment of the present application is only illustrative, and does not constitute a structural limitation of the terminal device 1300. In other embodiments of the present application, the terminal device 1300 may also use different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The charge management module 1340 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 1340 may receive charging inputs of a wired charger through the USB interface 1330. In some wireless charging embodiments, the charge management module 1340 may receive wireless charging inputs through a wireless charging coil of the terminal device 1300. The charging management module 1340 charges the battery 1342 and can also supply power to the terminal through the power management module 1341.
The power management module 1341 is used to connect the battery 1342, the charge management module 1340 and the processor 1310. The power management module 1341 receives input from the battery 1342 and/or the charge management module 1340, and provides power to the processor 1310, the internal memory 1321, the display 1394, the camera 1393, the wireless communication module 1360, and so forth. The power management module 1341 may also be used to monitor battery capacity, battery cycle times, battery health (leakage, impedance) and other parameters.
The wireless communication function of the terminal apparatus 1300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1350, the wireless communication module 1360, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal device 1300 may be configured to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1350 may provide a solution for wireless communication including 2G/3G/4G/5G or the like applied to the terminal device 1300. The mobile communication module 1350 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 1350 may receive electromagnetic waves from the antenna 1, filter, amplify the received electromagnetic waves, and transmit the electromagnetic waves to a modem processor for demodulation. The mobile communication module 1350 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least some of the functional modules of the mobile communication module 1350 may be disposed in the processor 1310. In some embodiments, at least some of the functional modules of the mobile communication module 1350 may be provided in the same device as at least some of the modules of the processor 1310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 1370A, the receiver 1370B, and the like), or displays images or videos through the display screen 1394.
The wireless communication module 1360 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. applied to the terminal device 1300. The wireless communication module 1360 may be one or more devices integrating at least one communication processing module. The wireless communication module 1360 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 1310. The wireless communication module 1360 may also receive signals to be transmitted from the processor 1310, frequency modulate them, amplify them, and convert them to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 1350 of terminal device 1300 are coupled, and antenna 2 and wireless communication module 1360 are coupled, such that terminal device 1300 can communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The terminal apparatus 1300 realizes a display function by a GPU, a display screen 1394, an application processor, and the like. The GPU is a microprocessor for processing images and is connected with the display screen 1394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 1394 is used for displaying images, videos, and the like. The display screen 1394 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the terminal device 1300 may include 1 or N display screens 1394, where N is a positive integer greater than 1.
The terminal apparatus 1300 can realize a photographing function through an ISP, a camera 1393, a video codec, a GPU, a display screen 1394, an application processor, and the like.
The ISP is used to process the data fed back by the camera 1393. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image.
Camera 1393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. In some embodiments, terminal device 1300 may include 1 or N cameras 1393, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals; besides digital image signals, it can also process other digital signals. For example, when the terminal device 1300 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
Video codecs are used to compress or decompress digital video. The terminal device 1300 may support one or more video codecs, so that the terminal device 1300 can play or record videos in multiple encoding formats, for example: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
The NPU is a neural-network (neural-network, NN) computing processor. By drawing on the structure of biological neural networks, for example, the transfer mode between neurons in the human brain, it rapidly processes input information and can also continuously learn by itself. Applications such as intelligent cognition of the terminal device 1300 can be implemented through the NPU.
The external memory interface 1320 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the terminal device 1300. The external memory card communicates with the processor 1310 via the external memory interface 1320 to implement a data storage function, for example, storing files such as music and videos in the external memory card.
The internal memory 1321 may be used to store computer-executable program code that includes instructions. The internal memory 1321 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the terminal device 1300 (e.g., audio data, phonebook, etc.), and the like. In addition, the internal memory 1321 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 1310 performs various functional applications of the terminal device 1300 and data processing by executing instructions stored in the internal memory 1321 and/or instructions stored in a memory provided in the processor.
Terminal device 1300 may implement audio functions through an audio module 1370, a speaker 1370A, a receiver 1370B, a microphone 1370C, an earphone interface 1370D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 1370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 1370 may also be used to encode and decode audio signals.
Speaker 1370A, also called a "horn", is used to convert an audio electrical signal into a sound signal. The terminal device 1300 can play music or conduct a hands-free call through the speaker 1370A.
Receiver 1370B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the terminal device 1300 receives a call or a voice message, the user can hear the voice by holding the receiver 1370B close to the ear.
Microphone 1370C, also called a "mic" or "mike", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can speak with the mouth close to the microphone 1370C to input a sound signal. The terminal device 1300 may be provided with at least one microphone 1370C.
The earphone interface 1370D is used to connect a wired earphone. The earphone interface 1370D may be the USB interface 1330, or may be a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 1380A is configured to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 1380A may be disposed on the display screen 1394. There are many types of pressure sensors 1380A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates made of a conductive material. When a force is applied to the pressure sensor 1380A, the capacitance between the electrodes changes, and the terminal device 1300 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 1394, the terminal device 1300 detects the intensity of the touch operation based on the pressure sensor 1380A. The terminal device 1300 may also calculate the touch position from the detection signal of the pressure sensor 1380A.
The gyro sensor 1380B may be used to determine a motion gesture of the terminal apparatus 1300. In some embodiments, the angular velocity of terminal device 1300 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 1380B. The gyro sensor 1380B may be used for photographing anti-shake.
The air pressure sensor 1380C is used to measure air pressure. In some embodiments, the terminal device 1300 calculates altitude from barometric pressure values measured by barometric pressure sensor 1380C, aiding in positioning and navigation.
The magnetic sensor 1380D includes a hall sensor. The terminal apparatus 1300 may detect the opening and closing of the flip cover using the magnetic sensor 1380D.
The acceleration sensor 1380E may detect the magnitude of the acceleration of the terminal device 1300 in various directions (typically along three axes). When the terminal device 1300 is stationary, the magnitude and direction of gravity may be detected. The sensor may also be used to identify the attitude of the terminal, and is applied to scenarios such as landscape/portrait switching and pedometers.
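For illustration only, landscape/portrait recognition from the gravity components can be sketched as follows; the axis conventions and the decision rule are assumptions, not the device's actual algorithm.

```python
# Hedged illustration: classify orientation from the gravity components
# reported by an acceleration sensor. Axes and rule are assumed.

def detect_orientation(ax: float, ay: float) -> str:
    """Classify orientation from gravity along the x and y axes (m/s^2)."""
    if abs(ay) >= abs(ax):
        return "portrait" if ay > 0 else "portrait (upside down)"
    return "landscape (left)" if ax > 0 else "landscape (right)"

print(detect_orientation(0.2, 9.7))  # portrait: gravity mostly along y
print(detect_orientation(9.6, 0.3))  # landscape (left): gravity mostly along x
```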
A distance sensor 1380F for measuring distance. The terminal apparatus 1300 may measure the distance by infrared or laser. In some embodiments, the terminal device 1300 may range using the distance sensor 1380F to achieve fast focus.
The proximity light sensor 1380G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The terminal apparatus 1300 emits infrared light outward through the light emitting diode. The terminal apparatus 1300 detects infrared reflected light from a nearby object using a photodiode. When sufficient reflected light is detected, it can be determined that there is an object in the vicinity of the terminal apparatus 1300.
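For illustration only, the proximity decision can be sketched as follows; the normalized reading and the threshold are made-up values, not the device's actual detector parameters.

```python
# Hedged illustration of the proximity decision: enough reflected infrared
# light implies an object near the device. Threshold is assumed.

REFLECTION_THRESHOLD = 0.6  # assumed normalized photodiode reading

def object_nearby(reflected_light: float) -> bool:
    """Report an object near the device when enough emitted IR is reflected back."""
    return reflected_light >= REFLECTION_THRESHOLD

print(object_nearby(0.8))  # True: e.g., the phone is held to the ear
print(object_nearby(0.1))  # False: little reflected light detected
```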
The ambient light sensor 1380L is used to sense ambient light levels. The terminal device 1300 can adaptively adjust the brightness of the display 1394 according to the perceived ambient light level.
The fingerprint sensor 1380H is used to collect fingerprints. The terminal device 1300 may use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint-based photographing, fingerprint-based call answering, and the like.
The temperature sensor 1380J is used to detect temperature. In some embodiments, terminal device 1300 performs a temperature processing strategy using the temperature detected by temperature sensor 1380J.
The touch sensor 1380K is also referred to as a "touch device". The touch sensor 1380K may be disposed on the display screen 1394, and the touch sensor 1380K and the display screen 1394 form a touchscreen. The touch sensor 1380K is used to detect a touch operation acting on or near it, and may communicate the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 1394. In other embodiments, the touch sensor 1380K may also be disposed on a surface of the terminal device 1300 at a position different from that of the display screen 1394.
The bone conduction sensor 1380M may acquire a vibration signal. In some embodiments, bone conduction sensor 1380M may acquire a vibration signal of a human vocal tract vibrating bone piece. The bone conduction sensor 1380M may also contact the pulse of a human body to receive a blood pressure pulsation signal.
Keys 1390 include a power key, volume keys, and the like. The keys 1390 may be mechanical keys or touch keys. The terminal device 1300 may receive a key input and generate a key signal input related to user settings and function control of the terminal device 1300.
Motor 1391 may generate a vibration alert. The motor 1391 may be used for incoming call vibration alerting as well as for touch vibration feedback.
The indicator 1392 may be an indicator light, which may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 1395 is used to connect a SIM card. A SIM card can be brought into contact with or separated from the terminal device 1300 by inserting it into or removing it from the SIM card interface 1395. The terminal device 1300 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 1395 may support a Nano SIM card, a Micro SIM card, and the like. Multiple cards, of the same type or of different types, can be inserted into the same SIM card interface 1395 at the same time. The terminal device 1300 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal device 1300 employs an eSIM, namely an embedded SIM card.
In a specific implementation, the present application further provides a computer storage medium, where the computer storage medium may store a program; when the program runs, it controls the device on which the computer-readable storage medium is located to perform some or all of the steps in the above embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), or the like.
In a specific implementation, an embodiment of the present application further provides a computer program product, where the computer program product contains executable instructions, where the executable instructions when executed on a computer cause the computer to perform some or all of the steps in the above method embodiments.
In the embodiments of the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that only A exists, both A and B exist, or only B exists, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or a similar expression means any combination of these items, including a single item or any combination of plural items. For example, "at least one of a, b, and c" may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may be singular or plural.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present invention, if any of the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk.
The foregoing is merely exemplary embodiments of the present invention. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A photographing method, comprising:
starting a multi-shot shooting mode, and respectively displaying a first preview picture acquired by a first camera and a second preview picture acquired by a second camera in a display interface;
receiving a first shooting instruction, shooting a first picture, and displaying the first picture and a second preview picture acquired by the second camera in a display interface;
receiving a second shooting instruction, shooting a second picture, and displaying the first picture and the second picture in a display interface; the first picture and the second picture are pictures shot by adopting different cameras;
when the first picture and the second picture are displayed in the display interface, if a picture merging instruction is received, merging the first picture and the second picture into a third picture according to the relative positions of the first picture and the second picture;
when the first picture and the second picture are displayed in the display interface, if a second re-shooting instruction is received, displaying the first picture and a second preview picture acquired by the second camera in the display interface;
when the first picture and the second preview picture collected by the second camera are displayed in the display interface, if a first re-shooting instruction is received, the first preview picture collected by the first camera and the second preview picture collected by the second camera are displayed in the display interface;
wherein,
before the first shooting instruction is received, receiving a shooting priority setting instruction, wherein the shooting priority setting instruction is used for determining whether the first camera or the second camera is preferentially used for shooting;
or,
the first shooting instruction and the second shooting instruction comprise camera information, and the camera information is used for determining whether the first camera or the second camera is used for shooting.
2. The method of claim 1, wherein after receiving the first photographing instruction, photographing the first picture, and displaying the first picture in the display interface, the method further comprises:
receiving a first picture adjustment instruction, and adjusting the first picture in the display interface.
3. The method of claim 2, wherein receiving the first picture adjustment instruction and adjusting the first picture in the display interface comprises:
receiving the first picture adjustment instruction, and adjusting the display position, the display angle, and/or the picture size of the first picture in the display interface.
4. The method of claim 1, wherein receiving the first photographing instruction, photographing the first picture, comprises:
receiving a first shooting instruction;
according to a preset first shooting priority, determining to take the picture by the first camera preferentially;
and shooting the first picture through the first camera.
5. The method of claim 4, wherein receiving a second photographing instruction, photographing a second picture, comprises:
receiving a second shooting instruction, and shooting the second picture through the second camera.
6. The method of claim 1, wherein receiving the first photographing instruction, photographing the first picture, comprises:
receiving a first shooting instruction;
according to a preset second shooting priority, determining to take the picture by the second camera preferentially;
and shooting the first picture through the second camera.
7. The method of claim 6, wherein receiving a second photographing instruction, photographing a second picture, comprises:
receiving a second shooting instruction, and shooting the second picture through the first camera.
8. The method of any of claims 4-7, wherein the multi-shot shooting mode comprises any one or a combination of the following modes:
a front double-shot shooting mode, wherein the first camera and the second camera are both front cameras, and the pictures acquired by the first camera and the second camera do not overlap;
a rear double-shot shooting mode, wherein the first camera and the second camera are both rear cameras, and the pictures acquired by the first camera and the second camera do not overlap;
a front-rear double-shot shooting mode, wherein one of the first camera and the second camera is a front camera and the other is a rear camera, and the pictures acquired by the first camera and the second camera do not overlap;
a front picture-in-picture photographing mode, wherein the first camera and the second camera are both front cameras, and the pictures acquired by the first camera and the second camera overlap;
a rear picture-in-picture photographing mode, wherein the first camera and the second camera are both rear cameras, and the pictures acquired by the first camera and the second camera overlap;
a front-rear picture-in-picture photographing mode, wherein one of the first camera and the second camera is a front camera and the other is a rear camera, and the pictures acquired by the first camera and the second camera overlap.
9. A terminal device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the terminal device to perform the method of any of claims 1-8.
10. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program when run controls a device in which the computer readable storage medium is located to perform the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210660829.9A CN114745508B (en) | 2022-06-13 | 2022-06-13 | Shooting method, terminal equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114745508A CN114745508A (en) | 2022-07-12 |
CN114745508B true CN114745508B (en) | 2023-10-31 |
Family
ID=82286897
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210660829.9A CN114745508B (en) Active | Shooting method, terminal equipment and storage medium | 2022-06-13 | 2022-06-13 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114745508B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005142704A (en) * | 2003-11-05 | 2005-06-02 | Matsushita Electric Ind Co Ltd | Digital camera |
CN101651767A (en) * | 2008-08-14 | 2010-02-17 | 三星电子株式会社 | Device and method for synchronously synthesizing images |
CN103856617A (en) * | 2012-12-03 | 2014-06-11 | 联想(北京)有限公司 | Photographing method and user terminal |
CN104284064A (en) * | 2013-07-05 | 2015-01-14 | 三星电子株式会社 | Method and apparatus for previewing a dual-shot image |
CN105141833A (en) * | 2015-07-20 | 2015-12-09 | 努比亚技术有限公司 | Terminal photographing method and device |
CN105391866A (en) * | 2015-11-30 | 2016-03-09 | 东莞酷派软件技术有限公司 | Terminal and shooting method and device |
CN105657299A (en) * | 2015-07-14 | 2016-06-08 | 宇龙计算机通信科技(深圳)有限公司 | Method and system for processing shot data based on double cameras |
CN105872365A (en) * | 2016-03-29 | 2016-08-17 | 努比亚技术有限公司 | Photographing method and device for mobile terminal |
CN106027900A (en) * | 2016-06-22 | 2016-10-12 | 维沃移动通信有限公司 | Photographing method and mobile terminal |
CN107948364A (en) * | 2017-12-28 | 2018-04-20 | 努比亚技术有限公司 | Mobile terminal image pickup method, mobile terminal and computer-readable recording medium |
CN111200686A (en) * | 2018-11-19 | 2020-05-26 | 中兴通讯股份有限公司 | Photographed image synthesizing method, terminal, and computer-readable storage medium |
CN112954221A (en) * | 2021-03-11 | 2021-06-11 | 深圳市几何数字技术服务有限公司 | Method for real-time photo shooting |
Also Published As
Publication number | Publication date |
---|---|
CN114745508A (en) | 2022-07-12 |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |