CN115278030A - Shooting method and device and electronic equipment - Google Patents

Shooting method and device and electronic equipment

Info

Publication number
CN115278030A
CN115278030A (application number CN202210908200.1A)
Authority
CN
China
Prior art keywords
shooting
auxiliary
main
camera
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210908200.1A
Other languages
Chinese (zh)
Other versions
CN115278030B (en)
Inventor
彭作
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210908200.1A priority Critical patent/CN115278030B/en
Publication of CN115278030A publication Critical patent/CN115278030A/en
Application granted granted Critical
Publication of CN115278030B publication Critical patent/CN115278030B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device and an electronic device, and belongs to the technical field of camera shooting. The method includes: receiving a first input of a user while a main shooting preview interface of a main camera is displayed; in response to the first input, displaying an auxiliary shooting preview interface of a target auxiliary camera; and controlling the main camera and the auxiliary camera to shoot and outputting a first main shooting file and an auxiliary shooting file, where the preview image displayed on the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the technical field of camera shooting, and particularly relates to a shooting method, a shooting device and electronic equipment.
Background
The shooting function of an electronic device is usually used to record daily life. To meet people's growing recording needs, electronic devices offer more and more shooting modes, with increasingly better shooting effects.
In the related art, when an image is shot with a camera of an electronic device, the picture captured by a single camera is displayed on the display screen. If the user needs several images of the current scene with different shooting effects, the user has to switch cameras and trigger the electronic device to shoot multiple times with the corresponding cameras. The shooting process is therefore complicated and time-consuming.
Disclosure of Invention
Embodiments of the present application aim to provide a shooting method, a shooting device and an electronic device that can quickly shoot a full-frame image together with a close-up image of a partial area within that full-frame image, thereby improving the efficiency of shooting images with different shooting effects.
In a first aspect, an embodiment of the present application provides a shooting method. The method includes: receiving a first input of a user while a main shooting preview interface of a main camera is displayed; in response to the first input, displaying an auxiliary shooting preview interface of a target auxiliary camera; and controlling the main camera and the target auxiliary camera to shoot and outputting a first main shooting file and an auxiliary shooting file, where the preview image displayed on the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface.
In a second aspect, an embodiment of the present application provides a shooting device, including a receiving module, a display module and a control module. The receiving module is configured to receive a first input of a user while a main shooting preview interface of a main camera is displayed. The display module is configured to display, in response to the first input received by the receiving module, an auxiliary shooting preview interface of a target auxiliary camera, where the preview image displayed on the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface. The control module is configured to control the main camera and the target auxiliary camera to shoot and to output a first main shooting file and an auxiliary shooting file.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which is stored in a storage medium and executed by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, while a main shooting preview interface of a main camera is displayed, the shooting device receives a first input of a user, displays an auxiliary shooting preview interface of a target auxiliary camera, controls the main camera and the target auxiliary camera to shoot, and outputs a first main shooting file and an auxiliary shooting file, where the preview image displayed on the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface. By displaying both the main shooting preview interface of the main camera and the auxiliary shooting preview interface of the target auxiliary camera, the user can quickly view images with different shooting effects shot by different cameras. Because the preview image in the auxiliary shooting preview interface is a preview image of the target preview area in the main shooting preview interface, a full-frame image and a close-up image of a partial area within it can be shot quickly, improving the efficiency of shooting images with different shooting effects.
Drawings
Fig. 1 is a schematic method flow diagram of a shooting method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an interface applied by a shooting method according to some embodiments of the present application;
fig. 3 is a schematic diagram of an interface applied by a shooting method according to some embodiments of the present application;
fig. 4 is a schematic diagram of an interface applied by a shooting method provided in some embodiments of the present application;
fig. 5 is a schematic diagram of an interface applied by a shooting method according to some embodiments of the present application;
fig. 6 is a schematic diagram of an interface applied by a shooting method provided in some embodiments of the present application;
fig. 7 (a) is a schematic view of an interface applied by a photographing method according to some embodiments of the present application;
fig. 7 (b) is a schematic view of an interface applied by a photographing method according to some embodiments of the present application;
fig. 8 (a) is a schematic diagram of an interface applied by a shooting method provided in some embodiments of the present application;
fig. 8 (b) is a schematic view of an interface applied by a photographing method provided in some embodiments of the present application;
fig. 9 (a) is a schematic view of an interface applied by a photographing method according to some embodiments of the present application;
fig. 9 (b) is a schematic diagram of an interface applied by a shooting method provided in some embodiments of the present application;
fig. 10 is a schematic structural diagram of a shooting device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 12 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data so termed may be interchanged where appropriate, so that embodiments of the application can be implemented in orders other than those illustrated or described herein. Moreover, "first", "second" and the like do not limit quantity: a "first" object may be one object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The shooting method provided by the embodiments of the present application can be applied to a group-photo scene with multiple people, and can also be applied to a night-sky shooting scene.
In the group-photo scene, suppose user A uses the wide-angle camera of an electronic device to take a group photo of user B, user C, user D, user E and user F, so the preview image in the shooting preview interface of the wide-angle camera contains images of users B through F. If user A needs to pay close attention to the details of user F, user A can manually frame the area where user F's image is located in the shooting preview interface. When that area is framed, the shooting device starts the depth-of-field camera to shoot user F according to the object type recognized in the framed area, and displays the shooting preview interface of the depth-of-field camera, whose preview image is the image of user F. When user A then clicks the shooting button, the shooting device controls the wide-angle camera to shoot a group photo containing images of user B, user C, user D, user E and user F, and controls the depth-of-field camera to obtain a single photo of user F.
For the night-sky scene, suppose user A shoots the night sky with the wide-angle camera of an electronic device, so the preview image in the wide-angle camera's shooting preview interface is a night-sky image. If the user needs to pay close attention to the details of the moon in the night sky, the user can manually frame the area where the moon's image is located in the shooting preview interface. When that area is framed, the shooting device starts the telephoto camera to shoot the moon according to the object type recognized in the framed area, and displays the shooting preview interface of the telephoto camera, whose preview image is the moon image. When the user then clicks the shooting button, the shooting device controls the wide-angle camera to shoot a full-frame night-sky picture, and controls the telephoto camera to obtain a picture containing only the moon image.
The embodiment of the application provides a shooting method which can be applied to large-screen, flexible-screen and folding-screen electronic equipment. Fig. 1 shows a flowchart of a shooting method provided in an embodiment of the present application. As shown in fig. 1, the shooting method provided in the embodiment of the present application may include the following steps 201 to 203:
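As a rough sketch of steps 201 to 203, the following Python fragment models a device that displays an auxiliary preview in response to the first input and then outputs two shooting files. All class and method names (`ShootingDevice`, `on_first_input`, and so on) are illustrative assumptions for this sketch, not names from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class ShotFiles:
    main_file: str       # full-frame capture from the main camera
    auxiliary_file: str  # close-up capture from the target auxiliary camera

class Camera:
    def __init__(self, name):
        self.name = name
    def capture(self):
        return f"{self.name}.jpg"

class ShootingDevice:
    """Illustrative controller for steps 201-203; not the patented implementation."""
    def __init__(self, main_camera, auxiliary_cameras):
        self.main_camera = main_camera
        self.auxiliary_cameras = auxiliary_cameras  # e.g. {"moon": telephoto_cam, ...}
        self.target_auxiliary = None

    def on_first_input(self, target_area, object_type):
        # Step 202: choose the auxiliary camera for the framed area and
        # display its auxiliary shooting preview interface.
        self.target_auxiliary = self.auxiliary_cameras.get(
            object_type, next(iter(self.auxiliary_cameras.values())))
        return f"aux preview of area {target_area} via {self.target_auxiliary.name}"

    def on_shoot_input(self):
        # Step 203: both cameras shoot; a main file and an auxiliary file are output.
        return ShotFiles(self.main_camera.capture(),
                         self.target_auxiliary.capture())
```

A usage pass would create the device with a wide-angle main camera and a telephoto auxiliary camera, call `on_first_input` when the user frames the moon, and call `on_shoot_input` when the user presses the shooting button.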
step 201: in a case where a main photographing preview interface of a main camera is displayed, a photographing apparatus receives a first input of a user.
Optionally, in this embodiment of the present application, the main camera may be any one of several camera types (for example, a wide-angle camera); the main camera may also be another camera, which is not limited in this embodiment of the present application.
Optionally, in this embodiment of the application, when shooting with the main camera, the main camera captures a full-frame picture, and a full-frame preview image is displayed in the main shooting preview interface of the main camera. Illustratively, when the main camera is used to shoot a night scene, the whole night-sky image is displayed in the main shooting preview interface, so that a full-frame night-sky image shot by the main camera is obtained.
It should be noted that the full-frame image can accommodate a wider scene, so that the user can see richer image content.
Optionally, in this embodiment of the present application, the first input may include any one of the following: a touch input, a voice input, a gesture input, a folding or unfolding operation on a foldable screen, or other feasible input; this is not limited in the embodiments of the present application. Further, the touch input may be a click input, a slide input, a press input, or the like. The click input may consist of any number of clicks, and the slide input may be in any direction, such as upward, downward, leftward or rightward, which is likewise not limited in this embodiment of the present application.
Illustratively, the first input may be a touch input of the user on a target control in the main shooting preview interface, a folding input that folds the foldable screen, or a sliding input in the main shooting preview interface.
Step 202: the shooting device responds to the first input and displays an auxiliary shooting preview interface of the target auxiliary camera.
The preview image displayed on the auxiliary shooting preview interface is a preview image of the target preview area in the main shooting preview interface.
Optionally, in this embodiment of the present application, the target auxiliary camera may be, for example, a telephoto camera or a depth-of-field camera; the target auxiliary camera may also be another camera, which is not limited in this embodiment of the present application.
Optionally, in this embodiment of the present application, the main camera and the target auxiliary camera may be different cameras; for example, the main camera is a wide-angle camera and the target auxiliary camera is a telephoto camera. Alternatively, the main camera and the target auxiliary camera may be the same type of camera with different shooting parameters; for example, both are 2x zoom cameras, with the focal length of the target auxiliary camera greater than that of the main camera.
It should be noted that the larger the focal length, the more a distant subject can be zoomed in, so the subject appears larger on the screen and shows more detail, but the angle of view is narrow; the smaller the focal length, the more of the scene fits into the frame and the wider the angle of view, but image detail is reduced.
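This trade-off follows from the standard angle-of-view relation, angle = 2·atan(sensor width / (2·focal length)); the sketch below assumes a hypothetical 36 mm sensor width purely for illustration:

```python
import math

def angle_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view: 2 * atan(sensor_width / (2 * focal_length)).
    A longer focal length yields a narrower angle (the subject fills more of
    the frame); a shorter one yields a wider angle (more scene fits)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))
```

For instance, a 24 mm lens gives roughly a 74-degree view, while a 120 mm lens gives roughly 17 degrees, matching the wide-angle versus telephoto behavior described above.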
Optionally, in this embodiment of the application, the target preview area in the main shooting preview interface may be at least a partial preview area in the main shooting preview interface.
Optionally, the preview image of the main camera and the preview image of the target auxiliary camera may contain different shooting objects. Illustratively, the shooting objects corresponding to the preview image of the target auxiliary camera may be a subset of all the shooting objects corresponding to the preview image of the main camera.
For example, in the case where a starry sky is photographed by the main camera, the preview image displayed in the main photographing preview interface is a starry sky image including images of stars and moon in the night sky, and the target preview area may be a preview area in which the image of moon in the main photographing preview interface is located.
For example, when a group portrait is captured by the main camera, the preview image displayed in the main shooting preview interface is the group portrait, which includes images of multiple people, and the target preview area may be the image area where a person A, whom the user needs to pay attention to, is located in the main shooting preview interface.
Optionally, in this embodiment of the application, the target preview area may be preset, or selected by a user, or determined by the shooting device according to a recognition result of the preview image in the main shooting preview interface.
In some embodiments of the present application, take the first input as an input that folds the foldable screen as an example. When the shooting device detects that the foldable screen is folded and the folding angle reaches 90 degrees, it triggers display of the auxiliary shooting preview interface of the target auxiliary camera.
In some embodiments of the present application, take the first input as a touch input on a target control in the main shooting preview interface as an example. When the shooting device receives a drag input of the user on a zoom-multiple control in the main shooting preview interface, it triggers split-screen display of the auxiliary shooting preview interface of the target auxiliary camera.
In some embodiments of the application, the shooting device displays the auxiliary shooting preview interface of the target auxiliary camera upon receiving a gesture input in which two of the user's fingers slide apart in opposite left and right directions in the main shooting preview interface.
In this way, the user can quickly trigger the shooting device to display the auxiliary shooting preview interface through an input chosen according to actual needs, making it convenient to view multiple preview interfaces in the shooting scene.
Step 203: the shooting device controls the main camera and the target auxiliary camera to shoot and outputs a first main shooting file and an auxiliary shooting file.
Optionally, in this embodiment of the present application, the first main shot file may be an image or a video, and the auxiliary shot file may be an image or a video.
Optionally, in this embodiment of the application, upon receiving a seventh input from the user, the shooting device controls the main camera and the target auxiliary camera to shoot and outputs the first main shooting file and the auxiliary shooting file.
Optionally, the seventh input is used to trigger the shooting device to perform a shooting operation. Illustratively, the seventh input may be a touch input on the shooting preview interface, a voice input, a gesture input, or another feasible input, which is not limited in this embodiment of the present application.
For example, upon receiving the seventh input, the shooting device may control the main camera and the target auxiliary camera to shoot simultaneously or sequentially, obtaining the first main shooting file and the auxiliary shooting file respectively.
Optionally, the seventh input includes a first sub-input and a second sub-input: the shooting device receives the first sub-input, controls the main camera to shoot and outputs the first main shooting file, and then receives the second sub-input, controls the target auxiliary camera to shoot and outputs the auxiliary shooting file.
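The two shooting variants above (one combined seventh input triggering both cameras, versus first and second sub-inputs triggering them one at a time) can be sketched as a small input dispatcher. The event names `"combined"`, `"first_sub"` and `"second_sub"` are assumptions for illustration only:

```python
class CaptureController:
    """Illustrative dispatcher for the seventh-input handling described above."""
    def __init__(self, main_capture, aux_capture):
        self.main_capture = main_capture  # callable: main camera shoots
        self.aux_capture = aux_capture    # callable: target auxiliary camera shoots

    def on_input(self, event):
        if event == "combined":    # one input: both cameras shoot together
            return [self.main_capture(), self.aux_capture()]
        if event == "first_sub":   # first sub-input: main shooting file only
            return [self.main_capture()]
        if event == "second_sub":  # second sub-input: auxiliary shooting file only
            return [self.aux_capture()]
        return []
```

Either path ends with the same two files; the dispatcher only changes whether they are produced by one input or two.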
In some embodiments of the present application, take the main camera as a wide-angle camera and the target auxiliary camera as a depth-of-field camera as an example. When a group portrait is shot, the preview image of the main camera is the group portrait captured by the wide-angle camera, and the preview image of the target auxiliary camera is person A in the group portrait, captured by the depth-of-field camera. After the user clicks the shooting control, the wide-angle camera is controlled to shoot the group portrait, obtaining a group photo, and the depth-of-field camera is controlled to shoot person A, obtaining a single image containing person A.
In some embodiments of the application, in combination with the above embodiments, after the user clicks the shooting control, the wide-angle camera is controlled to shoot the group portrait, obtaining a group photo; after the user clicks the shooting control again, the depth-of-field camera is controlled to shoot person A, obtaining a single image containing person A.
In some embodiments of the present application, with reference to the above embodiments, after the group photo is obtained, when the user long-presses the shooting control, the depth-of-field camera is controlled to record person A, obtaining a video containing person A. Further, during recording, the user can adjust the shooting angle of the camera according to actual needs to obtain a video of the person over a period of time (such as 3 s, 5 s or 10 s).
In the shooting method provided by the embodiments of the application, while a main shooting preview interface of a main camera is displayed, the shooting device receives a first input of a user and displays an auxiliary shooting preview interface of a target auxiliary camera, where the preview image displayed on the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface; it then controls the main camera and the target auxiliary camera to shoot and outputs a first main shooting file and an auxiliary shooting file. By displaying both the main shooting preview interface of the main camera and the auxiliary shooting preview interface of the target auxiliary camera, the user can quickly view images with different shooting effects shot by different cameras.
Optionally, in some embodiments of the present application, before the secondary shooting preview interface of the target secondary camera is displayed in the step 202, the shooting method provided in the embodiment of the present application further includes the following steps A1 and A2:
step A1: the photographing device determines a target preview area.
Step A2: the shooting device determines a target auxiliary camera.
The preview image displayed on the auxiliary shooting preview interface is a preview image of the target preview area acquired by the target auxiliary camera. The target preview area is determined according to the first input, or is the area where a recognized shooting object of a preset type is located.
In some possible embodiments, the user may make a slide input or a click input in the main photographing preview interface, and the photographing device may determine the target preview area in the main photographing preview interface according to an input trajectory of the slide input or a position of the click input.
In other possible embodiments, the shooting device may determine the target preview area by recognizing the preview image in the main shooting preview interface and deriving the target preview area from the recognition result.
For example, the shooting device may determine the target auxiliary camera based on the preview image in the target preview area, or according to a first input of the user.
In this way, the shooting device can select a specific preview area from the main shooting preview interface according to the user's first input, and select a target auxiliary camera suited to shooting the object corresponding to that area, thereby obtaining an image of the area the user needs to focus on for convenient later viewing.
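A minimal sketch of this selection step follows, assuming a hypothetical mapping from recognized object type to a suitable auxiliary camera; the table entries and default are illustrative, not from the patent:

```python
# Hypothetical object-type -> auxiliary-camera table (illustrative only).
OBJECT_TYPE_TO_CAMERA = {
    "moon": "telephoto",         # distant subject -> long focal length
    "person": "depth_of_field",  # portrait close-up -> background blur
}

def choose_target_auxiliary(object_type, user_choice=None, default="telephoto"):
    """Pick the target auxiliary camera from the recognized object type in the
    target preview area, or honor an explicit user choice from the first input."""
    if user_choice is not None:
        return user_choice
    return OBJECT_TYPE_TO_CAMERA.get(object_type, default)
```

This mirrors the two routes in the text: the camera can be derived from the preview image's recognized content, or taken directly from the user's input.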
Further optionally, in this embodiment of the present application, the step A1 may include the following steps B1 and B2:
step B1: in a case where the first input includes an input of the main photographing preview interface by the user, the photographing apparatus determines the target preview area according to the input information of the first input.
Wherein the input information includes an input position or an input track.
Step B2: when the first input includes an input for triggering display of the auxiliary shooting preview interface of the target auxiliary camera, the shooting device performs object recognition on the main shooting preview interface and, when a shooting object of a preset type is recognized, determines the area where that shooting object is located as the target preview area.
In some possible implementations, the first input is a click input or a slide input of the user to the main shooting preview interface.
For example, in a case where the first input is a user's click input to the main photographing preview interface, the photographing apparatus may determine a target preview area in the main photographing preview interface according to an input position of the user's click input in the main photographing preview interface.
In some embodiments of the present application, as shown in fig. 2 (a), when the main shooting preview interface 21 is displayed, an image of the night sky shot by a wide-angle camera is shown in the interface 21. If the user needs to focus on the moon 22 in the night sky, the user clicks position A, position B, position C and position D in the main shooting preview interface in sequence. The shooting device may take these four input positions as four vertices, connect adjacent vertices to form a quadrilateral, and determine the interface area enclosed by the quadrilateral as the target preview area, or determine the interface area outside the enclosed area as the target preview area.
Illustratively, when the first input is a slide input by the user in the main shooting preview interface, the shooting device receives the slide input and determines the interface area enclosed by its slide trajectory as the target preview area, or determines the interface area outside that enclosed area as the target preview area.
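Both variants reduce to deriving a region from a set of screen points. The simplified sketch below uses an axis-aligned bounding box; the patent's quadrilateral or trajectory-enclosed region is more general, and the "outside the enclosed area" option is omitted:

```python
def target_area_from_points(points):
    """Axis-aligned bounding box (left, top, right, bottom) of the click
    positions or of a slide trajectory's sample points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

def point_in_area(point, area):
    # E.g. to test whether a recognized subject lies inside the target area.
    left, top, right, bottom = area
    x, y = point
    return left <= x <= right and top <= y <= bottom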
In some embodiments of the present application, taking the main camera as a wide-angle camera, when the night sky is shot with the wide-angle camera, as shown in fig. 2 (b), the main shooting preview interface 21 on the left screen displays the night-sky image shot by the wide-angle camera, while the right screen displays the desktop of the electronic device. If the user wants to focus on the moon, the user manually frame-selects, with a slide input on the main shooting preview interface 21, the area where the moon 22 is located, i.e., the area inside the dashed box in fig. 2, and the shooting device determines that area as the target preview area.
In some embodiments of the present application, taking the main camera as a wide-angle camera, when a group photo of people is taken with the wide-angle camera, as shown in fig. 3, the main shooting preview interface 31 on the left screen displays images of multiple people shot by the wide-angle camera, while the right screen displays the desktop. If the user wants to focus on a person 32 in the main shooting preview interface 31, the user manually frame-selects, with a slide input, the area 33 where the person is located, i.e., the area inside the dashed box in fig. 3, and the shooting device determines area 33 as the target preview area.
Exemplarily, the embodiment corresponding to fig. 3 shows the case where the main shooting preview interface of the main camera is displayed on the left screen and the desktop is displayed on the right screen, and the user selects an area in the main shooting preview interface on the left screen.
It should be noted that the dashed box serves only to visually frame or mark the position of the area selected by the user, and may not be displayed in the actual interface.
In this way, the shooting device can determine the target preview area based on the user's input, so that the user can select, according to actual needs, a preview image of a preview area in the main shooting preview interface that requires particular attention, improving the flexibility of user operation.
In other possible implementations, the first input is used to trigger display of the auxiliary shooting preview interface of the target auxiliary camera, and the first input is an input by the user on a target control in the main shooting preview interface or an input that folds the foldable screen.
For example, in the case that the main shooting preview interface is displayed, the user clicks a zoom multiple control in the main shooting preview interface, the shooting device performs object recognition on the main shooting preview interface, and in the case that a preset type of shooting object is recognized, the area where the shooting object is located is determined as the target preview area.
Illustratively, when object recognition is performed on the main shooting preview interface, the shooting device extracts a preview image in the main shooting preview interface, detects each shooting object contained in the preview image through an AI image recognition algorithm, intelligently recognizes an object type of each shooting object through the AI image recognition algorithm, then matches the object type of each shooting object with a preset type, and determines an area where the shooting object corresponding to the object type is located in the main shooting interface as a target preview area under the condition that the object type matched with the preset type exists.
In some possible embodiments, the AI image recognition algorithm may be a deep learning algorithm, and the photographing device detects the photographing object in the preview image and recognizes feature information of the photographing object by using an image recognition model trained by the deep learning algorithm, then matches the feature information of each photographing object with feature information in a preset database, and finally determines an object type corresponding to the matched feature information in the preset database as the object type of the photographing object. It should be noted that the preset database stores a corresponding relationship between the feature information and the object type of the photographic subject, and the photographic device can obtain the object type of the photographic subject by matching the feature information of the photographic subject.
It should be noted that, detecting each photographic subject in the preview image and identifying the feature information of each photographic subject by using the image recognition model trained by the deep learning algorithm is the prior art, and is not described herein again.
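As a rough illustration, the matching step described above can be sketched as a lookup from recognized feature information to object types; the feature names, database entries, and function below are hypothetical, not part of the patent.

```python
# Hypothetical sketch of matching recognized feature information against
# a preset database that maps features to object types; all names and
# entries here are illustrative.
PRESET_DATABASE = {
    "face_landmarks": "person",
    "lunar_disc": "moon",
    "skyline_edges": "building",
}

def match_object_type(detected_features):
    """Return the object type of the first feature found in the preset
    database, or None if nothing matches."""
    for feature in detected_features:
        if feature in PRESET_DATABASE:
            return PRESET_DATABASE[feature]
    return None

result = match_object_type(["skyline_edges"])
```

In a real system the features would be vectors produced by the image recognition model and matching would use a similarity threshold rather than exact key lookup.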
For example, taking a preset type as a person as an example, assuming that a landscape image of the person shot by a wide-angle camera is displayed in the main shooting preview interface, and the landscape image of the person includes a person image and a landscape image, the shooting device may recognize the landscape image of the person by using an image recognition algorithm, and after recognizing that the landscape image of the person includes a portrait of the person type, determine an area where the person is located as the target preview area.
Therefore, the shooting device can identify the object of the main shooting preview interface and automatically determine the target preview area according to the identification result, so that a user can independently view the preview image in the target preview area without operation, the intelligent degree of operation is improved, and the user operation is simplified.
Further optionally, in this embodiment of the present application, the step A2 may include the following steps C1 and C2:
step C1: the photographing device acquires an object type of a photographing object in the target preview area.
And step C2: the shooting device determines a target auxiliary camera matched with the type of the object.
The object type of the photographic subject is used to characterize the object class of the photographic subject. For example, object types include, but are not limited to, landscape, person, building, and food, and the object types may be further subdivided according to actual needs; for example, the landscape type may be subdivided into lake, sky, forest, sun, moon, and the like, and the person type into elderly person, child, young person, and the like. The object types may also be other types not listed here, and this is not limited in this embodiment of the present application.
For example, the camera may recognize the object type of the photographic object through an AI image recognition algorithm.
In some possible implementation manners, the shooting device may determine the target secondary camera matched with the object type through a preset corresponding relationship between the object type and the camera.
Exemplarily, if the camera corresponding to a landscape-type shooting object is preset to be the telephoto camera, then when the object type of the shooting object in the target preview area is recognized as landscape, the telephoto camera is determined as the target auxiliary camera matching the object type; if the camera corresponding to a person-type shooting object is preset to be the depth-of-field camera, then when the object type of the shooting object in the target preview area is recognized as a person, the depth-of-field camera is determined as the target auxiliary camera matching the object type.
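This preset correspondence can be pictured as a simple lookup table; the camera identifiers below are assumptions for illustration only.

```python
# Illustrative lookup table for the preset object-type-to-camera
# correspondence described above; camera names are hypothetical.
TYPE_TO_CAMERA = {
    "landscape": "telephoto",
    "person": "depth_of_field",
    "building": "2x_zoom",
}

def select_target_aux_camera(object_type, default="wide_angle"):
    """Return the auxiliary camera associated with the recognized type,
    falling back to a default camera when no rule matches."""
    return TYPE_TO_CAMERA.get(object_type, default)
```

For example, `select_target_aux_camera("landscape")` would yield the telephoto camera, matching the preset-rule example above.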
In other possible implementations, the shooting device may acquire the object type of the photographic subject and then determine, through an AI algorithm, a target auxiliary camera suitable for shooting that subject.
Exemplarily, in the case that the preview image in the target preview area is a portrait, the shooting device determines the depth-of-field camera as the target auxiliary camera; in the case that the preview image in the target preview area is a natural scene, the periscope camera is determined as the target auxiliary camera; and in the case that the preview image in the target preview area is a building, the 2x zoom camera is determined as the target auxiliary camera.
Further, in the case where a target secondary camera suitable for photographing the photographic subject is determined, the photographing apparatus may acquire a preview image of the photographic subject through the target secondary camera.
In some examples, when the photographic subject is photographed by the target auxiliary camera, the shooting device may adjust the shooting parameters of the target auxiliary camera according to the size of the photographic subject to obtain a preview image that contains only an image of that subject. For example, when a person A in a group of persons is photographed by the target auxiliary camera, the focal length is adjusted to zoom in and enlarge the distant subject, and the target auxiliary camera may then be moved so that the image of person A is at the center of the preview image, thereby obtaining a preview image containing only the image of person A.
In other examples, when the target secondary camera captures the photographic subject, the capturing device may process raw image data collected by the target secondary camera to obtain image data including an image of the photographic subject only, and obtain a preview image including the image of the photographic subject only based on the image data. For example, when a person a in a group of persons is photographed by a target sub-camera, in the case of obtaining original image data of an image including the group of persons captured by the target sub-camera, the original image data is processed to obtain image data of an image including only the person a, and a preview image of the image including only the person a is obtained based on the image data of the image including only the person a.
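The processing of raw image data into a subject-only preview can be sketched as a crop of a detected bounding box; this NumPy-based sketch assumes a hypothetical `(x, y, width, height)` box format.

```python
import numpy as np

def crop_to_subject(raw_frame, bbox):
    """Keep only the region of the raw frame that contains the target
    subject; bbox is an assumed (x, y, width, height) detection result."""
    x, y, w, h = bbox
    return raw_frame[y:y + h, x:x + w]

frame = np.arange(100 * 100).reshape(100, 100)  # stand-in for raw image data
subject_only = crop_to_subject(frame, (10, 20, 30, 40))
```

A real pipeline would run the crop on each sensor frame before it reaches the preview surface, and might add padding or segmentation rather than a hard rectangular crop.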
In some embodiments of the present application, in combination with the above embodiments, in a case where a user manually slides the input frame on the main shooting preview interface 21 to select the area 23 where the moon 22 is located in a case where a night sky is shot by using a wide-angle camera, the shooting device detects the object type of the shooting object in the area 23 through an AI algorithm, determines to call a telephoto camera to shoot the moon in the night sky, and displays a preview image of the shot moon in the shooting preview interface of the telephoto camera to obtain an image showing only the moon in subsequent shooting.
In some embodiments of the present application, in combination with the above embodiments, when the wide-angle camera is used to capture a group photo and the user performs a sliding input on the main shooting preview interface 31 to frame-select the area 33 where the person A requiring focused attention is located, the shooting device detects the object type of the shooting object in the area 33 through an AI algorithm, determines to invoke the depth-of-field camera to capture person A, and displays a preview image of person A in the shooting preview interface of the depth-of-field camera, so that subsequent shooting can obtain an image with depth-of-field information that shows only the specific person A.
Further, in the case of capturing the person a by the depth of field camera, the capturing device may process the original image data collected by the depth of field camera to obtain image data corresponding to the person a, and obtain a preview image of an image including only the person a based on the image data.
In other possible implementations, the shooting device may further determine, by an AI algorithm, a target secondary camera suitable for shooting the photographic subject based on information such as a size of the photographic subject, a display scale of the photographic subject in the preview image, and a distance between an actual photographic subject corresponding to the photographic subject and the camera.
It should be noted that the actual photographic subject corresponding to the photographic subject refers to a photographic subject in the actual photographic environment, for example, when a moon in the night sky is photographed, the photographic subject may be the moon in the preview image, and the actual photographic subject is the moon actually existing in the night sky.
It should be noted that shooting objects of different object types usually require different shooting effects. For example, when shooting a portrait, a depth-of-field camera can be used to obtain a background-blurring effect that highlights the portrait; for scenes such as the moon at night, a telephoto camera can be used to obtain higher definition.
In this way, the shooting device can automatically recognize the object type of the shooting object in the target preview area, obtain the close-up shot of the shooting object in the target preview area, and can shoot the close-up image of the shooting object which needs to be focused by the user conveniently and efficiently without the need of selecting the shooting object which needs to be close-up shot by the user.
Further optionally, in this embodiment of the present application, the step A2 may include the following step D1:
step D1: and under the condition that the first input comprises folding input of a folding screen, the shooting device determines an auxiliary camera related to the folding angle of the folding input as a target auxiliary camera according to the association relation between the preset folding angle and the auxiliary camera.
Illustratively, the folding angle is an angle formed by the screen on both sides of an axis line when the flexible screen or the foldable screen is folded along the axis line.
Illustratively, the auxiliary camera includes one or more cameras of the electronic device.
For example, the association relationship between the folding angle and the secondary camera may be default by the system or set by the user.
For example, the association relationship between the folding angle and the auxiliary camera may be: a folding angle of 30 degrees is associated with the telephoto camera, a folding angle of 60 degrees with the depth-of-field camera, and a folding angle of 90 degrees with the 2x zoom camera; or, a folding angle greater than 30 degrees and less than 45 degrees is associated with the telephoto camera, and a folding angle greater than 60 degrees and less than 75 degrees is associated with the telephoto camera. The association relationship between preset folding angles and auxiliary cameras can be set flexibly according to actual requirements, and the embodiment of the present application is not limited thereto.
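A minimal sketch of resolving a detected folding angle to an auxiliary camera via such preset rules might look as follows; the specific angles and camera names mirror the examples above but are otherwise illustrative.

```python
# Hedged sketch of the folding-angle-to-camera association; exact-angle
# rules and range rules are both represented as inclusive (low, high)
# intervals. All values are illustrative.
FOLD_ANGLE_RULES = [
    (30, 30, "telephoto"),       # exact angle: 30 degrees
    (60, 60, "depth_of_field"),  # exact angle: 60 degrees
    (90, 90, "2x_zoom"),         # exact angle: 90 degrees
    (30, 45, "telephoto"),       # range rule: 30 to 45 degrees
]

def camera_for_fold_angle(angle):
    """Return the first auxiliary camera whose rule covers the angle,
    or None if no rule matches."""
    for low, high, camera in FOLD_ANGLE_RULES:
        if low <= angle <= high:
            return camera
    return None
```

Because rules are checked in order, an exact-angle rule can take precedence over an overlapping range rule simply by being listed first.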
For example, in the case that the electronic device is a folding-screen or flexible-screen electronic device, the shooting device receives a folding input on the screen from the user, and when the shooting device detects that the foldable screen is folded and the folding angle is 90°, the shooting device determines the 2x zoom camera as the target auxiliary camera.
Therefore, the user can quickly trigger the shooting device to determine the target auxiliary camera through the input according to actual requirements, and therefore the user can conveniently check the preview image shot by the target auxiliary camera.
Optionally, in this embodiment of the application, in a case that the target preview area includes at least two preview areas, the target secondary camera includes at least two secondary cameras.
Optionally, the step 202 may include the following step 202b1 or step 202b2:
step 202b1: the shooting device responds to the first input, a main shooting preview interface of the main camera is displayed in a first screen area of the display screen, and auxiliary shooting preview interfaces of at least two auxiliary cameras are displayed in at least two screen areas of the display screen.
Step 202b2: the shooting device responds to the first input, a main shooting preview interface of the main camera is displayed on the first screen, and auxiliary shooting preview interfaces of at least two auxiliary cameras are displayed in at least two screen areas of the second screen.
Illustratively, one secondary camera corresponds to one target preview area.
In some possible implementation manners, for a large-screen electronic device, the first screen area may be any screen area of a display screen of the electronic device, and the at least two screen areas are any screen areas of the display screen except for the first screen area.
In some embodiments of the present application, as shown in fig. 4, taking the main camera as a wide-angle camera and the target auxiliary cameras as a depth-of-field camera and a telephoto camera as examples, when a group portrait is shot by the wide-angle camera, a main shooting preview interface 41 is displayed on the left screen of the electronic device, and the main shooting preview interface 41 includes the group portrait and a distant mountain peak collected by the wide-angle camera. After the user slides on the main shooting preview interface 41 to select a single portrait 42a in the group portrait and the distant mountain peak 42b, the shooting device displays, in the upper half screen area of the right screen, an auxiliary shooting preview interface 43 in which the telephoto camera shoots the distant mountain peak 42b, and displays, in the lower half screen area of the right screen, an auxiliary shooting preview interface 44 in which the depth-of-field camera shoots the single portrait 42a.
In other possible implementations, for the foldable-screen electronic device, the first screen is one screen of the foldable screen, and the second screen is another screen of the foldable screen.
In this way, the shooting device can display the main shooting preview interface of the main camera and the auxiliary shooting preview interfaces of multiple target auxiliary cameras on the display screen at the same time, so that the user can preview images collected by multiple cameras in the same interface. This makes it convenient for the user to check the shooting effect in real time, and allows details of the same scene that the user may care about later to be shown more fully, thereby presenting more information about the objects in the scene within a limited display area.
Optionally, in this embodiment of the application, the first input includes a folding input for folding the screen, where the folding input is used to trigger a secondary shooting preview interface for displaying the target secondary camera.
Optionally, the step 202 may include the following step 202c:
step 202c: and the shooting device responds to the first input, and displays an auxiliary shooting preview interface of the target auxiliary camera under the condition that the folding angle of the folding input is a preset angle.
For example, the preset angle may be set by default or autonomously by the user.
For example, the preset angle may be a single angle value, such as 30°, 60°, or 90°; or the preset angle may be an angle range, for example from 30° to 60° (including any angle value in between), or from 60° to 90° (including any angle value in between).
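The two preset forms described above (a single value or a range) can be checked with one small helper; this is an illustrative sketch, not part of the patent.

```python
def angle_matches_preset(angle, preset):
    """Check a folding angle against a preset that is either a single
    value (e.g. 60) or an inclusive (low, high) range such as (30, 60)."""
    if isinstance(preset, tuple):
        low, high = preset
        return low <= angle <= high
    return angle == preset
```

The shooting device would call such a check as the fold angle changes and display the auxiliary shooting preview interface once it returns true.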
Illustratively, when a user performs an input of folding a foldable screen of the electronic device, the photographing apparatus detects a current size of a folding angle and displays an auxiliary photographing preview interface of the target auxiliary camera if the folding angle is a preset angle.
For example, taking the target auxiliary camera as the telephoto camera and the preset angle as 60 degrees, as the user folds the electronic device, the shooting device displays the shooting preview interface of the telephoto camera when it detects that the current folding angle of the foldable screen reaches 60 degrees.
Therefore, the user can quickly trigger and display the target auxiliary shooting preview interface through the input of the folding screen, and the operation efficiency of the user is improved.
Optionally, in this embodiment of the present application, the first main shooting file and the auxiliary shooting file include images or videos.
Optionally, after step 203, the shooting apparatus performs a synthesizing process on the first main shot file and the auxiliary shot file to obtain a synthesized file, and how to obtain the synthesized file includes the following four cases:
and when the first main shooting file is a main shooting image and the auxiliary shooting file of the shooting device is an auxiliary shooting image, carrying out image fusion on the main shooting image and the auxiliary shooting image, and outputting a fused image.
And under the condition that the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting video, the shooting device performs video fusion on the main shooting video and the auxiliary shooting video and outputs a fused video.
And under the condition that the first main shooting file is a main shooting image and the auxiliary shooting file is an auxiliary shooting video, the shooting device performs video fusion on the main shooting image and the auxiliary shooting video and outputs a fused video.
And under the condition that the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting image, the shooting device performs video fusion on the main shooting video and the auxiliary shooting image and outputs a fused video.
Exemplarily, when the first main shooting file is a main shooting image and the auxiliary shooting file is an auxiliary shooting image, the shooting device determines a fusion area in the main shooting image and performs fusion processing on the fusion area of the auxiliary shooting image in the main shooting image to obtain a fused image, wherein the auxiliary shooting image is displayed in the fusion area in the fused image; alternatively, the image capturing apparatus may obtain an image of a main subject in the auxiliary image based on the auxiliary image, and then perform image fusion on a fusion region of the image of the main subject in the main image to obtain a fused image.
Illustratively, the above-described fusion region may be a region in a background image of the main-shot image.
For example, assuming that the main shot image is a group photo and the auxiliary shot image is a single image, the single image is fused in the upper right corner area where no portrait is displayed in the group photo, so as to obtain a fused image with the single image displayed in the upper right corner.
For example, when the auxiliary shot image is an image including the moon, the shooting device identifies the outline of the moon in the image through an edge detection algorithm, then obtains an image only displaying the moon based on the outline of the moon, and fuses the image in the fusion area in the main shot image.
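The image-into-fusion-area operation can be sketched with NumPy array slicing; the fixed top-right fusion area and all names below are assumptions for illustration.

```python
import numpy as np

def fuse_images(main_img, aux_img):
    """Paste the auxiliary image into a fusion area of the main image;
    here the fusion area is fixed to the top-right corner."""
    fused = main_img.copy()
    h, w = aux_img.shape[:2]
    fused[0:h, -w:] = aux_img  # overwrite the top-right fusion area
    return fused

main = np.zeros((100, 200, 3), dtype=np.uint8)   # stand-in main shot image
aux = np.full((40, 60, 3), 255, dtype=np.uint8)  # stand-in auxiliary image
fused = fuse_images(main, aux)
```

A production implementation would instead choose a background region free of faces (as in the group-photo example above) and blend edges rather than hard-paste.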
Illustratively, when the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting video, the shooting device determines a fusion area of each image frame of the main shooting video, and performs fusion processing on each frame of the auxiliary shooting video in the fusion area of each image frame of the main shooting video to obtain a fused video, wherein one image frame of the auxiliary shooting video is displayed in the fusion area of each image frame of the fused video.
Further, in the case where the main shooting video and the auxiliary shooting video include different numbers of image frames, the shooting device may intercept the same number of video frames from the main shooting video and the auxiliary shooting video for fusion.
In some possible embodiments, assuming that the main shooting video and the auxiliary shooting video each include k image frames, the shooting device fuses the 1st frame of the auxiliary shooting video into the fusion area of the 1st frame of the main shooting video to obtain the fused 1st image frame, fuses the 2nd frame of the auxiliary shooting video into the fusion area of the 2nd frame of the main shooting video to obtain the fused 2nd image frame, fuses the 3rd frame of the auxiliary shooting video into the fusion area of the 3rd frame of the main shooting video to obtain the fused 3rd image frame, and so on, until the k-th frame of the auxiliary shooting video is fused into the fusion area of the k-th frame of the main shooting video to obtain the fused k-th image frame; finally, the k fused image frames are combined into the fused video.
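The frame-by-frame loop just described can be sketched as follows; the top-right fusion area and the frame representation are illustrative assumptions.

```python
import numpy as np

def fuse_videos(main_frames, aux_frames):
    """Fuse frame i of the auxiliary video into the fusion area (here the
    top-right corner) of frame i of the main video. zip() also truncates
    both sequences to the shorter length, matching the equal-frame-count
    interception described above."""
    fused = []
    for main_f, aux_f in zip(main_frames, aux_frames):
        out = main_f.copy()
        h, w = aux_f.shape[:2]
        out[0:h, -w:] = aux_f
        fused.append(out)
    return fused

main_video = [np.zeros((10, 20), dtype=np.uint8) for _ in range(3)]
aux_video = [np.full((4, 5), 9, dtype=np.uint8) for _ in range(3)]
fused_video = fuse_videos(main_video, aux_video)
```

In practice the fused frame list would then be encoded back into a video container at the main video's frame rate.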
For example, if the main shooting file is a video including multiple people (a multi-person video for short) and the auxiliary shooting file is a video including a single person (a single-person video for short), image frames of the multi-person video and the single-person video are intercepted, and each intercepted image frame of the single-person video is fused or spliced into the fusion area of the corresponding intercepted image frame of the multi-person video, thereby obtaining a fused video containing both single-person and multi-person video information.
For example, when the first main shooting file is a main shooting image and the auxiliary shooting file is an auxiliary shooting video, the shooting device may copy image frames of the main shooting image to obtain a corresponding main shooting video, where the main shooting video has the same number of image frames as the auxiliary shooting video and every image frame of the main shooting video is identical; it then determines the fusion area of the main shooting image, and fuses each image frame of the auxiliary shooting video into the corresponding image frame of the main shooting video to obtain the fused video.
In some possible embodiments, assuming that the auxiliary shooting video includes m image frames, the shooting device performs image frame copying on the main shooting image to obtain a main shooting video including m identical image frames, then the shooting device performs fusion processing on the 1 st frame of the auxiliary shooting video in the fusion area of the 1 st frame of the main shooting video to obtain a fused 1 st image frame, performs fusion processing on the 2 nd frame of the auxiliary shooting video in the fusion area of the 2 nd frame of the main shooting video to obtain a fused 2 nd image frame, performs fusion processing on the 3 rd frame of the auxiliary shooting video in the fusion area of the 3 rd frame of the main shooting video to obtain a fused 3 rd image frame, and so on until the m th frame of the auxiliary shooting video is subjected to fusion processing in the fusion area of the m th frame of the main shooting video to obtain a fused m th image frame, and finally synthesizes the fused m image frames into a fusion video.
For example, when the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting image, the shooting device may copy image frames of the auxiliary shooting image to obtain a corresponding auxiliary shooting video, where the auxiliary shooting video has the same number of image frames as the main shooting video and every image frame of the auxiliary shooting video is identical; it then determines the fusion area of each image frame of the main shooting video, and fuses each image frame of the auxiliary shooting video into the fusion area of the corresponding image frame of the main shooting video to obtain the fused video.
In some possible embodiments, assuming that the main shooting video includes n image frames, the shooting device performs image frame copying on the auxiliary shooting image to obtain an auxiliary shooting video including n identical image frames, then the shooting device performs fusion processing on the 1 st frame of the auxiliary shooting video in the fusion area of the 1 st frame of the main shooting video to obtain a fused 1 st image frame, performs fusion processing on the 2 nd frame of the auxiliary shooting video in the fusion area of the 2 nd frame of the main shooting video to obtain a fused 2 nd image frame, performs fusion processing on the 3 rd frame of the auxiliary shooting video in the fusion area of the 3 rd frame of the main shooting video to obtain a fused 3 rd image frame, and so on until the n th frame of the auxiliary shooting video is subjected to fusion processing in the fusion area of the n th frame of the main shooting video to obtain a fused n th image frame, and finally synthesizes the fused n image frames into the fused video.
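Both mixed image/video cases above reduce to video-with-video fusion once the still image is replicated into a pseudo-video; a minimal sketch:

```python
import numpy as np

def replicate_frames(still_image, frame_count):
    """Turn a still image into a pseudo-video of identical frames, so the
    image-and-video fusion cases reduce to per-frame video fusion."""
    return [still_image.copy() for _ in range(frame_count)]

still = np.ones((8, 8), dtype=np.uint8)  # stand-in for the shot image
pseudo_video = replicate_frames(still, 5)
```

Each frame is an independent copy so that a later per-frame fusion step can modify one frame without affecting the others.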
Optionally, after the fused image or the fused video is obtained, the fused image or the fused video may be stored, or the fused image or the fused video may be displayed.
Optionally, in this embodiment of the present application, the step 202 may include the following step 202a1 or step 202a2:
step 202a1: the shooting device responds to the first input, a main shooting preview interface of the main camera is displayed in a first screen area of the display screen, and an auxiliary shooting preview interface of the target auxiliary camera is displayed in a second screen area of the display screen.
Step 202a2: the shooting device responds to the first input, displays a main shooting preview interface of the main camera on the first screen, and displays an auxiliary shooting preview interface of the target auxiliary camera on the second screen.
Illustratively, the first screen region is different from the second screen region. For example, the first screen area may be a top half screen area of the display screen, and the second screen area may be a bottom half screen area of the display screen; alternatively, the first screen area may be a left half screen area of the display screen, and the second screen area may be a right half screen area of the display screen.
Illustratively, the first screen and the second screen may be two screens of a folding screen electronic device, or two screen regions of a flexible screen electronic device.
It should be noted that, in the embodiment of the present application, the first screen and the second screen of the folding-screen electronic device may be referred to as a left screen and a right screen, the left half screen area of the large-screen or flexible-screen electronic device may be referred to as a left screen, and the right half screen area may be referred to as a right screen.
In some embodiments of the present application, as shown in fig. 5, a main camera is a wide-angle camera, and a target auxiliary camera is a depth-of-field camera. A shooting preview interface 51 of a wide-angle camera is displayed in a left screen of the electronic device, a preview image of a group portrait acquired by the wide-angle camera is displayed on the shooting preview interface 51, and after a user selects an area where a specific portrait A in the shooting preview interface 51 is located, a preview image of a person corresponding to the specific portrait A in the group portrait acquired by a depth-of-field camera in a shooting preview interface 52 is displayed in a right screen.
In some embodiments of the present application, as shown in fig. 6, the main camera is a wide-angle camera and the target auxiliary camera is a telephoto camera. A shooting preview interface 61 of the wide-angle camera is displayed on the left screen of the electronic device, showing a preview image of a starry sky collected by the wide-angle camera; after the user selects the area where the moon 62 in the shooting preview interface 61 is located, a preview image of the moon collected by the telephoto camera is displayed in a shooting preview interface 63 on the right screen.
It should be noted that, in the embodiment corresponding to fig. 5, the electronic device may be a large-screen electronic device or a folding-screen electronic device, and in the case that the electronic device is a large-screen electronic device, the left screen may be a left half screen of a display screen of the large-screen electronic device, and the right screen may be a right half screen of the display screen of the large-screen electronic device; under the condition that the electronic equipment is the folding screen electronic equipment, the left screen can be a first screen of the folding screen electronic equipment, and the right screen can be a second screen of the folding screen electronic equipment.
According to the shooting method provided by the embodiment of the application, multiple cameras are used for preview display and auxiliary shooting according to the characteristics of the screen. Through the interactive combination of multiple shooting screens, the overall view of the scene is preserved while a preview image of a specific shooting object is displayed according to the user's needs, making it convenient for the user to check preview images with multiple different shooting effects.
Further optionally, in this embodiment, after the step 203, the shooting method provided in this embodiment further includes the following step 205 or step 206:
step 205: under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in a first screen area and a second screen area of the display screen, the shooting device updates the auxiliary shooting preview interface to a first sub-area of the second screen area for displaying, and displays a first main shooting file and an auxiliary shooting file in a second sub-area of the second screen area.
Step 206: under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed on the first screen and the second screen, the shooting device updates the auxiliary shooting preview interface to the first screen area of the second screen for displaying, and displays the first main shooting file and the auxiliary shooting file in the second screen area of the second screen.
For example, the first sub area and the second sub area may be different screen areas of the second screen area.
In one example, under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in a first screen area and a second screen area of the display screen, the shooting device receives shooting input of a user to a shooting control of the main shooting preview interface, and the shooting device respectively controls the main camera and the target auxiliary camera to shoot, so that a first main shooting file and an auxiliary shooting file are obtained.
In another example, when the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in a first screen area and a second screen area of the display screen, the shooting device receives a first shooting input of a user to a shooting control of the main shooting preview interface, the shooting device controls the main camera to shoot to obtain a first main shooting file, then receives a second shooting input of the user to the shooting control of the main shooting preview interface, and the shooting device controls the target auxiliary camera to shoot to obtain an auxiliary shooting file.
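The two input-handling variants above can be sketched as follows. This is a minimal illustration only: the `CameraController` class, its `capture` method, and the returned file names are hypothetical stand-ins, not part of the disclosed implementation.

```python
# Minimal sketch of the two capture variants described above.
# CameraController and capture() are hypothetical stand-ins for
# whatever camera API the shooting device actually uses.

class CameraController:
    def __init__(self, name):
        self.name = name

    def capture(self):
        # A real implementation would trigger the hardware pipeline;
        # here we just return a labeled placeholder file name.
        return f"{self.name}_shot.jpg"

def capture_simultaneous(main_cam, aux_cam):
    """One shooting input drives both cameras (first example)."""
    return main_cam.capture(), aux_cam.capture()

def capture_sequential(main_cam, aux_cam, inputs):
    """Each shooting input drives the next camera in turn (second example)."""
    cams = [main_cam, aux_cam]
    files = []
    for cam, _event in zip(cams, inputs):
        files.append(cam.capture())
    return files

main_cam = CameraController("main")
aux_cam = CameraController("aux")
print(capture_simultaneous(main_cam, aux_cam))
print(capture_sequential(main_cam, aux_cam, ["tap1", "tap2"]))
```

In the first variant a single input yields both files at once; in the second, each successive input produces the next file, which matches the two examples above.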
In some embodiments of the present application, in combination with the above embodiments, a shooting preview interface 51 of a wide-angle camera is displayed in the left screen of an electronic device, and the shooting preview interface 51 displays a preview image of a group portrait collected by the wide-angle camera; the right screen displays a preview image, collected by a depth-of-field camera, of the person corresponding to a specific portrait a in the group portrait. After the user clicks a shooting control 53, as shown in fig. 7 (a), the shooting device controls the wide-angle camera to shoot the group portrait to obtain an image 74 including the group portrait, displays the preview image of the depth-of-field camera in an upper half area 75a of the right screen, and displays the image 74 in a right screen area 75b. Then, after the user clicks the shooting control 53 again, as shown in fig. 7 (b), the shooting device controls the depth-of-field camera to shoot the specific person in the group portrait scene to obtain a 3D depth-of-field image 75 including the specific portrait, and displays the group portrait image 74 and the 3D depth-of-field image 75 in the right screen area 75b, respectively.
In this manner, by displaying the preview images of the panoramic image and the close-up image separately in different screen areas, the user can intuitively view the panoramic image and the close-up image.
In some embodiments of the present application, in combination with the above embodiments of the present application, the left screen of the electronic device displays a shooting preview interface of a full-scene picture captured by the wide-angle camera, and the right screen displays a preview interface of the main camera. The user selects two areas of interest in the shooting preview interface of the left screen. Following the order of selection, the shooting device first calls the optimal camera for the first selected area of interest, switches that camera on to shoot, and displays the preview image it captures in the preview interface of the right screen. The user then clicks the left-screen shooting control to trigger the wide-angle camera of the left screen to shoot, and a preview of the captured picture is displayed in the area below the right screen. Next, the user clicks the left-screen shooting button again to trigger the optimal camera of the right screen to shoot; at this point two picture previews are displayed below the right screen: one is the global picture shot by the wide-angle camera, and the other is a close-up of the first area of interest selected by the user. The shooting device then switches the right-screen shooting preview interface to display the preview image of the optimal camera for the second selected area. The user continues clicking the shooting button to trigger that camera to shoot, obtaining a close-up of the second area selected by the user, which is displayed on the right screen.
According to the shooting method provided by the embodiment of the application, a plurality of cameras are respectively used for preview display and auxiliary shooting according to the characteristics of the screen. Through the interactive combination of multiple shooting screens, the overall view of the scene is preserved while a specific object in the scene is selected according to the needs of the user; depending on the specific scene (close shot or long shot), the most appropriate lens (telephoto or macro) is called through an algorithm to give a close-up of the selected object, a special effect can be added, or any customized operation can be performed. For example, when shooting a night scene, the left screen captures the whole starry-sky picture, while the right screen can capture the moon at night, calling a periscope lens to shoot a super moon. Alternatively, on an interface shooting a busy street with a stream of traffic, a specific scene can be set on the right screen to shoot a segment of dynamic video; shooting is completed through the left and right screens and the results are synthesized into one image. When the user clicks an object in the image, supplementary display can be performed according to the shooting details of the right screen, so that the details of the object the user wants to focus on are displayed as fully as possible later in the scene of interest in the effective image.
Optionally, in this embodiment of the present application, after the step 203, the shooting method provided in this embodiment of the present application further includes the following steps D1 to D4:
step D1: the photographing apparatus receives a third input from the user.
Step D2: the shooting device responds to the third input, controls the main camera to shoot, and outputs a second main shooting file.
And D3: and under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in a first screen area and a second screen area of the display screen, displaying a second main shooting file in a second sub-area of the second screen area.
Step D4: and under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed on the first screen and the second screen, displaying a second main shooting file in a second screen area of the second screen.
For example, the third input may be any feasible input such as a touch input, a voice input, or a gesture input of the user.
Illustratively, the second main shot file described above may be a main shot image or a main shot video.
Illustratively, after the first main shooting file and the auxiliary shooting file are obtained through shooting, the shooting device receives a third input of a shooting button from a user, and controls the main camera to shoot again to obtain a second main shooting file.
For example, for a large-screen electronic device, the shooting device may display the second main shooting file in the second sub-area of the second screen area of the electronic device in a partitioned manner, or display the second main shooting file in the second sub-area in a superimposed manner.
For example, for the folding-screen electronic device, the shooting device may display the second main shooting file in a second screen area of a second screen of the electronic device in a partitioned manner, or display the second main shooting file in a second screen area of the second screen in an overlapped manner.
For example, after the first main shooting file is obtained by shooting, the user can adjust the shooting angle of view of the camera and continue shooting to obtain a second main shooting file. For example, when a wide-angle camera is used to capture a group photo of persons, a user may first obtain a group photo and a close-up of a person, and then adjust the shooting angle of the wide-angle camera to obtain a group photo at a different shooting angle, thereby obtaining a plurality of main shot images at different shooting angles. Therefore, the user can trigger the main camera to shoot a plurality of main shooting files and display the shot main shooting files, which is convenient for the user to view.
Optionally, in this embodiment of the present application, after the step 203, the shooting method provided in this embodiment of the present application further includes the following steps E1 and E2, or includes the following steps E1 and E3:
step E1: the photographing apparatus receives a fourth input from the user.
Wherein the fourth input is an input for viewing the main shot file.
Step E2: and the shooting device responds to the fourth input, displays the main shooting file in the first screen area of the display screen, and displays the auxiliary shooting file in the second screen area of the display screen.
Step E3: the photographing apparatus displays the main photographing file on the first screen and the auxiliary photographing file on the second screen in response to the fourth input.
The fourth input is, for example, any feasible input such as a touch input, a voice input, and a gesture input of a user, which is not limited in this embodiment of the present application.
Illustratively, the fourth input is a user input at the gallery interface.
In some embodiments of the present application, as shown in fig. 8 (a), take the case where the main shot file is a main shot image and the auxiliary shot file is an auxiliary shot image. With the image identifier 82 of the main shot image displayed on the gallery interface, the user long-presses the identifier 82; as shown in fig. 8 (b), the shooting device displays the main shot image 81 on a screen 83 of the electronic device, and displays an auxiliary shot image 85 on a screen 84 of the electronic device, wherein the image area in which the portrait 81a in the main shot image 81 is located corresponds to the auxiliary shot image 85, and the auxiliary shot image 85 is a close-up picture of the person corresponding to the portrait 81a.
Therefore, when the user views the image, the shooting device can display the close-up image corresponding to the target image area in the image on the second screen and display the image on the first screen, so that the user can conveniently and quickly view the image content in a part of image area in the image through the displayed close-up image, and view the displayed image and the close-up image on different screens simultaneously, and the display flexibility is greatly improved.
Optionally, in this embodiment of the present application, after the step 203, the shooting method provided in this embodiment of the present application further includes the following steps F1 to F4:
step F1: the photographing apparatus receives a fifth input from the user.
Step F2: the photographing apparatus displays a main photographing file in response to a fifth input.
Step F3: the photographing apparatus receives a sixth input of the user to the target image area in the main photographing file.
Step F4: the photographing apparatus displays the subsidiary photographing file in response to a sixth input.
The target image area is an area corresponding to the target preview area in the main shooting file.
For example, in the case where the main shot file is a main shot image shot by the main camera, the target image area may be the image area corresponding to the target preview area in the shooting preview interface of the main camera. For example, if the target preview area in the shooting preview interface of the main camera is the area of the portrait a in the group portraits, the target image area in the main shot image shot by the main camera is the image area where the portrait a is located.
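The correspondence between the target preview area and the target image area can be illustrated as a coordinate scaling from the preview resolution to the full image resolution. The (x, y, w, h) coordinate convention and the resolutions used below are assumptions for illustration, not values taken from the disclosure.

```python
# Illustrative sketch: map a target preview area (in preview-interface
# coordinates) to the corresponding target image area in the full-size
# main shot image by scaling each coordinate.

def preview_area_to_image_area(area, preview_size, image_size):
    """area is (x, y, w, h); sizes are (width, height) pairs."""
    pw, ph = preview_size
    iw, ih = image_size
    sx, sy = iw / pw, ih / ph
    x, y, w, h = area
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# A portrait occupying (100, 50, 200, 300) in a 1080x1920 preview maps
# to the doubled region in a 2160x3840 full-size image.
print(preview_area_to_image_area((100, 50, 200, 300), (1080, 1920), (2160, 3840)))
# -> (200, 100, 400, 600)
```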
For example, the fifth input is any input with feasibility, such as a touch input, a voice input, a gesture input, and the like of a user, which is not limited in this embodiment of the present application.
For example, the sixth input is any feasible input such as a touch input, a voice input, and a gesture input of a user, which is not limited in this embodiment of the present application.
In some embodiments of the present application, as shown in fig. 9 (a), take the case where the main shot file is a main shot image and the auxiliary shot file is an auxiliary shot image. With the foldable screen of the electronic device in the unfolded state, a main shot image 91 is displayed on the screen, and the main shot image 91 includes the person images of a plurality of persons. The user clicks the image area in which the person image 91a is located in the main shot image 91, and, as shown in fig. 9 (b), the shooting device displays on the screen an auxiliary shot image 93 corresponding to that image area, the auxiliary shot image 93 being a close-up picture of the person corresponding to the person image 91a.
It should be noted that the dashed lines in the screens indicate the folding axis of the electronic device; for example, in some implementations the foldable screen can be folded along the folding axis shown by the dashed line to obtain two screens on the left and right sides of the axis, and the dashed line is not actually displayed on the screen. In the above example the main shot image 91 is displayed in full screen with the folding screen unfolded; alternatively, the main shot image 91 may be displayed only on the screen on the left or right side of the folding axis.
Therefore, when the user views the full-frame image, the close-up image of the image content in the partial image area needing attention in the full-frame image can be conveniently and quickly called out to be displayed, and the image content needing attention can be conveniently and carefully viewed through the close-up image.
According to the shooting method provided by the embodiment of the application, the execution main body can be a shooting device. The embodiment of the present application takes an example in which a shooting device executes a shooting method, and the shooting device provided in the embodiment of the present application is described.
The embodiment of the application provides a shooting device. As shown in fig. 10, the device includes a receiving module 1001, a display module 1002 and a control module 1003, wherein:
the receiving module 1001 is configured to receive a first input of a user when a main shooting preview interface of a main camera is displayed;
the display module 1002 is configured to display, in response to the first input received by the receiving module 1001, an auxiliary shooting preview interface of a target auxiliary camera, where a preview image displayed on the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface;
the control module 1003 is configured to control the main camera and the target auxiliary camera to perform shooting, and output a first main shooting file and an auxiliary shooting file.
Optionally, in an embodiment of the present application, the apparatus further includes: a determination module;
the determining module is used for determining a target preview area before the display module displays a target auxiliary camera auxiliary shooting preview interface;
the determining module is further used for determining the target auxiliary camera before the display module displays the auxiliary shooting preview interface of the target auxiliary camera;
the preview image displayed on the auxiliary shooting preview interface is a preview image of the target preview area acquired by the target auxiliary camera; the target preview area is determined according to the first input, or the target preview area is an area where a recognized preset type of shooting object is located.
Optionally, in an embodiment of the present application, the determining module is specifically configured to determine, when a first input includes an input of a main shooting preview interface by a user, a target preview area according to input information of the first input, where the input information includes an input position or an input trajectory;
the determining module is specifically configured to perform object recognition on the main shooting preview interface when the first input includes an input for triggering a display of an auxiliary shooting preview interface of a target auxiliary camera, and determine an area where the shooting object is located as a target preview area when a preset type of shooting object is recognized.
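The two ways of determining the target preview area described above can be sketched as follows. The fixed-size region centred on the touch position, the detection result format, and the preset object types are illustrative assumptions; `area_from_recognition` stands in for whatever recognition algorithm the device actually runs.

```python
# Sketch of the two determination branches: from the user's input
# position, or from a recognized object of a preset type.

def area_from_input(position, half=100):
    """Centre a fixed-size (x, y, w, h) region on the touch position."""
    x, y = position
    return (x - half, y - half, 2 * half, 2 * half)

def area_from_recognition(detections, preset_types=("face", "moon")):
    """Pick the region of the first detected object of a preset type.

    detections is a list of (object_type, (x, y, w, h)) pairs, a
    hypothetical output format for the recognition step.
    """
    for obj_type, region in detections:
        if obj_type in preset_types:
            return region
    return None  # no preset-type object recognized

print(area_from_input((500, 300)))
print(area_from_recognition([("tree", (0, 0, 50, 50)),
                             ("face", (120, 80, 60, 60))]))
```

The first branch corresponds to a first input that touches the main shooting preview interface directly; the second corresponds to a first input that merely triggers display of the auxiliary preview, after which recognition supplies the area.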
Optionally, in this embodiment of the present application,
the determining module is specifically configured to acquire an object type of the photographic object in the target preview area;
the determining module is specifically configured to determine the target auxiliary camera matched with the acquired object type.
Optionally, in this embodiment of the present application,
the determining module is specifically configured to determine, when the first input includes a folding input of a folding screen, an auxiliary camera associated with a folding angle of the folding input as a target auxiliary camera according to an association relationship between a preset folding angle and the auxiliary camera.
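The two selection rules for the target auxiliary camera, by object type and by fold angle, can be sketched as a pair of lookups. The mapping tables below are illustrative assumptions; the disclosure does not specify concrete object-type or fold-angle associations.

```python
# Sketch of auxiliary-camera selection: by the object type in the
# target preview area, or by the fold angle of a folding input.
# Both tables are hypothetical examples.

TYPE_TO_CAMERA = {
    "face": "depth_of_field",
    "moon": "periscope_telephoto",
    "flower": "macro",
}

# Preset (min_angle, max_angle) -> associated auxiliary camera.
ANGLE_TO_CAMERA = [
    ((0, 60), "macro"),
    ((60, 120), "telephoto"),
    ((120, 180), "wide_angle"),
]

def camera_for_object(obj_type, default="main"):
    """Match the auxiliary camera to the recognized object type."""
    return TYPE_TO_CAMERA.get(obj_type, default)

def camera_for_fold_angle(angle):
    """Match the auxiliary camera to the fold angle of a folding input."""
    for (lo, hi), cam in ANGLE_TO_CAMERA:
        if lo <= angle < hi:
            return cam
    return None  # angle outside all preset associations

print(camera_for_object("moon"))
print(camera_for_fold_angle(90))
```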
Optionally, in this embodiment of the present application, in a case that the target preview area includes at least two preview areas, the target secondary camera includes at least two secondary cameras;
the display module is specifically configured to, in response to the first input received by the receiving module, display a main shooting preview interface of the main camera in a first screen area of a display screen, and display auxiliary shooting preview interfaces of the at least two auxiliary cameras in at least two screen areas of the display screen;
or, the display module is specifically configured to display, in response to the first input received by the receiving module, a main shooting preview interface of the main camera on a first screen, and display auxiliary shooting preview interfaces of the at least two auxiliary cameras in at least two screen areas of a second screen.
Optionally, in an embodiment of the present application, the first input includes a folding input to fold a screen; the folding input is used for triggering an auxiliary shooting preview interface for displaying a target auxiliary camera;
the display module is specifically configured to, in response to the first input received by the receiving module, display an auxiliary shooting preview interface of the target auxiliary camera when the folding angle of the folding input is a preset angle.
Optionally, in this embodiment of the present application, the first main shot file and the auxiliary shot file include images or videos; the device further comprises: a processing module;
the processing module is used for carrying out image fusion on the main shooting image and the auxiliary shooting image under the condition that the first main shooting file is a main shooting image and the auxiliary shooting file is an auxiliary shooting image, and outputting a fused image;
the processing module is further configured to perform video fusion on the main shooting video and the auxiliary shooting video and output a fused video when the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting video;
the processing module is further configured to perform video fusion on the main shooting image and the auxiliary shooting video and output a fused video when the first main shooting file is a main shooting image and the auxiliary shooting file is an auxiliary shooting video;
the processing module is further configured to perform video fusion on the main shooting video and the auxiliary shooting image and output a fused video when the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting image.
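The four fusion cases above reduce to a simple output-type rule: the fused result is an image only when both inputs are images, and a video whenever either input is a video. A minimal sketch follows, with file extensions standing in for the file types and the returned names standing in for the unspecified fusion algorithms:

```python
# Output-type dispatch for the four fusion cases described above.
# The actual image/video fusion algorithms are not specified by the
# disclosure; placeholder file names represent their outputs.

def fuse(main_file, aux_file):
    main_is_video = main_file.endswith(".mp4")
    aux_is_video = aux_file.endswith(".mp4")
    if not main_is_video and not aux_is_video:
        return "fused_image.jpg"   # image + image -> fused image
    return "fused_video.mp4"       # any video involved -> fused video

print(fuse("main.jpg", "aux.jpg"))  # image + image case
print(fuse("main.mp4", "aux.jpg"))  # video + image case
```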
Optionally, in this embodiment of the present application,
the display module is specifically configured to display a main shooting preview interface of the main camera in a first screen area of a display screen in response to the first input received by the receiving module, and display an auxiliary shooting preview interface of the target auxiliary camera in a second screen area of the display screen;
or the display module is specifically configured to display, in response to the first input, a main shooting preview interface of the main camera on a first screen, and display an auxiliary shooting preview interface of the target auxiliary camera on a second screen.
Optionally, in this embodiment of the present application,
the display module is further configured to update the auxiliary shooting preview interface to a first sub-area of a second screen area for display under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in the first screen area and the second screen area of a display screen, and display the first main shooting file and the auxiliary shooting file in a second sub-area of the second screen area;
the display module is further used for updating the auxiliary shooting preview interface to the first screen area of the second screen to be displayed under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed on the first screen and the second screen, and displaying the first main shooting file and the auxiliary shooting file in the second screen area.
Optionally, in this embodiment of the present application,
the receiving module is further used for receiving a third input of the user;
the control module is further configured to control the main camera to perform shooting in response to the third input received by the receiving module, and output a second main shooting file;
the display module is further configured to display the second main shooting file in a second sub-area of a second screen area under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in the first screen area and the second screen area of a display screen;
or the display module is further configured to display the second main shooting file in a second screen area of the second screen under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed on the first screen and the second screen.
Optionally, in this embodiment of the present application,
the receiving module is further configured to receive a fourth input of the user, where the fourth input is an input for viewing the main shot file;
the display module is further configured to display the main shooting file in a first screen area of a display screen and display the auxiliary shooting file in a second screen area of the display screen in response to the fourth input received by the receiving module;
or, the display module is further configured to display the main shooting file on a first screen and display the auxiliary shooting file on a second screen in response to the fourth input.
Optionally, in this embodiment of the present application,
the receiving module is further used for receiving a fifth input of the user;
the display module is further configured to display the main shot file in response to the fifth input received by the receiving module;
the receiving module is further configured to receive a sixth input of the user to the target image area in the main shooting file;
the display module is further configured to display the subsidiary shooting file in response to the sixth input received by the receiving module;
and the target image area is an area corresponding to the target preview area in the main shot file.
In the shooting device provided by the embodiment of the application, under the condition that a main shooting preview interface of a main camera is displayed, the shooting device receives a first input of a user, displays an auxiliary shooting preview interface of a target auxiliary camera, wherein a preview image displayed on the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface, controls the main camera and the target auxiliary camera to shoot, and outputs a first main shooting file and an auxiliary shooting file. According to the method, the main shooting preview interface of the main camera and the auxiliary shooting preview interface of the target auxiliary camera are displayed, so that a user can quickly check images shot by different cameras and with different shooting effects.
The shooting device in the embodiment of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a Mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not specifically limited thereto.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The shooting device provided in the embodiment of the present application can implement each process implemented in the method embodiments of fig. 1 to 9, and is not described here again to avoid repetition.
Optionally, as shown in fig. 11, an electronic device 2000 is further provided in an embodiment of the present application, and includes a processor 2001 and a memory 2002, where the memory 2002 stores a program or an instruction that can be executed on the processor 2001, and when the program or the instruction is executed by the processor 2001, the steps of the foregoing shooting method embodiment are implemented, and the same technical effects can be achieved, and are not described again here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power supply (e.g., a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 110 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, or combine some components, or arrange different components, and thus the description is omitted here.
Wherein the user input unit 107 is configured to receive a first input of a user in a case where a main shooting preview interface of a main camera is displayed;
the display unit 106 is configured to display a secondary shooting preview interface of a target secondary camera in response to the first input received by the user input unit 107, where a preview image displayed on the secondary shooting preview interface is a preview image of a target preview area in the primary shooting preview interface;
the processor 110 is configured to control the main camera and the target auxiliary camera to shoot, and output a first main shooting file and an auxiliary shooting file.
Optionally, in an embodiment of the present application, the apparatus further includes: a processor 110;
the processor 110 is configured to determine a target preview area before the display unit 106 displays a secondary shooting preview interface of a target secondary camera;
the processor 110 is further configured to determine a target secondary camera before the display unit 106 displays a secondary shooting preview interface of the target secondary camera;
the preview image displayed on the auxiliary shooting preview interface is a preview image of the target preview area acquired by the target auxiliary camera; the target preview area is determined according to the first input, or the target preview area is an area where a recognized preset type of shooting object is located.
Optionally, in this embodiment of the application, the processor 110 is specifically configured to, when the first input includes an input of the main shooting preview interface by a user, determine the target preview area according to input information of the first input, where the input information includes an input position or an input trajectory;
the processor 110 is specifically configured to perform object recognition on the main shooting preview interface when the first input includes an input for triggering a display of an auxiliary shooting preview interface of a target auxiliary camera, and determine an area where a shooting object is located as a target preview area when a preset type of shooting object is recognized.
Optionally, in this embodiment of the present application,
the processor 110 is specifically configured to obtain an object type of the shooting object in the target preview area;
the processor 110 is specifically configured to determine a target secondary camera matched with the acquired object type.
Optionally, in this embodiment of the present application,
the processor 110 is specifically configured to, when the first input includes a folding input for folding a screen, determine, as the target auxiliary camera, an auxiliary camera associated with a folding angle of the folding input according to an association relationship between a preset folding angle and the auxiliary camera.
Optionally, in this embodiment of the present application, in a case that the target preview area includes at least two preview areas, the target secondary camera includes at least two secondary cameras;
the display unit 106 is specifically configured to, in response to the first input received by the user input unit 107, display a main shooting preview interface of the main camera in a first screen area of a display screen, and display auxiliary shooting preview interfaces of the at least two auxiliary cameras in at least two screen areas of the display screen;
or, the display unit 106 is specifically configured to, in response to the first input received by the user input unit 107, display a main shooting preview interface of the main camera on a first screen, and display auxiliary shooting preview interfaces of the at least two auxiliary cameras in at least two screen areas of a second screen.
Optionally, in an embodiment of the present application, the first input includes a folding input to fold a screen; the folding input is used for triggering an auxiliary shooting preview interface for displaying the target auxiliary camera;
the display unit 106 is specifically configured to, in response to the first input received by the user input unit 107, display an auxiliary shooting preview interface of the target auxiliary camera when the folding angle of the folding input is a preset angle.
Optionally, in this embodiment of the present application, the first main shot file and the auxiliary shot file include images or videos; the device further comprises: a processor 110;
the processor 110 is configured to perform image fusion on the main shot image and the auxiliary shot image and output a fused image when the first main shot file is a main shot image and the auxiliary shot file is an auxiliary shot image;
the processor 110 is further configured to perform video fusion on the main shooting video and the auxiliary shooting video and output a fused video when the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting video;
the processor 110 is further configured to perform video fusion on the main shooting image and the auxiliary shooting video and output a fused video when the first main shooting file is a main shooting image and the auxiliary shooting file is an auxiliary shooting video;
the processor 110 is further configured to perform video fusion on the main shooting video and the auxiliary shooting image and output a fused video when the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting image.
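The four fusion cases above reduce to a dispatch on the kinds of the two files: only when both are images is the output a fused image, otherwise the output is a fused video. A minimal sketch of that dispatch (the return values stand in for the actual fusion operations, which the patent does not specify):

```python
def fuse(main_kind, aux_kind):
    # Combine the first main shot file and the auxiliary shot file by kind:
    #   image + image  -> image fusion, outputting a fused image;
    #   any other pair -> video fusion, outputting a fused video.
    assert main_kind in ("image", "video") and aux_kind in ("image", "video")
    if main_kind == "image" and aux_kind == "image":
        return ("image_fusion", "fused_image")
    return ("video_fusion", "fused_video")
```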
Optionally, in this embodiment of the present application,
the display unit 106 is specifically configured to, in response to the first input received by the user input unit 107, display a main shooting preview interface of the main camera in a first screen area of a display screen, and display an auxiliary shooting preview interface of the target auxiliary camera in a second screen area of the display screen;
or, the display unit 106 is specifically configured to, in response to the first input, display a main shooting preview interface of the main camera on a first screen, and display an auxiliary shooting preview interface of the target auxiliary camera on a second screen.
Optionally, in this embodiment of the present application,
the display unit 106 is further configured to, when the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in a first screen area and a second screen area of a display screen, update the auxiliary shooting preview interface to a first sub-area of the second screen area for display, and display the first main shooting file and the auxiliary shooting file in a second sub-area of the second screen area;
the display unit 106 is further configured to, when the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed on a first screen and a second screen, update the auxiliary shooting preview interface to a first screen area of the second screen for display, and display the first main shooting file and the auxiliary shooting file in a second screen area of the second screen.
Optionally, in this embodiment of the present application,
the user input unit 107 is further configured to receive a third input from the user;
the processor 110 is further configured to control the main camera to perform shooting in response to the third input received by the user input unit 107, and output a second main shooting file;
the display unit 106 is further configured to display the second main shooting file in a second sub-area of a second screen area when the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in the first screen area and the second screen area of a display screen;
or, the display unit 106 is further configured to display the second main shooting file in a second screen area of the second screen when the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed on the first screen and the second screen.
Optionally, in this embodiment of the present application,
the user input unit 107 is further configured to receive a fourth input from the user, where the fourth input is an input to view the main shot file;
the display unit 106 is further configured to display the main shot file in a first screen area of a display screen and display the auxiliary shot file in a second screen area of the display screen in response to the fourth input received by the user input unit 107;
or, the display unit 106 is further configured to display the main shot file on a first screen and display the auxiliary shot file on a second screen in response to the fourth input.
Optionally, in this embodiment of the present application,
the user input unit 107 is further configured to receive a fifth input from the user;
the display unit 106 is further configured to display the main shot file in response to the fifth input received by the user input unit 107;
the user input unit 107 is further configured to receive a sixth input of a target image area in the main shot file from a user;
the display unit 106 is further configured to display the subsidiary photographic file in response to the sixth input received by the user input unit 107;
and the target image area is an area corresponding to the target preview area in the main shot file.
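The sixth-input behavior above is essentially a hit test: the auxiliary shot file is shown only when the input lands inside the target image area of the main shot file. A minimal sketch, with the tuple representation of the area as an assumption:

```python
def shows_auxiliary_file(tap, target_area):
    # Return True when the sixth input (a tap at `tap = (x, y)`) falls
    # inside the target image area, i.e. the region of the main shot
    # file corresponding to the target preview area.
    x, y = tap
    ax, ay, aw, ah = target_area  # (x, y, width, height)
    return ax <= x < ax + aw and ay <= y < ay + ah
```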
In the electronic device provided by this embodiment of the application, while the main shooting preview interface of the main camera is displayed, the electronic device receives a first input of a user, displays the auxiliary shooting preview interface of the target auxiliary camera, and controls the main camera and the target auxiliary camera to shoot, outputting a first main shooting file and an auxiliary shooting file, where the preview image displayed on the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface. By displaying both the main shooting preview interface of the main camera and the auxiliary shooting preview interface of the target auxiliary camera, the user can quickly view images with different shooting effects captured by different cameras; and because the preview image in the auxiliary shooting preview interface is a preview image of the target preview area in the main shooting preview interface, a full-frame image and a close-up image of a partial area within that frame can be captured quickly, improving the efficiency of shooting images with different shooting effects.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, where the first storage area may store an operating system, an application program or an instruction required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 109 may include volatile memory or non-volatile memory, or the memory 109 may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static Random Access Memory (Static RAM, SRAM), a Dynamic Random Access Memory (Dynamic RAM, DRAM), a Synchronous Dynamic Random Access Memory (Synchronous DRAM, SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (Double Data Rate SDRAM, DDR SDRAM), an Enhanced Synchronous SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 109 in the embodiments of the subject application includes, but is not limited to, these and any other suitable types of memory.
Processor 110 may include one or more processing units; optionally, the processor 110 integrates an application processor, which mainly handles operations related to the operating system, user interface, application programs, etc., and a modem processor, which mainly handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing shooting method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. A shooting method, characterized in that the method comprises:
receiving a first input of a user in a case where a main shooting preview interface of a main camera is displayed;
responding to the first input, displaying an auxiliary shooting preview interface of a target auxiliary camera, wherein a preview image displayed by the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface;
and controlling the main camera and the target auxiliary camera to shoot, and outputting a first main shooting file and an auxiliary shooting file.
2. The method of claim 1, wherein before displaying the secondary capture preview interface of the target secondary camera, further comprising:
determining a target preview area;
determining a target auxiliary camera;
the preview image displayed on the auxiliary shooting preview interface is a preview image of the target preview area acquired by the target auxiliary camera; the target preview area is determined according to the first input, or the target preview area is an area where a recognized preset type of shooting object is located.
3. The method of claim 2, wherein the determining a target preview area comprises:
determining a target preview area according to input information of the first input under the condition that the first input comprises the input of the main shooting preview interface by a user, wherein the input information comprises an input position or an input track;
and under the condition that the first input comprises input for triggering an auxiliary shooting preview interface for displaying a target auxiliary camera, carrying out object recognition on the main shooting preview interface, and under the condition that a preset type of shooting object is recognized, determining the area where the shooting object is located as a target preview area.
4. The method of claim 2, wherein determining the target secondary camera comprises:
acquiring the object type of a shooting object in the target preview area;
and determining a target auxiliary camera matched with the object type.
5. The method of claim 2, wherein determining the target secondary camera comprises:
and under the condition that the first input comprises folding input of a folding screen, determining an auxiliary camera related to the folding angle of the folding input as a target auxiliary camera according to the association relationship between a preset folding angle and the auxiliary camera.
6. The method of claim 1, wherein in the event that the target preview area comprises at least two preview areas, the target secondary camera comprises at least two secondary cameras;
the displaying an auxiliary shooting preview interface of the target auxiliary camera in response to the first input includes:
responding to the first input, displaying a main shooting preview interface of the main camera in a first screen area of a display screen, and displaying auxiliary shooting preview interfaces of the at least two auxiliary cameras in at least two screen areas of the display screen;
or responding to the first input, displaying a main shooting preview interface of the main camera on a first screen, and displaying auxiliary shooting preview interfaces of the at least two auxiliary cameras in at least two screen areas of a second screen.
7. The method of claim 1, wherein the first input comprises a fold input to fold a screen; the folding input is used for triggering an auxiliary shooting preview interface for displaying the target auxiliary camera;
the displaying an auxiliary shooting preview interface of the target auxiliary camera in response to the first input includes:
and responding to the first input, and displaying an auxiliary shooting preview interface of the target auxiliary camera under the condition that the folding angle of the folding input is a preset angle.
8. The method of claim 1, wherein the first main shot file and the auxiliary shot file comprise images or videos;
the control the main camera with the camera is assisted to the target and shoots, and still include after first main file of taking a photograph and the file of taking a photograph of assisting is exported:
when the first main shooting file is a main shooting image and the auxiliary shooting file is an auxiliary shooting image, carrying out image fusion on the main shooting image and the auxiliary shooting image, and outputting a fused image;
when the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting video, performing video fusion on the main shooting video and the auxiliary shooting video, and outputting a fused video;
under the condition that the first main shooting file is a main shooting image and the auxiliary shooting file is an auxiliary shooting video, performing video fusion on the main shooting image and the auxiliary shooting video, and outputting a fused video;
and under the condition that the first main shooting file is a main shooting video and the auxiliary shooting file is an auxiliary shooting image, performing video fusion on the main shooting video and the auxiliary shooting image, and outputting a fused video.
9. The method of claim 1, wherein displaying a secondary capture preview interface of a target secondary camera in response to the first input comprises:
responding to the first input, displaying a main shooting preview interface of the main camera in a first screen area of a display screen, and displaying an auxiliary shooting preview interface of the target auxiliary camera in a second screen area of the display screen;
or responding to the first input, displaying a main shooting preview interface of the main camera on a first screen, and displaying an auxiliary shooting preview interface of the target auxiliary camera on a second screen.
10. The method of claim 9, wherein after controlling the main camera and the target auxiliary camera to capture images and outputting a first main capture file and an auxiliary capture file, the method further comprises:
under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in a first screen area and a second screen area of a display screen, updating the auxiliary shooting preview interface to a first sub-area of the second screen area for displaying, and displaying the first main shooting file and the auxiliary shooting file in a second sub-area of the second screen area;
and under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed on a first screen and a second screen, updating the auxiliary shooting preview interface to a first screen area of the second screen for display, and displaying the first main shooting file and the auxiliary shooting file in a second screen area of the second screen.
11. The method according to claim 10, wherein after controlling the main camera and the target auxiliary camera to shoot and outputting a first main shooting file and an auxiliary shooting file, the method further comprises:
receiving a third input of the user;
responding to the third input, controlling the main camera to shoot, and outputting a second main shooting file;
under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed in a first screen area and a second screen area of a display screen, displaying the second main shooting file in a second sub-area of the second screen area;
and under the condition that the main shooting preview interface and the auxiliary shooting preview interface are respectively displayed on a first screen and a second screen, displaying the second main shooting file in a second screen area of the second screen.
12. The method according to claim 1, wherein after controlling the main camera and the target auxiliary camera to shoot and outputting a main shooting file and an auxiliary shooting file, the method further comprises:
receiving a fourth input of a user, wherein the fourth input is an input for viewing the main shooting file;
responding to the fourth input, displaying the main shooting file in a first screen area of a display screen, and displaying the auxiliary shooting file in a second screen area of the display screen;
or displaying the main shooting file on a first screen and displaying the auxiliary shooting file on a second screen.
13. The method according to claim 1, wherein after controlling the main camera and the target auxiliary camera to shoot and outputting a main shooting file and an auxiliary shooting file, the method further comprises:
receiving a fifth input of the user;
displaying the main shooting file in response to the fifth input;
receiving a sixth input of a user to a target image area in the main shooting file;
in response to the sixth input, displaying the subsidiary shooting file;
and the target image area is an area corresponding to the target preview area in the main shooting file.
14. A shooting apparatus, characterized in that the apparatus comprises: a receiving module, a display module and a control module, wherein:
the receiving module is used for receiving a first input of a user under the condition that a main shooting preview interface of a main camera is displayed;
the display module is used for responding to the first input received by the receiving module and displaying an auxiliary shooting preview interface of a target auxiliary camera, and a preview image displayed by the auxiliary shooting preview interface is a preview image of a target preview area in the main shooting preview interface;
the control module is used for controlling the main camera and the target auxiliary camera to shoot and outputting a first main shooting file and an auxiliary shooting file.
15. An electronic device, comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the shooting method according to any one of claims 1 to 13.
CN202210908200.1A 2022-07-29 2022-07-29 Shooting method and device and electronic equipment Active CN115278030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210908200.1A CN115278030B (en) 2022-07-29 2022-07-29 Shooting method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN115278030A true CN115278030A (en) 2022-11-01
CN115278030B CN115278030B (en) 2024-09-24

Family

ID=83771597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210908200.1A Active CN115278030B (en) 2022-07-29 2022-07-29 Shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115278030B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106168892A (en) * 2016-07-06 2016-11-30 深圳市金立通信设备有限公司 A kind of split screen photographic method and terminal
CN108093171A (en) * 2017-11-30 2018-05-29 努比亚技术有限公司 A kind of photographic method, terminal and computer readable storage medium
CN109729266A (en) * 2018-12-25 2019-05-07 努比亚技术有限公司 A kind of image capturing method, terminal and computer readable storage medium
CN110149477A (en) * 2019-04-22 2019-08-20 珠海格力电器股份有限公司 Display method, display device, terminal and readable storage medium
CN112839166A (en) * 2020-12-02 2021-05-25 维沃移动通信(杭州)有限公司 Shooting method and device and electronic equipment
CN112954210A (en) * 2021-02-08 2021-06-11 维沃移动通信(杭州)有限公司 Photographing method and device, electronic equipment and medium
CN112954218A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
CN113141450A (en) * 2021-03-22 2021-07-20 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and medium
WO2021147482A1 (en) * 2020-01-23 2021-07-29 华为技术有限公司 Telephoto photographing method and electronic device
CN113473004A (en) * 2021-06-16 2021-10-01 维沃移动通信(杭州)有限公司 Shooting method and device
CN113497890A (en) * 2020-03-20 2021-10-12 华为技术有限公司 Shooting method and equipment
CN113709354A (en) * 2020-05-20 2021-11-26 华为技术有限公司 Shooting method and electronic equipment
CN113766129A (en) * 2021-09-13 2021-12-07 维沃移动通信(杭州)有限公司 Video recording method, video recording device, electronic equipment and medium
CN113840070A (en) * 2021-09-18 2021-12-24 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and medium
CN114143455A (en) * 2021-11-25 2022-03-04 维沃移动通信有限公司 Shooting method and device and electronic equipment


Also Published As

Publication number Publication date
CN115278030B (en) 2024-09-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant