CN112887603B - Shooting preview method and device and electronic equipment

Shooting preview method and device and electronic equipment

Info

Publication number
CN112887603B
CN112887603B
Authority
CN
China
Prior art keywords
camera
target
input
view mode
preview interface
Prior art date
Legal status
Active
Application number
CN202110105879.6A
Other languages
Chinese (zh)
Other versions
CN112887603A (en)
Inventor
柳玙卿
王陈阳
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110105879.6A
Publication of CN112887603A
Application granted
Publication of CN112887603B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting preview method and apparatus and an electronic device, belonging to the field of communication technology. The method includes: acquiring first image data collected by a target camera, where the first image data includes image data of M first target objects and the target camera includes at least two of X cameras; and displaying a first shooting preview interface based on the first image data, where the first shooting preview interface includes three-dimensional live-action images of N first target objects, so that information about a three-dimensional object at different angles is presented in the shooting preview interface.

Description

Shooting preview method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a shooting preview method, a shooting preview device and electronic equipment.
Background
At present, the camera of an electronic device can only display a flat preview or photograph of the current object; that is, only the current object as seen from a certain angle is presented, and sufficient information about the current object cannot be obtained from a single image.
In the process of implementing the present application, the inventors found at least the following problem in the prior art: when a user previews a three-dimensional object before shooting, only information about the object at a certain angle can be presented in the shooting preview interface; information about the object at different angles cannot be presented.
Disclosure of Invention
The embodiment of the application aims to provide a shooting preview method, a shooting preview device and electronic equipment, which can solve the problem that information of different angles of a three-dimensional object cannot be reflected in a shooting preview interface in the prior art.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a shooting preview method, where the method is applied to an electronic device, where the electronic device includes X cameras of at least one type, and the method includes:
acquiring first image data acquired by a target camera, wherein the first image data comprises image data of M first target objects, and the target camera comprises at least two cameras in the X cameras;
and displaying a first shooting preview interface based on the first image data, wherein the first shooting preview interface comprises three-dimensional live-action images of N first target objects, X, M and N are positive integers, X is greater than or equal to 2, and N is less than or equal to M.
In a second aspect, an embodiment of the present application provides a shooting apparatus, where the shooting apparatus is applied to an electronic device, where the electronic device includes X cameras of at least one type, and the apparatus includes:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring first image data acquired by a target camera, the first image data comprises image data of M first target objects, and the target camera comprises at least two cameras in the X cameras;
and the first display module is used for displaying a first shooting preview interface based on the first image data, wherein the first shooting preview interface comprises three-dimensional live-action images of N first target objects, X, M and N are positive integers, X is greater than or equal to 2, and N is less than or equal to M.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, first image data collected by a target camera is acquired, where the first image data includes image data of M first target objects and the target camera includes at least two of X cameras, and a first shooting preview interface is displayed based on the first image data, where the first shooting preview interface includes three-dimensional live-action images of N first target objects, so that information about a three-dimensional object at different angles is presented in the shooting preview interface.
Drawings
Fig. 1 is a flowchart illustrating steps of a shooting preview method provided in an embodiment of the present application;
fig. 2 is a schematic view of cameras of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic diagram of imaging of a camera provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of another camera imaging provided by the embodiments of the present application;
FIG. 5 is a schematic diagram of still another camera imaging provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a first shooting preview interface provided in an embodiment of the present application;
fig. 7 is a schematic rotation diagram of a three-dimensional live-action image according to an embodiment of the present application;
fig. 8 is a schematic diagram of a third shooting preview interface provided in the embodiment of the present application;
FIG. 9 is a schematic illustration of a mode-selection interface provided by an embodiment of the present application;
fig. 10 is a schematic diagram of a fourth shooting preview interface provided in an embodiment of the present application;
fig. 11 is a schematic view of a zoom progress bar combination provided in an embodiment of the present application;
fig. 12 is a schematic diagram of another first shooting preview interface provided in an embodiment of the present application;
fig. 13 is a schematic view of a first shooting preview interface after a view mode is switched according to an embodiment of the present application;
fig. 14 is a schematic diagram of a further first shooting preview interface provided in an embodiment of the present application;
fig. 15 is a schematic view of another first shooting preview interface after the view mode is switched according to an embodiment of the present application;
FIG. 16 is a schematic view of a camera selection interface provided by an embodiment of the present application;
fig. 17 is a schematic diagram of a fifth shooting preview interface display provided in an embodiment of the present application;
FIG. 18 is a schematic illustration of a preview thumbnail display provided by an embodiment of the present application;
fig. 19 is a schematic structural diagram of a shooting preview device provided in an embodiment of the present application;
fig. 20 is a schematic hardware configuration diagram of an electronic device implementing an embodiment of the present application;
fig. 21 is a schematic hardware configuration diagram of another electronic device for implementing the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application may be implemented in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The shooting preview method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a shooting preview method provided in an embodiment of the present application, where the method is applied to an electronic device, and the electronic device includes X cameras of at least one type. The X cameras may include cameras of the same type located at different positions, or cameras of different types located at different positions. Camera types may be distinguished by the focal length and field angle of the camera. The method may include the following steps:
step 101, first image data collected by a target camera is obtained, wherein the first image data comprises image data of M first target objects, and the target camera comprises at least two cameras in X cameras.
Referring to fig. 2, fig. 2 is a schematic view of cameras of an electronic device according to an embodiment of the present application. The electronic device includes three types of cameras: a main camera, a wide-angle camera and a macro camera. As shown in fig. 2, there are three cameras of each type, located on the left, in the middle and on the right. Because the positions of the cameras differ, the pictures obtained when framing a subject also differ: for example, the main camera 201 on the left can obtain an image of the left side of a subject, the main camera 202 in the middle can obtain an image of the front of the subject, and the main camera 203 on the right can obtain an image of the right side of the subject.
The three main cameras image at a normal field angle. If the photographer wants additional field angles, cameras of other types, such as the wide-angle camera or the macro camera, can be selected instead.
In this embodiment, a plurality of main cameras are used for imaging by default, for example the three main cameras shown in fig. 2; the three main cameras are the target camera, and the first image data includes the image data respectively collected by the three main cameras. For example, referring to figs. 3, 4 and 5: fig. 3 is a schematic diagram of camera imaging provided in the embodiment of the present application, fig. 4 is a schematic diagram of another camera imaging, and fig. 5 is a schematic diagram of still another camera imaging. Fig. 3 illustrates the imaging of a cuboid object A by the main camera 201, fig. 4 illustrates the imaging of the cuboid object A by the main camera 202, and fig. 5 illustrates the imaging of the cuboid object A by the main camera 203.
In figs. 3, 4 and 5, the first image data collected by the three main cameras is taken to include the image data of the cuboid object A as an example. In practical applications, the first image data collected by the three main cameras may include image data of a plurality of three-dimensional objects; for example, the main camera 201 collects first image data of a three-dimensional object B and a three-dimensional object C, the main camera 202 collects first image data of a three-dimensional object C and a three-dimensional object D, and the main camera 203 collects first image data of a three-dimensional object C and a three-dimensional object D. In this case, the M first target objects include the three-dimensional object B, the three-dimensional object C and the three-dimensional object D.
And 102, displaying a first shooting preview interface based on the first image data, wherein the first shooting preview interface comprises three-dimensional live-action images of N first target objects.
Wherein X, M and N are positive integers, X is greater than or equal to 2, and N is less than or equal to M. The three-dimensional live-action images of the N first target objects are constructed based on the first image data, for example using Augmented Reality (AR) technology.
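For illustration only, the following Kotlin sketch models steps 101 and 102 around the example of figs. 3 to 5. The CameraFrame type, the string object identifiers, and the rule that an object qualifies for three-dimensional reconstruction once at least two cameras see it are all assumptions, not details taken from the patent.

```kotlin
// Hypothetical sketch of steps 101-102; names and the "visible to at
// least two cameras" rule are assumptions, not taken from the patent.
data class CameraFrame(val cameraId: String, val objectIds: Set<String>)

// Step 101: first image data collected by the target camera
// (at least two of the X cameras), covering M first target objects.
fun firstTargetObjects(frames: List<CameraFrame>): Set<String> =
    frames.flatMap { it.objectIds }.toSet()          // the M objects

// Step 102: choose the N objects (N <= M) for which a three-dimensional
// live-action image can be built, e.g. those seen from >= 2 positions.
fun reconstructable(frames: List<CameraFrame>): Set<String> =
    firstTargetObjects(frames).filter { id ->
        frames.count { id in it.objectIds } >= 2
    }.toSet()

fun main() {
    val frames = listOf(
        CameraFrame("main-left",   setOf("B", "C")),
        CameraFrame("main-middle", setOf("C", "D")),
        CameraFrame("main-right",  setOf("C", "D")),
    )
    println(firstTargetObjects(frames)) // M = 3: [B, C, D]
    println(reconstructable(frames))    // N = 2: [C, D]
}
```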
As described with reference to figs. 3, 4 and 5, since the image data of the cuboid object A obtained by the main cameras at different positions differ, the image data obtained by these cameras can be synthesized, so that a three-dimensional live-action image of the cuboid object A can be constructed and a first shooting preview interface displayed, where the first shooting preview interface includes the three-dimensional live-action image of the cuboid object A. As shown in fig. 6, fig. 6 is a schematic diagram of a first shooting preview interface according to an embodiment of the present application. The three-dimensional live-action image of the cuboid object A can be displayed with an intermittently displayed label; the label indicates that the image is rotatable, so the user can quickly recognize from it that the three-dimensional live-action image can be rotated.
It should be noted that in the prior art, when a shooting preview of a three-dimensional object is performed, only information about the object at a certain angle can be presented in the shooting preview interface. For example, when a shooting preview of the cuboid object A is performed using only the main camera 202, only the information about the cuboid object A shown in fig. 4, that is, the information about its front face, can be presented. In this embodiment, the cuboid object A in the first shooting preview interface can present the preview effect shown in fig. 6; that is, the user can simultaneously preview information about the front, top and right side of the cuboid object A in the first shooting preview interface.
According to the shooting preview method provided by the embodiment of the application, first image data collected by a target camera is acquired, where the first image data includes image data of M first target objects and the target camera includes at least two of X cameras, and a first shooting preview interface is displayed based on the first image data, where the first shooting preview interface includes three-dimensional live-action images of N first target objects, so that information about a three-dimensional object at different angles is presented in the shooting preview interface.
Optionally, after the step 102 of displaying the first shooting preview interface based on the first image data, the method may further include the following steps:
receiving first input of a user to a tth three-dimensional live-action image in the three-dimensional live-action images of the N first target objects;
rotating the tth three-dimensional live-action image in response to the first input;
wherein t is a positive integer and is less than or equal to N.
The first input may be any one of a leftward, rightward, upward or downward press-and-slide input by the user on the tth three-dimensional live-action image. If the first input is a leftward press-and-slide on the tth three-dimensional live-action image, the image is rotated leftward; if the first input is a rightward press-and-slide, the image is rotated rightward.
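A minimal Kotlin sketch of this direction-to-rotation mapping, assuming a hypothetical LiveActionImage type with yaw and pitch angles, an arbitrary 15-degree step, and pitch rotation for the up/down slides (the patent only spells out the left/right behavior):

```kotlin
// Hypothetical handling of the first input; the enum, the step size and
// the up/down-to-pitch mapping are assumptions.
enum class Slide { LEFT, RIGHT, UP, DOWN }

data class LiveActionImage(var yawDeg: Float = 0f, var pitchDeg: Float = 0f)

// Rotate the t-th three-dimensional live-action image according to the
// direction of the press-and-slide input described above.
fun rotate(image: LiveActionImage, slide: Slide, stepDeg: Float = 15f) {
    when (slide) {
        Slide.LEFT  -> image.yawDeg -= stepDeg
        Slide.RIGHT -> image.yawDeg += stepDeg
        Slide.UP    -> image.pitchDeg += stepDeg
        Slide.DOWN  -> image.pitchDeg -= stepDeg
    }
}
```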
It should be noted that after the user releases the touch, the entire first shooting preview interface is reconstructed, and the three-dimensional live-action images in the first shooting preview interface are reprocessed according to the angles at which the current cameras capture the three-dimensional objects, ensuring that the previewed image stays in step with the scene currently visible to the eye.
For example, referring to fig. 7, fig. 7 is a schematic rotation diagram of a three-dimensional live-action image according to an embodiment of the present application. After the three-dimensional live-action image of the cuboid object A shown in fig. 6 is rotated to the right by a certain angle, it is displayed as shown in fig. 7. Before the rotation, the user cannot see the left side face of the cuboid object A; after the three-dimensional live-action image is rotated rightward, the left side face becomes visible. If the user wants to continue previewing the back face of the cuboid object A, the three-dimensional live-action image can be rotated rightward further, so that the cuboid object A can be previewed from more angles.
The shooting preview method provided by the embodiment of the application is suited to scenarios in which a user previews a three-dimensional object from different angles: through the first input, the user can preview information about the object from additional angles.
Optionally, after the step 102 displays the first shooting preview interface based on the first image data, the method may further include the following steps:
displaying a second shooting preview interface in the case that the relative spatial state data between the electronic device and the M first target objects changes, wherein the second shooting preview interface comprises three-dimensional live-action images of P second target objects;
the three-dimensional live-action images of the P second target objects are constructed based on the acquired second image data acquired by the target camera, the second image data comprise image data of Q second target objects, P and Q are positive integers, and P is less than or equal to Q.
In the embodiment of the application, after the first shooting preview interface is displayed, if the user moves the phone, changes the shooting angle toward the M first target objects, or the positions of the first target objects change, the angles at which the target camera captures the three-dimensional objects change, and therefore the relative spatial state data between the phone and the first target objects changes. A second shooting preview picture is then displayed according to the second image data collected by the target camera; that is, the shooting preview interface is kept synchronized with the image data collected by the target camera. For example, if, after the first shooting preview interface is displayed and the user moves the phone, the target camera collects image data of a three-dimensional object E and a three-dimensional object F at different angles, the P second target objects in the second shooting preview interface include the three-dimensional object E and the three-dimensional object F.
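The patent does not specify how the change in relative spatial state is detected; the sketch below assumes a simple pose comparison against a small threshold, with all types and the threshold value hypothetical:

```kotlin
// Hypothetical refresh rule: rebuild the preview when the relative spatial
// state between the device and the target objects changes beyond a threshold.
data class Pose(val x: Float, val y: Float, val z: Float)

fun moved(a: Pose, b: Pose, epsilon: Float = 0.05f): Boolean {
    val dx = a.x - b.x; val dy = a.y - b.y; val dz = a.z - b.z
    return dx * dx + dy * dy + dz * dz > epsilon * epsilon
}

fun onFrame(last: Pose, current: Pose, rebuildPreview: () -> Unit) {
    // Display the second shooting preview interface from the newly
    // collected second image data only when the state actually changed.
    if (moved(last, current)) rebuildPreview()
}
```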
The shooting preview method provided by the embodiment of the application is suited to synthesizing the image data collected by the target camera in real time and displaying the resulting three-dimensional live-action images in the shooting preview interface, so that the shooting preview picture stays synchronized with the scene visible to the eye.
Optionally, after the step 102 displays the first shooting preview interface based on the first image data, the method may further include the following steps:
and receiving input of a user to a shooting control on the first shooting preview interface, and responding to the input to execute shooting to obtain a target image.
In this embodiment of the application, the target image still includes three-dimensional live-action images of the N first target objects, and the user may rotate the three-dimensional live-action images in the target image to view information on the back side of the three-dimensional live-action images.
The shooting preview method provided by the embodiment of the application is suited to scenarios in which a target image including three-dimensional live-action images is obtained; the captured image still includes the three-dimensional live-action images, and the user can obtain information about the N first target objects from more angles in the target image.
Optionally, before acquiring the first image data acquired by the target camera, the method further includes:
receiving a second input of the target shooting control on the third shooting preview interface from the user;
in response to the second input, a mode selection interface is displayed.
As shown in fig. 8, fig. 8 is a schematic diagram of a third shooting preview interface provided in an embodiment of the present application. The target shooting control is, for example, the 3-dimensional (3D, 3-Dimension) shooting control shown in fig. 8, and the second input is, for example, a single-click or double-click input on the 3D shooting control by the user. The electronic device displays a mode selection interface in response to the second input, as shown in fig. 9; fig. 9 is a schematic diagram of a mode selection interface provided in an embodiment of the present application. The mode selection interface can provide two modes: an intelligent mode and a personalized selection mode. When the user selects the intelligent mode, a fourth shooting preview interface in a default view mode is displayed; when the user selects the personalized selection mode, a camera selection interface is displayed so that the user can select the cameras to be used from it. The user can click the control 901 associated with the intelligent mode to select the intelligent mode, and click the control 902 associated with the personalized selection mode to select the personalized selection mode.
The shooting preview method provided by the embodiment of the application is suited to offering the user a choice of different modes, making it convenient for the user to select the mode that fits his actual needs.
Optionally, after the mode selection interface is displayed in response to the second input, the method may further include the following steps:
receiving a third input of the first control on the mode selection interface from the user;
in response to the third input, displaying a fourth shooting preview interface, wherein the fourth shooting preview interface includes a zoom progress bar, a first position of the zoom progress bar includes a slider bar, and the first position indicates a first view mode associated with a first target camera of the X cameras.
In this embodiment of the application, the first control is, for example, the intelligent mode control, and the fourth shooting preview interface displayed is as shown in fig. 10; fig. 10 is a schematic diagram of a fourth shooting preview interface provided in this embodiment of the application. The first position of the zoom progress bar 1001 includes a slider 1002. The fourth shooting preview interface is a shooting interface in the default view mode, and the default view mode is the normal view mode. After receiving the third input, the electronic device responds to the third input and displays the fourth shooting preview interface in the normal view mode.
It should be noted that, in the embodiment of the present application, three view modes may be formed by the X cameras, that is, three view modes may be provided in the embodiment of the present application, where the three view modes include a short view mode, a normal view mode, and a long view mode. For example, in fig. 2, three main cameras constitute a normal view mode, the wide-angle camera 204, the wide-angle camera 205, and the wide-angle camera 206 constitute a long view mode, and the macro camera 207, the macro camera 208, and the macro camera 209 constitute a short view mode.
The X cameras may also form the three view modes in other combinations. For example, the macro camera 207, the main camera 202 and the macro camera 209 form a short view mode, that is, the short view mode corresponds to macro camera (left) + main camera (middle) + macro camera (right) in combination two shown in fig. 11; fig. 11 is a schematic diagram of zoom progress bar combinations provided in an embodiment of the present application. The main camera 201, the wide-angle camera 205 and the main camera 203 form a normal view mode, that is, main camera (left) + wide-angle camera (middle) + main camera (right) in combination two of fig. 11; and the three wide-angle cameras form a long view mode, as also shown in combination two of fig. 11. The three view modes may also include other combinations, such as combination three in fig. 11, which are not described here for brevity. The fourth shooting preview interface may further include a setting control 1003. After the user clicks the setting control 1003, the interface shown in fig. 11 may be displayed; the default combination is combination one shown in fig. 11, and the user may select another combination through the interface shown in fig. 11, for example selecting combination two as the three view modes on the zoom progress bar by clicking the small circular control after combination two.
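The combinations of fig. 11 can be pictured as a table from view mode to camera group. The Kotlin sketch below is a hypothetical rendering of combinations one and two, with camera identifiers mirroring the reference numerals of fig. 2; none of the names are taken from the patent itself.

```kotlin
// Hypothetical table of the three view modes and the camera combinations
// of fig. 11; identifiers mirror the reference numerals used in the text.
enum class ViewMode { SHORT, NORMAL, LONG }

val combinationOne: Map<ViewMode, List<String>> = mapOf(
    ViewMode.NORMAL to listOf("main-201", "main-202", "main-203"),
    ViewMode.LONG   to listOf("wide-204", "wide-205", "wide-206"),
    ViewMode.SHORT  to listOf("macro-207", "macro-208", "macro-209"),
)

val combinationTwo: Map<ViewMode, List<String>> = mapOf(
    ViewMode.SHORT  to listOf("macro-207", "main-202", "macro-209"),
    ViewMode.NORMAL to listOf("main-201", "wide-205", "main-203"),
    ViewMode.LONG   to listOf("wide-204", "wide-205", "wide-206"),
)

// The cameras associated with the selected target view mode become the
// target camera whose image data drives the first shooting preview interface.
fun targetCameras(combo: Map<ViewMode, List<String>>, mode: ViewMode) =
    combo.getValue(mode)
```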
When the fourth photographing preview interface is an interface in the normal view mode, the first target camera includes the main camera 201, the main camera 202, and the main camera 203, and the first view mode is the normal view mode.
Optionally, after the fourth photographing preview interface is displayed in response to the third input, the method may further include the following steps:
determining a target view mode from at least one view mode consisting of X cameras;
determining a target camera associated with a target view mode;
correspondingly, based on the first image data, a first shooting preview interface is displayed, which can be specifically realized by the following steps:
and displaying a first shooting preview interface according to the target view field mode based on the first image data.
As shown in fig. 11, if the user selects combination one, the three view modes on the zoom progress bar match combination one; if the user selects combination two, they match combination two; and if the user selects combination three, they match combination three. Taking the case where the three view modes on the zoom progress bar match combination one as an example, as shown in fig. 10: if the user drags the slider 1002 to the long view mode, the target view mode is the long view mode; if the user drags the slider 1002 to the short view mode, the target view mode is the short view mode; and if the user performs no operation, the target view mode is the normal view mode.
The shooting preview method provided by the embodiment of the application is suited to scenarios in which a user switches between different view modes, so that the user can switch to the view mode that meets his needs and a first shooting preview interface in that view mode is displayed.
Optionally, determining the target view mode from at least one view mode composed of X cameras may specifically include the following steps:
in a case where a fourth input by the user on the zoom progress bar or the slider is received, updating the slider to be displayed at a second position of the zoom progress bar in response to the fourth input, and determining a second view mode indicated by the second position as the target view mode;
wherein the second view mode is associated with a second target camera of the X cameras.
In the embodiment of the present application, the fourth input is, for example, an input that the user clicks the second position on the zoom progress bar, or an input that the user slides the slide bar to the second position.
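A hedged sketch of this fourth-input handling: the thirds-based mapping from slider position to view mode is an assumption, as is the fallback to the normal view mode when no fourth input arrives.

```kotlin
enum class ViewMode { SHORT, NORMAL, LONG }  // as in the earlier sketch

// Hypothetical mapping from the slider position (0..1) on the zoom
// progress bar to the target view mode; the thirds split is an assumption.
fun viewModeAt(position: Float): ViewMode = when {
    position < 1f / 3f -> ViewMode.SHORT
    position < 2f / 3f -> ViewMode.NORMAL  // default first view mode
    else               -> ViewMode.LONG
}

// Without a fourth input the slider stays at its first position, so the
// first (normal) view mode is kept as the target view mode.
fun targetViewMode(fourthInputPosition: Float?): ViewMode =
    fourthInputPosition?.let { viewModeAt(it) } ?: ViewMode.NORMAL
```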
The shooting preview method provided by the embodiment of the application is suited to scenarios in which a user switches between different view modes, so that the user can change the view mode by operating the zoom progress bar or the slider.
Optionally, determining the target view mode from at least one view mode composed of X cameras may include the following steps:
in a case where a fourth input by the user on the zoom progress bar or the slider is not received, determining the first view mode as the target view mode.
When the electronic device does not receive the fourth input, the first view mode, that is, the normal view mode, is determined as the target view mode by default.
The shooting preview method provided by the embodiment of the application is suited to scenarios in which the user does not switch view modes, so the default view mode is used as the target view mode.
Optionally, after the first shooting preview interface is displayed according to the target view mode based on the first image data, the method may further include the following steps:
under the condition that the first shooting preview interface does not comprise part or all of the K three-dimensional live-action images, switching the first shooting preview interface displayed according to the target view mode into display according to a third view mode;
and K is a positive integer, K is less than or equal to N, and the view angle of the third view mode is greater than that of the target view mode.
Referring to fig. 12, fig. 12 is a schematic view of another first shooting preview interface provided in an embodiment of the present application. Suppose the first shooting preview interface includes three-dimensional live-action images such as the three-dimensional live-action image 1201, the three-dimensional live-action image 1202 and the three-dimensional live-action image 1203 in fig. 12, and in the current target view mode some of these images are partly outside the first shooting preview interface. In this case, the first shooting preview interface may be switched to the normal view mode for display; the interface after the switch is shown in fig. 13, where fig. 13 is a schematic diagram of the first shooting preview interface after the view mode is switched provided in the embodiment of the present application.
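One plausible reading of this switching condition is a bounding-box containment test; the rectangle type and the test below are assumptions, not the patent's algorithm:

```kotlin
// Hypothetical check behind the switch to a wider (third) view mode:
// if any of the K live-action images is partly outside the preview
// interface, a mode with a larger field angle is chosen.
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float)

fun contains(outer: Rect, inner: Rect): Boolean =
    inner.left >= outer.left && inner.top >= outer.top &&
    inner.right <= outer.right && inner.bottom <= outer.bottom

fun needsWiderMode(preview: Rect, images: List<Rect>): Boolean =
    images.any { !contains(preview, it) }
```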
The shooting preview method provided by the embodiment of the application is suited to the case where some of the K three-dimensional live-action images have moved out of the first shooting preview interface: by switching the view mode, the K three-dimensional live-action images are displayed completely in the first shooting preview interface.
Optionally, after the first shooting preview interface is displayed according to the target view mode based on the first image data, the method may further include the following steps:
when the area occupied by the N three-dimensional live-action images in the first shooting preview interface is smaller than or equal to the preset area and the first shooting preview interface does not comprise part or all of the N three-dimensional live-action images, switching the first shooting preview interface displayed based on the target view mode into display based on the fourth view mode;
wherein the view angle of the fourth view mode is smaller than the view angle of the target view mode.
Referring to fig. 14, fig. 14 is a schematic diagram of a further first shooting preview interface provided in an embodiment of the present application. Suppose the first shooting preview interface includes three-dimensional live-action images such as the three-dimensional live-action image 1401, the three-dimensional live-action image 1402 and the three-dimensional live-action image 1403 in fig. 14. In the long view mode, the area occupied by the three-dimensional live-action images 1401, 1402 and 1403 is small. In this case, the first shooting preview interface may be switched to the normal view mode for display; the interface after the switch is shown in fig. 15, where fig. 15 is a schematic view of another first shooting preview interface after the view mode is switched according to an embodiment of the present application.
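The area condition can be sketched the same way; the 10% preset fraction and the assumption that the image regions do not overlap are illustrative only:

```kotlin
// Rect as in the previous sketch.
data class Rect(val left: Float, val top: Float, val right: Float, val bottom: Float)

fun area(r: Rect): Float = (r.right - r.left) * (r.bottom - r.top)

// Hypothetical test behind the switch to a narrower (fourth) view mode:
// the N images together cover at most a preset fraction of the interface.
fun needsNarrowerMode(preview: Rect, images: List<Rect>,
                      presetFraction: Float = 0.10f): Boolean {
    val covered = images.sumOf { area(it).toDouble() }  // assumes no overlap
    return covered <= area(preview) * presetFraction
}
```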
The shooting preview method provided by the embodiment of the application is suited to the case where the area occupied by the N three-dimensional live-action images is small: by switching the view mode, the area occupied by each of the N three-dimensional live-action images is increased, ensuring that the displayed images have a sufficiently large presentation effect.
Optionally, after the step 102 of displaying the first shooting preview interface based on the first image data, the method may further include the following steps:
and receiving input of a user to a return control in the first shooting preview interface, and responding to the input to display a fourth shooting preview interface.
The first shooting preview interface further includes a return control; the control for returning to the previous interface shown in fig. 6 is the return control 601. The input is, for example, the user clicking the return control 601, and the electronic device displays the fourth shooting preview interface of the default view mode in response to this input. When the default view mode is the normal view mode, the fourth shooting preview interface in the normal view mode is displayed. For example, after the user performs the third input on the first control on the mode selection interface, the electronic device displays the fourth shooting preview interface of the default view mode in response to the third input; the user then switches the view mode by operating the zoom progress bar or the slider on the fourth shooting preview interface, for example switching the default view mode to the long view mode, and the first shooting preview interface in the long view mode is displayed.
The shooting preview method is suited to scenarios in which the user returns to the interface of the default view mode; with a simple operation on the return control, the user can quickly return to the interface of the default view mode.
Optionally, after the mode selection interface is displayed in response to the second input, the method may further include the following steps:
receiving a sixth input of the user to a second control on the mode selection interface;
and responding to a sixth input, and displaying a camera selection interface, wherein the camera selection interface comprises X camera controls, Y camera controls in the X camera controls are displayed in a first target mode, Y is a positive integer, and Y is not more than X.
In this embodiment of the application, the second control is, for example, the personalized selection mode control on the mode selection interface shown in fig. 9, and the sixth input is a single-click or double-click input performed by the user on the personalized selection mode control. The electronic device displays a camera selection interface in response to the sixth input; the camera selection interface is shown in fig. 16, and fig. 16 is a schematic diagram of the camera selection interface provided in this embodiment of the application.
It should be noted that fig. 16 maps the cameras of the electronic device shown in fig. 2 onto the screen; that is, the camera selection interface shown in fig. 16 includes 9 camera controls, X is equal to 9, and each circular control represents one camera control.
The main camera controls in the first row shown in fig. 16, from left to right, are the main camera control 1601, the main camera control 1602 and the main camera control 1603; the main camera control 1601 corresponds to the main camera 201 in fig. 2, the main camera control 1602 corresponds to the main camera 202 in fig. 2, and the main camera control 1603 corresponds to the main camera 203 in fig. 2. The wide-angle camera controls in the second row, from left to right, are the wide-angle camera control 1604, the wide-angle camera control 1605 and the wide-angle camera control 1606; the wide-angle camera control 1604 corresponds to the wide-angle camera 204 in fig. 2, the wide-angle camera control 1605 corresponds to the wide-angle camera 205 in fig. 2, and the wide-angle camera control 1606 corresponds to the wide-angle camera 206 in fig. 2. The macro camera controls in the third row, from left to right, are the macro camera control 1607, the macro camera control 1608 and the macro camera control 1609; the macro camera control 1607 corresponds to the macro camera 207 in fig. 2, the macro camera control 1608 corresponds to the macro camera 208 in fig. 2, and the macro camera control 1609 corresponds to the macro camera 209 in fig. 2.
In the case that the hardware of the electronic device supports all the cameras shown in fig. 2, all the camera controls shown in fig. 16 may be displayed in a first target color. For example, if the first target color is blue, all 9 camera controls are displayed in blue; blue indicates that a camera control is selectable, and the user can select the camera associated with a blue camera control by clicking it. If the electronic device supports only 7 of the 9 camera controls, those 7 camera controls are displayed in a first target mode and the remaining 2 are displayed in another mode. The first target mode may be a color display mode; for example, the remaining 2 camera controls are displayed in gray, indicating that they are not selectable.
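A hypothetical model of this availability rule, with all names illustrative: supported cameras render their controls in the first target color and are selectable, unsupported ones render in gray.

```kotlin
// Hypothetical model of the camera selection interface: controls backed
// by hardware-supported cameras render in the first target color (blue),
// the rest in gray and non-selectable.
data class CameraControl(val id: String, val supported: Boolean) {
    val color: String get() = if (supported) "blue" else "gray"
    val selectable: Boolean get() = supported
}

fun selectableControls(all: List<CameraControl>): List<CameraControl> =
    all.filter { it.selectable }   // the Y of the X controls shown in blue
```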
The shooting preview method provided by the embodiment of the application is suited to displaying the cameras selectable by the user in the first target mode, so that the user can quickly judge from the display which cameras are selectable.
Optionally, after the camera selection interface is displayed in response to the sixth input, the method may further include the following steps:
receiving a seventh input of the user to the ith camera control in the Y camera controls;
displaying a fifth photographing preview interface in response to the seventh input;
and under the condition that i is greater than 1, the fifth shooting preview interface comprises an ith camera control and a composite image which are displayed according to a second target color, the composite image is obtained by synthesizing a first image acquired by a camera associated with the ith camera control into a second image, the second image is obtained based on images acquired by m cameras associated with m camera controls, m is equal to i-1, and the m camera controls are camera controls selected by a user from the Y camera controls before a seventh input of the user to the ith camera control is received.
It should be noted that when i is equal to 1, a first image collected by the camera associated with the 1st camera control is displayed on the fifth shooting preview interface, and the 1st camera control is displayed in the second target color. The seventh input is, for example, a click input performed by the user on a first camera control shown in fig. 16. If the first camera control is the camera control associated with the main camera 201, that is, the main camera control 1601 of fig. 16, then the first image collected by the main camera 201 is displayed and, at the same time, the main camera control 1601 is displayed in a second target color, for example purple. When the user selects only one camera, the displayed first image is an image with a two-dimensional effect.
In this embodiment of the application, when i is greater than 1, suppose the user first selects a first camera control, for example the main camera control 1601, and the image collected by its associated camera is displayed. The user then selects a second camera control, that is, i = 2, for example the main camera control 1602 associated with the main camera 202. The image collected by the main camera 202 is then composited into the image collected by the main camera 201 to obtain a composite image, which is displayed on the fifth shooting preview interface; the composite image is, for example, the image 1701 shown in fig. 17, where fig. 17 is a schematic diagram of a fifth shooting preview interface display provided in this embodiment of the application. If the user then selects a third camera control, that is, i = 3, for example the wide-angle camera control 1606 associated with the wide-angle camera 206, the image collected by the wide-angle camera 206 is composited into the image 1701 of fig. 17, and the new composite image is displayed at the position of the image 1701.
When i is equal to 2, the first image is an image acquired by the main camera 202, the second image is an image acquired by the main camera 201, and the composite image is an image obtained by combining the image acquired by the main camera 202 with the image acquired by the main camera 201. In the case where i is equal to 3, the first image is an image acquired by the wide-angle camera 206, and the second image is a composite image obtained based on the image acquired by the main camera 201 and the image acquired by the main camera 202.
The embodiment of the application provides a shooting preview method suited to compositing, in real time, the images collected by the cameras selected by the user, so that composition is performed in the order in which the user selects the cameras and the user can see the intermediate results of the real-time composition. Because the user can select the cameras manually, the personalized needs of the user can be met.
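The incremental composition lends itself to a history of intermediate images, which also backs the return control described later. The Kotlin sketch below is hypothetical throughout; compose() stands in for whatever synthesis the device actually performs, and images are represented as plain strings for brevity.

```kotlin
// Hypothetical sketch of the incremental composition on the fifth shooting
// preview interface: each newly selected camera's image is composited into
// the previous result, and every intermediate image is kept so the return
// control can step back.
class PreviewComposer(private val compose: (String, String) -> String) {
    private val history = ArrayDeque<String>()

    // i-th selection: for i == 1 show the camera's own image, otherwise
    // composite it into the image built from the first i - 1 cameras.
    fun select(cameraImage: String): String {
        val next = history.lastOrNull()?.let { compose(it, cameraImage) }
            ?: cameraImage
        history.addLast(next)
        return next
    }

    fun current(): String? = history.lastOrNull()

    // Return control: drop the latest composite and show the second image.
    fun back(): String? {
        if (history.isNotEmpty()) history.removeLast()
        return history.lastOrNull()
    }
}

fun main() {
    val composer = PreviewComposer { a, b -> "$a+$b" }
    composer.select("main201")            // first image
    composer.select("main202")            // composite image 1
    println(composer.select("wide206"))   // composite image 2: main201+main202+wide206
    println(composer.back())              // back to composite image 1
}
```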
Optionally, after the fifth photographing preview interface is displayed in response to the seventh input, the method may further include the steps of:
receiving an eighth input of the user to the first target control on the fifth shooting preview interface;
and in response to the eighth input, determining the camera associated with the m cameras and the ith camera control as the target camera.
In this embodiment, the first target control is, for example, the confirmation control 1703 on the fifth shooting preview interface, and the eighth input is, for example, the user clicking the confirmation control 1703. Continuing the above example, after the user has selected the main camera control 1601, the main camera control 1602 and the wide-angle camera control 1606, the user clicks the confirmation control 1703, and in response to the eighth input, the main camera 201, the main camera 202 and the wide-angle camera 206 are determined as the target cameras.
The shooting preview method provided by the embodiment of the application is suited to letting the user select the cameras that meet his own needs, thereby meeting the personalized needs of the user.
Optionally, after the camera selection interface is displayed in response to the sixth input, the method may further include the following steps:
receiving ninth input of a user to a second target control on the camera selection interface;
responding to the ninth input, and displaying Y preview thumbnails acquired by Y cameras related to the Y camera controls;
receiving tenth input of j preview thumbnails in the Y preview thumbnails by the user;
in response to a tenth input, determining the cameras associated with the j preview thumbnails as target cameras;
wherein j is a positive integer and is less than or equal to Y.
In this embodiment, the second target control is, for example, a one-key preview control on the camera selection interface, the ninth input is, for example, the user clicking the one-key preview control, and Y preview thumbnails are displayed in response to the ninth input. For example, in the case that the macro camera 207 and the macro camera 209 are unavailable, Y is equal to 7 and the interface shown in fig. 18 is displayed; fig. 18 is a schematic view of a preview thumbnail display provided by an embodiment of the present application. The interface shown in fig. 18 displays 7 preview thumbnails, with the macro camera control associated with the macro camera 207 and the macro camera control associated with the macro camera 209 displayed in gray. The user can select j of the 7 preview thumbnails; the tenth input may be the user clicking j of the 7 preview thumbnails, and the cameras associated with the j preview thumbnails clicked by the user are determined as the target cameras.
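A hypothetical sketch of this one-key preview selection, with all types illustrative: one thumbnail per available camera, and the cameras behind the j picked thumbnails become the target camera.

```kotlin
// Hypothetical one-key preview flow: show a thumbnail per available camera
// and let the user pick j of them; the associated cameras become the target.
data class Thumb(val cameraId: String, val image: String)

fun targetCamerasFrom(thumbs: List<Thumb>, picked: Set<Int>): List<String> =
    thumbs.filterIndexed { idx, _ -> idx in picked }.map { it.cameraId }

fun main() {
    val thumbs = listOf(Thumb("main-201", "img1"), Thumb("wide-205", "img2"),
                        Thumb("macro-208", "img3"))
    println(targetCamerasFrom(thumbs, setOf(0, 2)))  // [main-201, macro-208]
}
```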
The shooting preview method provided by the embodiment of the application is suited to scenarios in which the user views the preview image collected by each available camera; the user can select cameras according to the displayed preview thumbnails, and thus according to his own needs.
Optionally, after the fifth photographing preview interface is displayed in response to the seventh input, the method may further include the following steps:
receiving eleventh input of a user to a return control on the fifth shooting preview interface;
in response to the eleventh input, displaying the second image.
In the embodiment of the present application, the return control on the fifth shooting preview interface is, for example, the control 1702 shown in fig. 17, and the eleventh input is, for example, the user clicking the control 1702; the electronic device displays the second image in response to the eleventh input, that is, the second image is displayed at the position of the image 1701. For example, suppose the user selects a first camera control, such as the main camera control 1601, and the image collected by its associated camera is displayed. The user then selects a second camera control, for example the main camera control 1602 associated with the main camera 202, and the image collected by the main camera 202 is composited into the image collected by the main camera 201 to obtain and display composite image 1. If the user then selects a third camera control, for example the wide-angle camera control 1606, composite image 2 is displayed, obtained by compositing the image collected by the wide-angle camera 206 into composite image 1. If, after composite image 2 is displayed, the user clicks the return control 1702 shown in fig. 17, the electronic device responds to this eleventh input by displaying composite image 1.
The execution subject of the shooting preview method provided by the embodiment of the present application may be a shooting device, or a control module in the shooting device for executing the shooting preview method. In the embodiment of the present application, a shooting device executing the shooting preview method is taken as an example to describe the shooting device provided by the embodiment of the present application.
Referring to fig. 19, fig. 19 is a schematic structural diagram of a shooting preview apparatus provided in an embodiment of the present application, where the apparatus is applied to an electronic device, the electronic device includes X cameras of at least one type, and the shooting preview apparatus 1900 includes:
an obtaining module 1901, configured to obtain first image data acquired by a target camera, where the first image data includes image data of M first target objects, and the target camera includes at least two cameras in the X cameras;
the first display module 1902 is configured to display a first shooting preview interface based on the first image data, where the first shooting preview interface includes three-dimensional live-action images of N first target objects, X, M, and N are positive integers, X is greater than or equal to 2, and N is less than or equal to M.
Optionally, the method further includes:
the first receiving module is used for receiving first input of a user to the tth three-dimensional live-action image in the three-dimensional live-action images of the N first target objects;
a rotation module for rotating the tth three-dimensional live-action image in response to the first input;
wherein t is a positive integer and is less than or equal to N.
Optionally, the method further includes:
the second display module is used for displaying a second shooting preview interface under the condition that relative space state data between the electronic equipment and the M first target objects are changed, wherein the second shooting preview interface comprises three-dimensional live-action images of P second target objects;
the three-dimensional live-action images of the P second target objects are constructed based on the acquired second image data acquired by the target camera, the second image data comprise image data of Q second target objects, P and Q are positive integers, and P is not more than Q.
Optionally, the method further includes:
the second receiving module is used for receiving second input of the target shooting control on the third shooting preview interface by the user;
and the third display module is used for responding to the second input and displaying a mode selection interface.
Optionally, the method further includes:
the third receiving module is used for receiving a third input of the user to the first control on the mode selection interface;
a fourth display module, configured to display a fourth shooting preview interface in response to the third input, where the fourth shooting preview interface includes a zoom progress bar thereon, a first position of the zoom progress bar includes a slider bar, and the first position indicates a first view mode associated with a first target camera of the X cameras.
Optionally, the method further includes:
the first determining module is used for determining a target view mode from at least one view mode consisting of the X cameras;
a second determination module for determining the target camera associated with the target view mode;
the first display module is specifically configured to display the first shooting preview interface according to the target view mode based on the first image data.
Optionally, the first determining module is specifically configured to, when a fourth input by the user on the zoom progress bar or the slider bar is received, respond to the fourth input by updating the slider bar to be displayed at a second position of the zoom progress bar, and determine a second view mode indicated by the second position as the target view mode;
wherein the second field of view mode is associated with a second target camera of the X cameras.
Optionally, the first determining module is specifically configured to determine the first view mode as the target view mode when a fourth input of the zoom progress bar or the slider bar by the user is not received.
Optionally, the method further includes:
the first switching module is used for switching the first shooting preview interface displayed according to the target view mode to be displayed according to a third view mode under the condition that the first shooting preview interface does not comprise part or all of the K three-dimensional live-action images;
and K is a positive integer, K is less than or equal to N, and the view angle of the third view mode is larger than that of the target view mode.
Optionally, the method further includes:
the second switching module is used for switching the first shooting preview interface displayed based on the target view mode to be displayed based on a fourth view mode under the condition that the area occupied by the N three-dimensional live-action images in the first shooting preview interface is smaller than or equal to a preset area and the first shooting preview interface does not comprise part or all of the N three-dimensional live-action images;
wherein a field angle of the fourth field of view mode is smaller than a field angle of the target field of view mode.
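Taken together, the two switching modules implement an automatic widening or narrowing of the field of view. The helper below is a sketch under a simplified reading of the two conditions (clipped models call for a wider field angle; an occupied area at or below the preset area calls for a narrower one); the function name and parameters are assumptions, not the embodiment's exact triggers:

// Hypothetical sketch of the automatic view-mode switch driven by the
// third (wider) and fourth (narrower) view modes.
fun nextFieldAngle(
    current: Float,             // field angle of the target view mode
    wider: Float,               // field angle of the third view mode
    narrower: Float,            // field angle of the fourth view mode
    anyModelClipped: Boolean,   // part/all of the models fall outside the interface
    occupiedArea: Float,        // area covered by the N live-action images
    presetArea: Float
): Float = when {
    anyModelClipped && wider > current -> wider
    occupiedArea <= presetArea && narrower < current -> narrower
    else -> current
}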
Optionally, the method further includes:
the fourth receiving module is used for receiving sixth input of the user to the second control on the mode selection interface;
and the sixth display module is used for responding to the sixth input and displaying a camera selection interface, wherein the camera selection interface comprises X camera controls, Y camera controls in the X camera controls are displayed in a first target mode, Y is a positive integer, and Y is not more than X.
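As an illustration of the camera selection interface state, the sketch below assumes that the Y controls rendered in the first target manner are those whose cameras are currently selectable; that interpretation, and the names used, are assumptions rather than part of the embodiment:

// Hypothetical sketch: X camera controls are shown, of which Y are
// rendered in the first target manner.
data class CameraControl(val cameraId: Int, val selectable: Boolean)

fun renderStyles(controls: List<CameraControl>): Map<Int, String> =
    controls.associate { control ->
        control.cameraId to if (control.selectable) "first-target-manner" else "default"
    }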
Optionally, the method further includes:
a fifth receiving module, configured to receive a seventh input to an ith camera control in the Y camera controls by a user;
a seventh display module, configured to display a fifth shooting preview interface in response to the seventh input;
and under the condition that i is greater than 1, the fifth shooting preview interface comprises an ith camera control and a composite image which are displayed according to a second target color, the composite image is obtained by synthesizing a first image acquired by a camera associated with the ith camera control into a second image, the second image is obtained based on images acquired by m cameras associated with m camera controls, m is equal to i-1, and the m camera controls are camera controls selected by the user from the Y camera controls before receiving the seventh input of the user to the ith camera control.
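The running composite described above can be sketched as follows; CompositeBuilder and its layer list are illustrative assumptions, and the actual image synthesis (alignment, blending) is elided:

// Hypothetical sketch: the image from the i-th selected camera (the first
// image) is synthesized into the composite built from the m = i - 1
// previously selected cameras (the second image).
class CompositeBuilder {
    private val layers = mutableListOf<Pair<Int, ByteArray>>()  // (cameraId, image)
    fun onCameraSelected(cameraId: Int, image: ByteArray): Int {
        layers += cameraId to image   // stand-in for the actual synthesis step
        return layers.size            // number of images now in the composite
    }
}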
Optionally, the method further includes:
a sixth receiving module, configured to receive an eighth input of the first target control on the fifth shooting preview interface from the user;
a third determining module, configured to determine, in response to the eighth input, a camera associated with the m cameras and the ith camera control as the target camera.
Optionally, the method further includes:
the seventh receiving module is used for receiving ninth input of a user on a second target control on the camera selection interface;
an eighth display module, configured to display, in response to the ninth input, Y preview thumbnails acquired by Y cameras associated with the Y camera controls;
an eighth receiving module, configured to receive a tenth input of the user to j preview thumbnails in the Y preview thumbnails;
a fourth determining module, configured to determine, in response to the tenth input, a camera associated with the j preview thumbnails as the target camera;
wherein j is a positive integer and is less than or equal to Y.
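Finally, the thumbnail-based path to the target camera can be sketched as below; ThumbnailSelector and onTenthInput are hypothetical names used only to mirror the steps above:

// Hypothetical sketch: tapping j of the Y preview thumbnails marks the
// cameras associated with those thumbnails as the target camera.
class ThumbnailSelector(private val cameraIds: List<Int>) {   // the Y cameras
    private val chosen = linkedSetOf<Int>()
    fun onTenthInput(thumbnailIndex: Int) {                   // one of the j taps
        chosen += cameraIds[thumbnailIndex]
    }
    fun targetCameras(): List<Int> = chosen.toList()          // j <= Y cameras
}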
The shooting preview device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), an automated teller machine, a self-service machine, and the like, which are not specifically limited in the embodiment of the present application.
The shooting preview device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The shooting preview device provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1; details are not repeated here to avoid repetition.
Optionally, an electronic device is further provided in an embodiment of the present application, as shown in fig. 20, which is a schematic diagram of the hardware structure of an electronic device implementing the embodiment of the present application. The electronic device 2000 includes a processor 2001, a memory 2002, and a program or an instruction stored in the memory 2002 and executable on the processor 2001. When executed by the processor 2001, the program or the instruction implements each process of the embodiment of the shooting preview method and can achieve the same technical effect; details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 21 is a schematic hardware configuration diagram of another electronic device for implementing the embodiment of the present application.
The electronic device 2100 includes, but is not limited to: a radio frequency unit 2101, a network module 2102, an audio output unit 2103, an input unit 2104, a sensor 2105, a display unit 2106, a user input unit 2107, an interface unit 2108, a memory 2109, and a processor 2110.
Those skilled in the art will appreciate that the electronic device 2100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically coupled to the processor 2110 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed via the power management system. The electronic device structure shown in fig. 21 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components; details are not repeated here.
The processor 2110 is configured to obtain first image data acquired by a target camera, where the first image data includes image data of M first target objects, and the target camera includes at least two cameras among the X cameras;
and displaying a first shooting preview interface through a display unit 2106 based on the first image data, wherein the first shooting preview interface comprises three-dimensional live-action images of N first target objects, X, M and N are positive integers, X is more than or equal to 2, and N is less than or equal to M.
A user input unit 2107 for receiving a first input of a user to a tth three-dimensional live view image of the three-dimensional live view images of the N first target objects;
a processor 2110 for rotating said t-th three-dimensional live-action image in response to said first input;
wherein t is a positive integer and t is less than or equal to N.
The processor 2110 is further configured to, in a case that relative spatial state data between the electronic device and the M first target objects changes, display a second shooting preview interface through the display unit 2106, where the second shooting preview interface includes three-dimensional live-action images of P second target objects;
the three-dimensional live-action images of the P second target objects are constructed based on the acquired second image data acquired by the target camera, the second image data comprise image data of Q second target objects, P and Q are positive integers, and P is not more than Q.
A user input unit 2107, configured to receive a second input of the target shooting control on the third shooting preview interface from the user;
a processor 2110 for displaying a mode selection interface through the display unit 2106 in response to the second input.
A user input unit 2107, configured to receive a third input by a user to the first control on the mode selection interface;
a processor 2110 for displaying a fourth shooting preview interface through the display unit 2106 in response to the third input, wherein the fourth shooting preview interface includes a zoom progress bar thereon, a first position of the zoom progress bar includes a slider bar, and the first position indicates a first view mode associated with a first target camera of the X cameras.
The processor 2110 is used for determining a target view mode from at least one view mode formed by the X cameras;
determining the target camera associated with the target view mode;
the first shooting preview interface is displayed through the display unit 2106 according to the target view mode based on the first image data.
A processor 2110, configured to, in a case where a fourth input to the zoom progress bar or the slider bar by the user is received, respond to the fourth input, update the slider bar through the display unit 2106 to be displayed at a second position of the zoom progress bar, and determine a second view mode indicated by the second position as the target view mode;
wherein the second field of view mode is associated with a second target camera of the X cameras.
A processor 2110, configured to determine the first view mode as the target view mode if a fourth input to the zoom progress bar or the slider bar by the user is not received.
The processor 2110 is configured to switch the first shooting preview interface displayed according to the target view mode to be displayed according to a third view mode when the first shooting preview interface does not include part or all of the K three-dimensional live view images;
and K is a positive integer, K is less than or equal to N, and the view angle of the third view mode is larger than that of the target view mode.
The processor 2110 is configured to switch the first shooting preview interface displayed based on the target view mode to be displayed based on a fourth view mode when an area occupied by N three-dimensional live view images in the first shooting preview interface is smaller than or equal to a preset area and the first shooting preview interface does not include part or all of the N three-dimensional live view images;
wherein a field of view angle of the fourth field of view mode is smaller than a field of view angle of the target field of view mode.
A user input unit 2107, configured to receive a sixth input from the user to the second control on the mode selection interface;
the processor 2110 is configured to display a camera selection interface through the display unit 2106 in response to the sixth input, where the camera selection interface includes X camera controls, Y of the X camera controls are displayed in a first target manner, Y is a positive integer, and Y is not greater than X.
A user input unit 2107, configured to receive a seventh input of the ith camera control in the Y camera controls from the user;
a processor 2110 for displaying a fifth shooting preview interface through the display unit 2106 in response to the seventh input;
and under the condition that i is greater than 1, the fifth shooting preview interface comprises an ith camera control and a composite image which are displayed according to a second target color, the composite image is obtained by synthesizing a first image acquired by a camera associated with the ith camera control into a second image, the second image is obtained based on images acquired by m cameras associated with m camera controls, m is equal to i-1, and the m camera controls are camera controls selected by a user from the Y camera controls before a seventh input of the user to the ith camera control is received.
A user input unit 2107, configured to receive an eighth input by the user to the first target control on the fifth shooting preview interface;
a processor 2110 for determining, in response to the eighth input, a camera associated with the m cameras and the i-th camera control as the target camera.
A user input unit 2107, configured to receive a ninth input of the second target control on the camera selection interface from the user;
a processor 2110 for displaying, via a display unit 2106, Y preview thumbnails acquired by Y cameras associated with the Y camera controls in response to the ninth input;
a user input unit 2107 for receiving a tenth input by the user on j preview thumbnails among the Y preview thumbnails;
a processor 2110 for determining, in response to the tenth input, the camera associated with the j preview thumbnails as the target camera;
wherein j is a positive integer and is less than or equal to Y.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium. When executed by a processor, the program or the instruction implements each process of the above shooting preview method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be understood that in the embodiment of the present application, the input unit 2104 may include a graphics processing unit (GPU) 21041 and a microphone 21042, and the graphics processing unit 21041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 2106 may include a display panel 21061, and the display panel 21061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 2107 includes a touch panel 21071 and other input devices 21072. The touch panel 21071 is also referred to as a touch screen. The touch panel 21071 may include two portions: a touch detection device and a touch controller. Other input devices 21072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 2109 may be used for storing software programs and various data, including but not limited to application programs and an operating system. The processor 2110 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, and the like, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 2110.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above shooting preview method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A shooting preview method is applied to an electronic device, wherein the electronic device comprises X cameras of at least one type, and the method comprises the following steps:
acquiring first image data acquired by a target camera, wherein the first image data comprises image data of M first target objects, and the target camera comprises at least two cameras in the X cameras;
displaying a first shooting preview interface according to a target view mode based on the first image data, wherein the first shooting preview interface comprises three-dimensional live-action images of N first target objects, X, M and N are positive integers, X is more than or equal to 2, and N is less than or equal to M;
wherein the target view mode is a view mode associated with the target camera;
before the acquiring the first image data collected by the target camera, the method further comprises:
receiving a second input of the target shooting control on the third shooting preview interface from the user;
displaying a mode selection interface in response to the second input;
receiving a third input of a user to a first control on the mode selection interface;
in response to the third input, displaying a fourth shooting preview interface, wherein the fourth shooting preview interface includes a zoom progress bar thereon, a first position of the zoom progress bar including a slider bar, the first position indicating a first view mode associated with a first target camera of the X cameras.
2. The method of claim 1, further comprising, after said displaying a first shooting preview interface according to a target view mode based on said first image data:
receiving first input of a user to a tth three-dimensional live-action image in the three-dimensional live-action images of the N first target objects;
rotating the tth three-dimensional live-action image in response to the first input;
wherein t is a positive integer and t is less than or equal to N.
3. The method of claim 1, further comprising, after said displaying a first shooting preview interface according to a target view mode based on said first image data:
displaying a second shooting preview interface under the condition that relative space state data between the electronic equipment and the M first target objects are changed, wherein the second shooting preview interface comprises three-dimensional live-action images of P second target objects;
the three-dimensional live-action images of the P second target objects are constructed based on the acquired second image data acquired by the target camera, the second image data comprise image data of Q second target objects, P and Q are positive integers, and P is not more than Q.
4. The method of claim 1, further comprising, after said displaying a fourth shooting preview interface in response to the third input:
determining the target view mode from at least one view mode formed by the X cameras;
determining the target camera associated with the target view mode.
5. The method according to claim 4, wherein said determining the target view mode from the at least one view mode formed by the X cameras comprises:
in a case where a fourth input to the zoom progress bar or the slider bar by the user is received, in response to the fourth input, updating the slider bar to be displayed at a second position of the zoom progress bar, and determining a second view mode indicated by the second position as the target view mode;
wherein the second field of view mode is associated with a second target camera of the X cameras.
6. The method of claim 1, further comprising, after said displaying a first shooting preview interface according to a target view mode based on said first image data:
under the condition that the first shooting preview interface does not comprise part or all of the K three-dimensional live-action images, switching the first shooting preview interface displayed according to the target view mode into display according to a third view mode;
and K is a positive integer, K is less than or equal to N, and the view angle of the third view mode is larger than that of the target view mode.
7. The method of claim 1, further comprising, after said displaying a first shooting preview interface according to a target view mode based on said first image data:
when the area occupied by the N three-dimensional live-action images in the first shooting preview interface is smaller than or equal to a preset area and the first shooting preview interface does not comprise part or all of the N three-dimensional live-action images, switching the first shooting preview interface displayed based on the target view mode to be displayed based on a fourth view mode;
wherein a field of view angle of the fourth field of view mode is smaller than a field of view angle of the target field of view mode.
8. The method of claim 1, further comprising, after said displaying a mode selection interface in response to said second input:
receiving a sixth input by the user to a second control on the mode selection interface;
and responding to the sixth input, and displaying a camera selection interface, wherein the camera selection interface comprises X camera controls, Y camera controls in the X camera controls are displayed in a first target mode, Y is a positive integer, and Y is not more than X.
9. The method of claim 8, further comprising, after said displaying a camera selection interface in response to said sixth input:
receiving a seventh input of a user to an ith camera control in the Y camera controls;
displaying a fifth shooting preview interface in response to the seventh input;
and under the condition that i is larger than 1, the fifth shooting preview interface comprises an ith camera control and a composite image, wherein the ith camera control and the composite image are displayed according to a second target color, the composite image is obtained by synthesizing a first image acquired by a camera associated with the ith camera control into a second image, the second image is obtained based on images acquired by m cameras associated with m camera controls, m is equal to i-1, and the m camera controls are camera controls selected by a user from the Y camera controls before a seventh input of the user to the ith camera control is received.
10. The method of claim 9, wherein after said displaying a fifth shooting preview interface in response to said seventh input, further comprising:
receiving an eighth input of the first target control on the fifth shooting preview interface by the user;
in response to the eighth input, determining the camera associated with the m cameras and the ith camera control as the target camera.
11. The method of claim 8, further comprising, after said displaying a camera selection interface in response to said sixth input:
receiving a ninth input of a user to a second target control on the camera selection interface;
responding to the ninth input, and displaying Y preview thumbnails acquired by Y cameras associated with the Y camera controls;
receiving tenth input of j preview thumbnails in the Y preview thumbnails by a user;
in response to the tenth input, determining the camera associated with the j preview thumbnails as the target camera;
wherein j is a positive integer and is less than or equal to Y.
12. A shooting preview apparatus, which is applied to an electronic device including X cameras of at least one type, the apparatus comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring first image data acquired by a target camera, the first image data comprises image data of M first target objects, and the target camera comprises at least two cameras in the X cameras;
the first display module is used for displaying a first shooting preview interface according to a target view field mode based on the first image data, wherein the first shooting preview interface comprises three-dimensional live-action images of N first target objects, X, M and N are positive integers, X is more than or equal to 2, and N is less than or equal to M; wherein the target view mode is a view mode associated with the target camera;
the second receiving module is used for receiving second input of a user to a target shooting control on a third shooting preview interface before the acquisition module acquires the first image data acquired by the target camera;
a third display module for displaying a mode selection interface in response to the second input;
the third receiving module is used for receiving a third input of the user to the first control on the mode selection interface;
a fourth display module, configured to display a fourth shooting preview interface in response to the third input, where the fourth shooting preview interface includes a zoom progress bar thereon, a first position of the zoom progress bar includes a slider bar, and the first position indicates a first view mode associated with a first target camera of the X cameras.
CN202110105879.6A 2021-01-26 2021-01-26 Shooting preview method and device and electronic equipment Active CN112887603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110105879.6A CN112887603B (en) 2021-01-26 2021-01-26 Shooting preview method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110105879.6A CN112887603B (en) 2021-01-26 2021-01-26 Shooting preview method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112887603A CN112887603A (en) 2021-06-01
CN112887603B true CN112887603B (en) 2023-01-24

Family

ID=76052311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110105879.6A Active CN112887603B (en) 2021-01-26 2021-01-26 Shooting preview method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112887603B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113747076A (en) * 2021-09-26 2021-12-03 维沃移动通信有限公司 Shooting method and device and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1910577A (en) * 2004-01-15 2007-02-07 松下电器产业株式会社 Image file list display device
CN107222677A (en) * 2017-05-27 2017-09-29 成都通甲优博科技有限责任公司 The method and device that multi-cam is opened simultaneously
CN108833796A (en) * 2018-09-21 2018-11-16 维沃移动通信有限公司 A kind of image capturing method and terminal
CN109600550A (en) * 2018-12-18 2019-04-09 维沃移动通信有限公司 A kind of shooting reminding method and terminal device
CN109769091A (en) * 2019-02-22 2019-05-17 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN109859307A (en) * 2018-12-25 2019-06-07 维沃移动通信有限公司 A kind of image processing method and terminal device
CN110505411A (en) * 2019-09-03 2019-11-26 RealMe重庆移动通信有限公司 Image capturing method, device, storage medium and electronic equipment
CN111787224A (en) * 2020-07-10 2020-10-16 深圳传音控股股份有限公司 Image acquisition method, terminal device and computer-readable storage medium

Also Published As

Publication number Publication date
CN112887603A (en) 2021-06-01

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant