CN111083371A - Shooting method and electronic equipment - Google Patents


Info

Publication number
CN111083371A
CN111083371A (application CN201911374454.4A)
Authority
CN
China
Prior art keywords
preview
input
target area
preview picture
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911374454.4A
Other languages
Chinese (zh)
Inventor
王家伟 (Wang Jiawei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911374454.4A priority Critical patent/CN111083371A/en
Publication of CN111083371A publication Critical patent/CN111083371A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention discloses a shooting method and an electronic device. The method includes: when a first preview picture and a second preview picture are displayed on a shooting preview interface, receiving a first input selecting a target area in the first preview picture, where the first and second preview pictures are viewfinder pictures of cameras facing different directions on the electronic device; in response to the first input, displaying an image corresponding to the target area in the second preview picture; receiving a second input for shooting the second preview picture and the target area; and, in response to the second input, displaying a fused image of the second preview picture and the target area. This solves the problem that few shot images turn out usable (a low rate of usable shots).

Description

Shooting method and electronic equipment
Technical Field
The embodiment of the invention relates to the field of communication technology, and in particular to a shooting method, a shooting apparatus, an electronic device, and a storage medium.
Background
With the development of electronic devices and the mobile Internet, electronic devices have become an indispensable part of people's lives, and users often use them to capture memorable moments.
However, a user can photograph only a single scene at a time, so scenery off to the side may be missed. When a user finds appealing views across multiple scenes and wants to splice parts of them together, post-capture compositing is possible, but the differing picture characteristics of the separate images often make it hard to synthesize the image the user wants, resulting in a low rate of usable shots.
Disclosure of Invention
The embodiment of the invention provides a shooting method, a shooting apparatus, an electronic device, and a storage medium, aiming to solve the problem in the related art that the rate of usable shots is low.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a shooting method applied to an electronic device, where the method may include:
when a first preview picture and a second preview picture are displayed on a shooting preview interface, receiving a first input selecting a target area in the first preview picture, the first and second preview pictures being viewfinder pictures of cameras facing different directions on the electronic device;
in response to the first input, displaying an image corresponding to the target area in the second preview picture;
receiving a second input for shooting the second preview picture and the target area;
in response to the second input, displaying a fused image of the second preview picture and the target area.
In a second aspect, an embodiment of the present invention provides a shooting apparatus applied to an electronic device, where the shooting apparatus may include:
a receiving module, configured to receive, when a first preview picture and a second preview picture are displayed on a shooting preview interface, a first input selecting a target area in the first preview picture, the first and second preview pictures being viewfinder pictures of cameras facing different directions on the electronic device;
a display module, configured to display, in response to the first input, the image corresponding to the target area in the second preview picture;
the receiving module being further configured to receive a second input for shooting the second preview picture and the target area;
and a processing module, configured to display, in response to the second input, a fused image of the second preview picture and the target area.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the shooting method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a computer, causes the computer to perform the shooting method according to the first aspect.
In the embodiment of the invention, when a first preview picture and a second preview picture are displayed on a shooting preview interface, a first input selecting a target area in the first preview picture is received; the first and second preview pictures are viewfinder pictures of cameras facing different directions on the electronic device. A target area is then determined in the first preview picture, the image corresponding to the target area is displayed in the second preview picture, and a fused image of the second preview picture and the target area is displayed according to user input. The shooting method therefore lets the user capture nearby scenery without missing a view. Because local editing is done during the shooting preview, the user can observe the effect of the shot in real time and choose a suitable moment to shoot. Image capture thus requires no post-processing, which saves shooting time, reduces shooting cost, and avoids the risk that post-processing still fails to satisfy the user, thereby improving the rate of usable shots.
Drawings
The present invention will be better understood from the following description of specific embodiments thereof taken in conjunction with the accompanying drawings, in which like or similar reference characters designate like or similar features.
Fig. 1 is a schematic view of an application scenario of a shooting method according to an embodiment of the present invention;
fig. 2 is a schematic view of an application scenario of another shooting method according to an embodiment of the present invention;
fig. 3 is a schematic view of a position of a camera of an electronic device according to an embodiment of the present invention;
fig. 4 is a schematic view of an application scenario of another shooting method according to an embodiment of the present invention;
fig. 5 is a flowchart of a shooting method according to an embodiment of the present invention;
fig. 6 is a flowchart of a method for implementing a shooting method according to an embodiment of the present invention;
fig. 7 is an interface schematic diagram of a plurality of preview pictures on a shooting preview interface according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an interface for determining a target image according to an embodiment of the present invention;
fig. 9 is an interface schematic diagram of an adjustment target image based on a second preview screen according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 11 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problems in the related art, embodiments of the present invention provide a shooting method, an apparatus, an electronic device, and a storage medium, so as to address the low rate of usable shots of captured images.
The shooting method provided by the embodiment of the invention can be applied to the following application scenarios, described in detail below:
the user wants to take the same image with objects in different orientations. As shown in fig. 1, receiving an operation of a user to start a camera, and displaying a first preview picture and a second preview picture on a shooting preview interface, where the first preview picture and the second preview picture are respectively view-finding pictures of cameras in different directions of an electronic device; then, receiving a first input of selecting a target area on a first preview picture; responding to the first input, and displaying an image corresponding to the target area in the second preview screen; receiving a second input of shooting a second preview picture and a target area; and generating a fused image of the second preview screen and the target area in response to the second input, and displaying the fused image to the user.
The shooting method in the embodiment of the invention therefore lets the user capture the desired nearby scenery without missing a view. Because local editing is performed during the shooting preview, the user can observe the effect of the shot in real time and choose a suitable moment to shoot. This enables image capture without post-processing, saving shooting time and reducing shooting cost, while avoiding the risk that post-processing still fails to satisfy the user, thereby improving the rate of usable shots.
In addition, as shown in fig. 2, a first input selecting a target area in the first preview picture is received; when the target area is the entire first preview picture, the second input may be answered by generating a stitched image of the second preview picture and the panorama of the target area, that is, the fused image is a panoramic stitched image. This satisfies a user who wants photos of himself across multiple scenes without moving the electronic device to sweep a panorama as in the related art, so that even a user with poor shooting technique can capture a stable panoramic image. Furthermore, if the stitched image is given 3D processing, the method can synthesize a 3D panoramic picture with full stereoscopic effect and no spatial distortion.
It should be noted that, as shown in fig. 3, the electronic device in the embodiment of the present invention is equipped with cameras in multiple orientations, which may include several of the following: rear camera 31, front camera 32, left camera 33, right camera 34, top camera 35, and bottom camera 36.
Each orientation in the embodiment of the present invention may carry at least one camera; for example, the rear camera 31 may be a dual or triple camera. In some implementations, the first and second preview pictures may be acquired by any two of the cameras; that is, the pictures may come from two cameras in different orientations, or from two different cameras in the same orientation. For example, as shown in fig. 4, a user may want to capture the intersection of the frames acquired by two such cameras, where each frame presents the objects at a different zoom factor. Using the manner described above, the user can obtain several objects displayed at different zoom factors in a single fused image, without post-processing images taken at different capture ranges and zoom factors.
In summary, based on the application scenario, the following describes the shooting method provided by the embodiment of the present invention in detail.
Fig. 5 is a flowchart of a shooting method according to an embodiment of the present invention.
As shown in fig. 5, the shooting method may include steps 510 to 540, as follows:
Step 510: when a first preview picture and a second preview picture are displayed on a shooting preview interface, receive a first input selecting a target area in the first preview picture.
Step 520: in response to the first input, display an image corresponding to the target area in the second preview picture.
Step 530: receive a second input for shooting the second preview picture and the target area.
Step 540: in response to the second input, display a fused image of the second preview picture and the target area.
The shooting method in the embodiment of the invention therefore lets the user capture the desired nearby scenery without missing a view. Because local editing is performed during the shooting preview, the user can observe the effect of the shot in real time and choose a suitable moment to shoot. This enables image capture without post-processing, saving shooting time and reducing shooting cost, while avoiding the risk that post-processing still fails to satisfy the user, thereby improving the rate of usable shots.
The above steps are explained in detail below.
Referring to step 510: because the method provided in the embodiment of the present invention may be applied in multiple scenes, the first and second preview pictures, and their states, differ with the application scene. Three possible examples follow:
in one possible embodiment, the first preview screen and the second preview screen are respectively the viewfinder screens of the cameras in different orientations of the electronic device.
In another possible embodiment, the first preview screen and the second preview screen are respectively view-finding screens of cameras in different orientations of the electronic device (or view-finding screens of different cameras in the same orientation), the view-finding screens captured in the first preview screen and the second preview screen have an intersection, and the zoom multiples corresponding to the first preview screen and the second preview screen are different.
In yet another possible embodiment, the first preview screen and the second preview screen are respectively a viewfinder screen of a camera in different orientations of the electronic device, and the first preview screen and the second preview screen are screens which are acquired at the same time and have at least one same area.
Referring to step 520, in the embodiment of the present invention, two ways are provided to display the image corresponding to the target area in the second preview screen.
In one approach, upon receiving the first input, the electronic device automatically displays the target area in the second preview picture according to preset parameters (e.g., the size and position of the target area in the second preview picture): in response to the first input, the feature parameters of the second preview picture are adjusted using a dual-camera alignment algorithm, and the target area is displayed in the second preview picture according to those parameters. This reduces user operations and improves shooting efficiency.
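The preset-parameter mapping can be illustrated with a toy coordinate transform, under the assumption that the dual-camera alignment reduces to a single scale plus offset between the two previews (a real alignment algorithm estimates such parameters per frame from image features). All names and numbers here are invented for the sketch.

```python
# Hypothetical mapping of a target-area rectangle from first-preview
# coordinates into second-preview coordinates, given alignment parameters.
def map_rect(rect, scale, dx, dy):
    """Map (x, y, w, h) from the first preview into the second preview."""
    x, y, w, h = rect
    return (round(x * scale + dx), round(y * scale + dy),
            round(w * scale), round(h * scale))

# e.g. the second camera's view is half the scale and shifted by (10, 20)
mapped = map_rect((100, 40, 60, 30), scale=0.5, dx=10, dy=20)
```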
In the other approach, upon receiving the first input, the electronic device generates a floating window corresponding to the target area, so that the user can adjust its size and position as needed: in response to the first input, a movable floating window containing the target area is displayed in the first preview picture using the dual-camera alignment algorithm; a third input moving the floating window into the second preview picture is received; and, in response to the third input, the target area is displayed in the second preview picture. This offers personalization and improves the user experience.
Based on this, in a possible embodiment, before the step of displaying the target area in the second preview picture, the method further includes: receiving a fourth input that changes the size of the floating window; and, in response to the fourth input, displaying the enlarged or reduced target area in the second preview picture at a zoom scale associated with the fourth input.
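Enlarging or reducing the target area at a chosen zoom scale can be sketched with nearest-neighbour resampling. This is only a toy stand-in for whatever scaler the device actually uses; the function and variable names are invented.

```python
# Hypothetical nearest-neighbour resize of a target area (2D list of pixels).
def resize_nearest(img, out_h, out_w):
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]

target = [[1, 2], [3, 4]]
enlarged = resize_nearest(target, 4, 4)  # 2x zoom of the floating window
```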
After the user adjusts the target area in the second preview picture, the second preview picture alone, or both preview pictures, may be shot as required; that is, step 530 is executed.
Referring to step 530: in the scenario where the user only wants to capture the view in the second preview picture, the second input represents an input for shooting the second preview picture and the target area. Alternatively, when the user also wants to capture another preview picture (e.g., the first preview picture) at the same time, the second input may represent shooting both the first and second preview pictures. The preview pictures the user needs can thus be shot on demand, offering personalized customization.
After receiving the shooting input, the electronic device generates the fused image according to the second input; see step 540 for details.
Referring to step 540, in an embodiment of the present invention, fused images in different forms may be generated and displayed based on different scenes, as shown in detail below.
Based on the three possible embodiments in step 510, this step generates a fused image of the second preview picture and the target area in response to the second input. The fused image may be a fusion of the target area from the first preview picture with the second preview picture; alternatively, the first and second preview pictures may be joined to form a panoramic stitched image.
When the fused image is a stitched image, step 540 may specifically include: when the first and second preview pictures are viewfinder pictures with different shooting ranges and the target area is the entire first preview picture, displaying a stitched image of the second preview picture and the panorama of the target area using feature-pixel matching.
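Feature-pixel matching for edge stitching can be caricatured by searching for the widest strip in which the two previews' edge columns agree exactly. Real stitching matches feature descriptors and blends the seam; this exact-match version is only a sketch, and every name in it is invented.

```python
# Hypothetical edge stitching of two previews (2D lists) into one panorama.
def find_overlap(left_img, right_img, max_overlap):
    """Widest overlap whose columns match exactly (stand-in for feature matching)."""
    for k in range(max_overlap, 0, -1):
        if all(row_l[-k:] == row_r[:k] for row_l, row_r in zip(left_img, right_img)):
            return k
    return 0

def stitch(left_img, right_img, max_overlap=8):
    k = find_overlap(left_img, right_img, max_overlap)
    return [row_l + row_r[k:] for row_l, row_r in zip(left_img, right_img)]

a = [[0, 1, 2, 3], [4, 5, 6, 7]]
b = [[2, 3, 8], [6, 7, 9]]   # overlaps a by its first two columns
pano = stitch(a, b)
```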
Thus, when the user wants photos spanning multiple scenes, there is no need to move the electronic device to sweep a panorama as in the prior art, so even a user with poor shooting technique can capture a stable panoramic image.
Based on this, in a possible embodiment, after the electronic device obtains the stitched image, it may further apply 3D processing (for example, feeding the stitched image into a 3D model) to obtain a 3D panoramic image. Because the panorama is not taken by moving the electronic device, but from first and second preview pictures captured under relatively stable conditions, the synthesized 3D panorama has full stereoscopic effect and no spatial distortion.
In addition, building on panoramic shooting, another possible scenario is provided in the embodiments of the present invention: when the viewfinder pictures captured in the first and second preview pictures intersect and the intersection contains the target area, a user who does not want the target area to appear in the final image may remove it from the first preview picture, so that the second preview picture also omits it. Specifically, before step 540, the method further includes:
when the first and second preview pictures contain the same target object, receiving a fifth input removing the first area where the target object is located in the first preview picture; in response to the fifth input, removing the first area from the first preview picture and filling it with a second area; and determining the second area as the target area.
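The remove-and-fill step can be sketched as copying a same-sized "second area" of pixels over the removed "first area". Real inpainting is far more sophisticated; this constant-size patch copy is only an illustration, with invented names throughout.

```python
# Hypothetical removal of the first area, filled from a second area of the
# same size; rects are (top, left, height, width).
def remove_and_fill(img, rect, src_rect):
    t, l, h, w = rect
    st, sl, _, _ = src_rect
    out = [row[:] for row in img]     # work on a copy of the preview
    for r in range(h):
        for c in range(w):
            out[t + r][l + c] = img[st + r][sl + c]  # fill from the second area
    return out

img = [[9, 9, 0, 0], [9, 9, 0, 0]]   # 9s mark the unwanted target object
patched = remove_and_fill(img, (0, 0, 2, 2), (0, 2, 2, 2))
```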
On this basis, step 540 is executed, achieving image capture without post-processing: shooting time is saved, shooting cost is reduced, and the risk that post-processing still fails to satisfy the user is avoided, thereby improving the rate of usable shots.
In addition, according to another possible embodiment in step 510, since the zoom factors of the first and second preview pictures differ, step 540 may specifically include:
when the viewfinder pictures of the first and second preview pictures intersect, acquiring a first zoom factor of the target area in the first preview picture and a second zoom factor of the second preview picture;
and, when the first zoom factor differs from the second, displaying the target area in the second preview picture at the second zoom factor.
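Resizing the target area to the second preview's zoom factor amounts to scaling its dimensions by the ratio of the two factors; a minimal sketch, with invented names, follows.

```python
# Hypothetical sizing of the target area for display at the second zoom factor.
def rescale_dims(h, w, zoom_from, zoom_to):
    """Return (height, width) of the target area as seen at zoom_to."""
    s = zoom_to / zoom_from
    return round(h * s), round(w * s)

# a 100x200 target area taken at 1x, shown inside a 2x preview
dims = rescale_dims(100, 200, zoom_from=1.0, zoom_to=2.0)
```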
Thus, the user can obtain multiple objects displayed at different zoom factors in a single fused image, without post-processing images taken at different shooting ranges and zoom factors.
In summary, to facilitate understanding, the shooting method in the embodiment of the invention is described below by way of example, using preview pictures obtained from cameras in multiple orientations of the electronic device (as shown in fig. 3).
Fig. 6 is a flowchart of a method for implementing a shooting method according to an embodiment of the present invention.
As shown in fig. 6, the method may include steps 610 to 670, which may be specifically as follows:
step 610, receiving a preset operation of a user to start an application program for shooting an image.
And step 620, responding to preset operation, starting a plurality of accessed cameras of the electronic equipment to acquire a preview screen of each camera.
In step 630, a plurality of preview screens are displayed on the shooting preview interface.
As shown in fig. 7, the preview pictures include: a first preview picture from the rear camera 31, a second from the front camera 32, a third from the left camera 33, a fourth from the right camera 34, a fifth from the top camera 35, and a sixth from the bottom camera 36.
It should be noted that, based on the displayed preview pictures, the user may select a multi-scene same-frame mode, in which the cameras in multiple orientations present their preview pictures for the same time period in a single interface. The target area can then be selected during the preview stage, i.e., step 640 is performed.
Step 640 receives a first input selecting a target area in the first preview screen.
Following the example in step 630, as shown in fig. 7, the interface contains the six preview pictures from the front, rear, left, right, top, and bottom cameras, each of which supports independent editing (e.g., zooming in, zooming out, filters, beautification). Six different scenes are shown in the six preview pictures; one of them contains a star-shaped feature region among other imaged objects. As shown in fig. 8, the user may tap the first preview picture to display it full-screen and trace a gesture on it (e.g., outlining the star); following the stroked gesture, the image is matted out using a pixel-difference algorithm to yield a candidate imaging feature region (i.e., the target area). If the wrong area is selected, the user can be prompted to select again.
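The pixel-difference matting can be caricatured as keeping only the pixels that differ from a reference background by more than a threshold. The patent does not specify the algorithm, so this is a toy stand-in with invented names, not the described implementation.

```python
# Hypothetical pixel-difference matting: pixels close to the background are
# dropped (None), pixels that differ strongly are kept as the matte.
def matte(img, background, threshold):
    return [[px if abs(px - bg) > threshold else None
             for px, bg in zip(row, brow)]
            for row, brow in zip(img, background)]

img = [[10, 200], [10, 10]]   # one bright foreground pixel
bg = [[10, 10], [10, 10]]
m = matte(img, bg, 50)
```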
Here, to implement independent editing of different cameras and/or preview pictures, the following is required:
A dual-camera alignment algorithm (SAT) is used between two cameras to align their fields of view (e.g., the entire preview picture). Once aligned, the two cameras support independent editing functions such as zoom in, zoom out, and filter changes. As shown in fig. 7, the user may select a target area in the first preview picture from the rear camera 31, and the electronic device mattes out the selected target area using portrait features and a pixel-difference algorithm. After the target area is determined, the front camera 32 aligns and frames the same target area in the second preview picture. As shown in fig. 9, although the user sees only one second preview picture (before and after editing), it is actually formed by the two cameras simultaneously: the user can have the front camera 32 align with the target area previously selected from the rear camera 31 (e.g., the five-pointed star in fig. 9) and edit it independently, without changing the effect in the first preview picture of the rear camera 31. This achieves local processing of a particular region during preview. In the second preview picture, the area outside the star is imaged at a large fov and the star at a small fov; the field of view and preview effect outside the star, and in the first preview picture, remain unchanged, while the star in the second preview picture changes (from the left image before editing to the enlarged right image). In this way, a photo-editing effect can be achieved directly during preview in specific scenes, so post-capture image processing is unnecessary.
Step 650, in response to the first input, displaying an image corresponding to the target area in the second preview screen.
In step 640, after the user selects a target area, the target area is recorded and a movable floating window containing it is displayed in the first preview picture using the dual-camera alignment algorithm. The user may select an imaging area in a different preview picture according to actual needs and drag the target area into another preview picture; that is, a third input moving the floating window into the second preview picture is received, and in response, the target area is displayed in the second preview picture. During preview, an image-synthesis algorithm composites the target area into the second preview picture, so the target area overlays the preview picture in front of it. For example, if the user selects a star-shaped target region in the first preview picture, the parameter values of the first preview picture (e.g., the crop region) are passed to the second preview picture for image cropping (all of which happens during preview), after which blended editing between the two cameras' preview regions is possible.
Alternatively, when the target area is moved to the second preview picture, the rear camera and/or the portion of the second preview picture not including the target area may be independently edited or left unedited (the target area itself has already been recorded). The user can move the target area to any desired position in the second preview picture.
It should be noted that when the target area is moved from the first preview picture to the second preview picture, the target area may either remain in the first preview picture, or be removed from the first preview picture with the vacated area filled in.
In addition, in some possible embodiments, if the user only wants to select the target area and does not want to composite it with another preview picture, the user may choose to save the target area and select another image provided by the electronic device (such as a downloaded image or a previously captured image). The target area and the other image are then fused by using a picture-composition technique; that is, the target area selected by the user is overlaid on the other image and saved.
Moreover, when the user selects the corresponding function button, that is, in a panoramic-shooting scene, a plurality of preview pictures, from the first preview picture to the sixth preview picture in six directions, can be stitched into a panoramic picture according to a preset fov scaling ratio (this ratio ensures that, for the given fov, the edges of adjacent images land at the same position so that they can be joined together, thereby realizing picture-edge stitching). The stitched picture is a specially shaped panoramic picture, bringing the user a different panoramic experience. Alternatively, the stitched image can be input into a 3D model to obtain a 3D panoramic image.
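A toy version of this edge-to-edge stitching idea follows, assuming (as the text states) that the preset fov scale already places each frame's right edge at the next frame's left edge; real panorama pipelines feature-match and blend the seams instead, and the helper name is illustrative.

```python
import numpy as np

def stitch_panorama(frames, scale=1.0):
    """Naive horizontal panorama stitch. Each frame is rescaled by
    `scale` with nearest-neighbour sampling, then the frames are
    concatenated edge to edge. Assumes the preset FOV scale already
    makes adjacent edges meet, so no seam search is performed."""
    scaled = []
    for f in frames:
        h, w = f.shape[:2]
        nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
        rows = np.arange(nh) * h // nh        # nearest-neighbour row indices
        cols = np.arange(nw) * w // nw        # nearest-neighbour column indices
        scaled.append(f[rows[:, None], cols])
    return np.concatenate(scaled, axis=1)
```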
Step 660, receiving a second input of photographing the second preview screen and the target area.
Step 670, responding to the second input, displaying a fused image of the second preview picture and the target area.
In one possible example, the fused image is displayed by the electronic device while the fused image of the second preview screen and the target area is generated.
Thus, in the shooting method shown in figs. 6 to 9, the target area may be a target area in any preview picture, and the target area may be moved to a plurality of preview pictures. Only the first preview picture and the second preview picture are described here as an example; in practical applications, an area in each preview picture may be moved to another preview picture, and such movement may be repeated.
In summary, in the embodiments of the present invention, when the first preview picture and the second preview picture are displayed on the shooting preview interface, a first input selecting the target area in the first preview picture is received; the first preview picture and the second preview picture are respectively viewfinder pictures of cameras in different orientations on the electronic device. Then, the target area selected in the first preview picture is determined, the target area is displayed in the second preview picture, and a fused image of the second preview picture and the target area is generated according to user input. Therefore, the shooting method in the embodiments of the present invention allows the user to capture the desired scenery immediately, without missing the view. On this basis, local editing is performed during the shooting preview, so the effect of the captured image can be observed in real time and a suitable shooting moment can be chosen. Image shooting without post-processing is thus realized, which effectively saves shooting time, reduces shooting cost, and avoids the risk that post-processing still fails to satisfy the user, thereby improving the keeper rate of captured images.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
As shown in fig. 10, the electronic device 1000 may specifically include:
a receiving module 1001, configured to receive a first input selecting a target area on a first preview screen when a first preview screen and a second preview screen are displayed on a shooting preview interface; the first preview picture and the second preview picture are respectively view-finding pictures of the cameras on the electronic equipment in different directions;
a display module 1002, configured to display, in response to a first input, an image corresponding to a target area in a second preview screen;
the receiving module 1001 is further configured to receive a second input of capturing a second preview screen and the target area;
and a processing module 1003, configured to display, in response to the second input, a fused image of the second preview screen and the target area.
In a possible embodiment, the display module 1002 may be specifically configured to, in response to the first input, adjust a characteristic parameter of the second preview picture by using a dual-camera alignment algorithm, and display the target area in the second preview picture according to the characteristic parameter.
In another possible embodiment, the display module 1002 may be specifically configured to, in response to the first input, display a movable floating window in the first preview picture by using a dual-camera alignment algorithm, where the floating window includes the target area. Based on this, the receiving module 1001 is further configured to receive a third input for moving the floating window into the second preview picture; in response to the third input, an image corresponding to the target area is displayed in the second preview picture.
In addition, the receiving module 1001 of the embodiment of the present invention may be further configured to receive a fourth input for changing the size of the floating window; based on this, the display module 1002 may be further configured to, in response to a fourth input, display an image corresponding to the enlarged or reduced target area in the second preview screen at a zoom scale associated with the fourth input.
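One plausible way to turn the window-resize gesture into the zoom scale "associated with the fourth input" is the ratio of new to old window size; the geometric-mean choice for non-uniform drags is an assumption of this sketch, not something the text specifies.

```python
def zoom_scale_from_resize(old_size, new_size):
    """Derive a single zoom scale from a resize gesture on the floating
    window: the ratio of the new window size (w, h) to the old one.
    The geometric mean keeps a non-uniform drag (stretching one axis
    while shrinking the other) from biasing the scale toward either axis."""
    sw = new_size[0] / old_size[0]
    sh = new_size[1] / old_size[1]
    return (sw * sh) ** 0.5
```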
The processing module 1003 in the embodiment of the present invention may be further configured to capture an image of the first preview screen.
Further, the processing module 1003 in this embodiment of the present invention may be specifically configured to, when the first preview picture and the second preview picture are viewfinder pictures with different shooting ranges and the target area is the entire first preview picture, display a panoramic stitched image of the second preview picture and the target area by using feature-pixel matching.
In addition, the electronic device 1000 in the embodiment of the present invention further includes a determining module 1004, configured to receive a fifth input of removing a first area where a target object in the first preview screen is located when the first preview screen and the second preview screen include the same target object; in response to a fifth input, removing the first area in the first preview screen and filling the first area with the second area; the second area is determined as the target area.
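The remove-and-fill behavior above can be sketched as a straight pixel copy between two same-sized boxes. `remove_and_fill` is a hypothetical helper; a real device would use content-aware inpainting rather than a verbatim copy.

```python
import numpy as np

def remove_and_fill(frame, remove_box, fill_box):
    """Remove the first area (remove_box) and fill it with the pixels
    of a second, same-sized area (fill_box), as in the duplicate-object
    removal described above. Boxes are (row, col, h, w). Returns a new
    frame; the input frame is left unchanged."""
    out = frame.copy()
    r1, c1, h, w = remove_box
    r2, c2, _, _ = fill_box
    out[r1:r1 + h, c1:c1 + w] = frame[r2:r2 + h, c2:c2 + w]
    return out
```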
The display module 1002 in the embodiment of the present invention may be specifically configured to, when the viewfinder picture corresponding to the first preview picture and that corresponding to the second preview picture have an intersecting portion, obtain a first zoom multiple of the target area in the first preview picture and a second zoom multiple of the second preview picture; and when the first zoom multiple is different from the second zoom multiple, display the target area in the second preview picture according to the second zoom multiple.
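The zoom-multiple reconciliation reduces to scaling the target area's displayed size by the ratio of the two zoom multiples; a minimal sketch with an assumed helper name:

```python
def rescale_for_zoom(size, first_zoom, second_zoom):
    """Rescale the pixel size (w, h) of a target area selected at
    first_zoom so that it is displayed according to second_zoom,
    as when the two previews run at different zoom multiples."""
    ratio = second_zoom / first_zoom
    return (size[0] * ratio, size[1] * ratio)
```

For example, a target area of 80 by 60 pixels selected at 2.0x would be shown at 40 by 30 pixels in a 1.0x preview.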
It should be noted that, in a possible embodiment, the first preview screen and the second preview screen in the embodiment of the present invention are screens that are acquired at the same time and have at least one same area.
In the embodiments of the present invention, when the first preview picture and the second preview picture are displayed on the shooting preview interface, a first input selecting a target area in the first preview picture is received; the first preview picture and the second preview picture are respectively viewfinder pictures of cameras in different orientations on the electronic device. Then, the target area selected in the first preview picture is determined, an image corresponding to the target area is displayed in the second preview picture, and a fused image of the second preview picture and the target area is displayed according to user input. Therefore, the electronic device in the embodiments of the present invention allows the user to capture the desired scenery immediately, without missing the view. On this basis, local editing is performed during the shooting preview, so the effect of the captured image can be observed in real time and a suitable shooting moment can be chosen. Image shooting without post-processing is thus realized, which effectively saves shooting time, reduces shooting cost, and avoids the risk that post-processing still fails to satisfy the user, thereby improving the keeper rate of captured images.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
The electronic device 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, processor 1110, power supply 1111, and camera 1112. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 11 does not constitute a limitation of electronic devices, which may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A user input unit 1107 configured to receive a first input of selecting a target area in a first preview screen in a case where the first preview screen and the second preview screen are displayed on the shooting preview interface; the first preview picture and the second preview picture are respectively view-finding pictures of the cameras on the electronic equipment in different directions;
a display unit 1106 for displaying a target area in the second preview screen in response to the first input;
the user input unit 1107 is also configured to receive a second input of capturing a second preview screen and a target area;
processor 1110 is configured to generate a fused image of the second preview screen and the target area in response to the second input.
Therefore, the shooting method in the embodiments of the present invention allows the user to capture the desired scenery immediately, without missing the view. On this basis, local editing is performed during the shooting preview, so the effect of the captured image can be observed in real time and a suitable shooting moment can be chosen. Image shooting without post-processing is thus realized, which effectively saves shooting time, reduces shooting cost, and avoids the risk that post-processing still fails to satisfy the user, thereby improving the keeper rate of captured images.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1101 may be configured to receive and transmit signals during message transmission or a call; specifically, it receives downlink data from a base station and forwards the data to the processor 1110 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 1101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1101 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 1102, such as to assist the user in sending and receiving e-mail, browsing web pages, and accessing streaming media.
The audio output unit 1103 may convert audio data received by the radio frequency unit 1101 or the network module 1102, or stored in the memory 1109, into an audio signal and output it as sound. Moreover, the audio output unit 1103 may also provide audio output related to a specific function performed by the electronic device 1100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1104 is used to receive audio or video signals. The input unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042; the graphics processor 11041 processes image data of still pictures or video obtained by an image capturing device, such as the camera 1112, in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 1106. The image frames processed by the graphics processor 11041 may be stored in the memory 1109 (or other storage medium) or transmitted via the radio frequency unit 1101 or the network module 1102. The microphone 11042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 1101.
The electronic device 1100 also includes at least one sensor 1105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 11061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 11061 and/or the backlight when the electronic device 1100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., and will not be described in detail herein.
The display unit 1106 is used to display information input by a user or information provided to the user. The Display unit 1106 may include a Display panel 11061, and the Display panel 11061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1107 includes a touch panel 11071 and other input devices 11072. The touch panel 11071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 11071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 11071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 1110, and receives and executes commands sent from the processor 1110. In addition, the touch panel 11071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 1107 may include other input devices 11072 in addition to the touch panel 11071. In particular, the other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 11071 can be overlaid on the display panel 11061, and when the touch panel 11071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1110 to determine the type of the touch event, and then the processor 1110 provides a corresponding visual output on the display panel 11061 according to the type of the touch event. Although the touch panel 11071 and the display panel 11061 are shown in fig. 11 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 11071 and the display panel 11061 may be integrated to implement the input and output functions of the electronic device, and the embodiment is not limited herein.
The interface unit 1108 is an interface for connecting an external device to the electronic device 1100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1108 may be used to receive input (e.g., data, power, etc.) from an external device and transmit the received input to one or more elements within the electronic device 1100, or may be used to transmit data between the electronic device 1100 and an external device.
The memory 1109 may be used to store software programs and various data. The memory 1109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like. In addition, the memory 1109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1110 is the control center of the electronic device; it connects the various parts of the entire electronic device using various interfaces and lines, and executes various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 1109 and calling data stored in the memory 1109, thereby performing overall monitoring of the electronic device. The processor 1110 may include one or more processing units; preferably, the processor 1110 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1110.
The electronic device 1100 may further include a power supply 1111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 1111 may be logically connected to the processor 1110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
The camera 1112 may include cameras in multiple orientations on the electronic device.
In addition, the electronic device 1100 includes some functional modules that are not shown, and thus are not described in detail herein.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, which, when the computer program is executed in a computer, causes the computer to perform the steps of the photographing method of an embodiment of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A shooting method is applied to electronic equipment, and is characterized by comprising the following steps:
receiving a first input of selecting a target area in a first preview picture under the condition that the first preview picture and a second preview picture are displayed on a shooting preview interface; the first preview picture and the second preview picture are respectively view-finding pictures of cameras on the electronic equipment in different directions;
responding to the first input, and displaying an image corresponding to the target area in the second preview screen;
receiving a second input of photographing the second preview screen and the target area;
and responding to the second input, and displaying a fused image of the second preview picture and the target area.
2. The method of claim 1, wherein the displaying, in response to the first input, an image corresponding to the target area in the second preview screen comprises:
in response to the first input, adjusting a characteristic parameter of the second preview picture by using a dual-camera alignment algorithm;
and displaying an image corresponding to the target area in the second preview picture according to the characteristic parameters.
3. The method of claim 1, wherein the displaying, in response to the first input, an image corresponding to the target area in the second preview screen comprises:
displaying a movable floating window in the first preview picture by using a dual-camera alignment algorithm in response to the first input, wherein the floating window comprises the target area;
receiving a third input to move the floating window into the second preview screen;
and responding to the third input, and displaying an image corresponding to the target area in the second preview screen.
4. The method according to claim 3, wherein before displaying the image corresponding to the target region in the second preview screen, the method further comprises:
receiving a fourth input to change the size of the floating window;
in response to the fourth input, displaying an image corresponding to the enlarged or reduced target area in the second preview screen at a zoom scale associated with the fourth input.
5. The method of claim 1, further comprising, after responding to the second input: shooting an image of the first preview picture.
6. The method of claim 1, wherein displaying, in response to the second input, a fused image of the second preview screen and the target area comprises:
when the first preview picture and the second preview picture are view-finding pictures with different shooting ranges and the target area is the first preview picture,
and displaying the second preview picture and the panoramic spliced image of the target area by utilizing characteristic pixel matching.
7. The method according to claim 1 or 6, further comprising, before displaying the fused image of the second preview screen and the target area in response to the second input:
when the first preview picture and the second preview picture comprise the same target object, receiving a fifth input of removing a first area where the target object in the first preview picture is located;
removing a first area in the first preview screen and filling the first area with a second area in response to the fifth input;
determining the second region as the target region.
8. The method according to claim 1, wherein the displaying the image corresponding to the target area in the second preview screen comprises:
when the viewfinder picture corresponding to the first preview picture and the viewfinder picture corresponding to the second preview picture have an intersecting portion, acquiring a first zoom multiple of the target area in the first preview picture and a second zoom multiple of the second preview picture;
and when the first zoom multiple is different from the second zoom multiple, displaying an image corresponding to a target area in the second preview picture according to the second zoom multiple.
9. The method according to claim 1, wherein the first preview screen and the second preview screen are screens that are captured at the same time and have at least one same area.
10. An electronic device, comprising:
the device comprises a receiving module, a display module and a display module, wherein the receiving module is used for receiving a first input of selecting a target area in a first preview picture under the condition that the first preview picture and a second preview picture are displayed on a shooting preview interface; the first preview picture and the second preview picture are respectively view-finding pictures of cameras on the electronic equipment in different directions;
the display module is used for responding to the first input and displaying the image corresponding to the target area in the second preview picture;
the receiving module is further used for receiving a second input of shooting the second preview picture and the target area;
and the processing module is used for responding to the second input and displaying the second preview picture and the fused image of the target area.
CN201911374454.4A 2019-12-27 2019-12-27 Shooting method and electronic equipment Pending CN111083371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911374454.4A CN111083371A (en) 2019-12-27 2019-12-27 Shooting method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111083371A true CN111083371A (en) 2020-04-28

Family

ID=70318336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911374454.4A Pending CN111083371A (en) 2019-12-27 2019-12-27 Shooting method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111083371A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014175616A1 (en) * 2013-04-25 2014-10-30 주식회사 모리아타운 Apparatus and method for synthesizing images using multiple cameras
CN104580910A (en) * 2015-01-09 2015-04-29 宇龙计算机通信科技(深圳)有限公司 Image synthesis method and system based on front camera and rear camera
CN105657239A (en) * 2015-04-27 2016-06-08 宇龙计算机通信科技(深圳)有限公司 Image processing method and device
CN106909274A (en) * 2017-02-27 2017-06-30 努比亚技术有限公司 A kind of method for displaying image and device
CN106998428A (en) * 2017-04-21 2017-08-01 维沃移动通信有限公司 The image pickup method and mobile terminal of a kind of mobile terminal
CN108307111A (en) * 2018-01-22 2018-07-20 努比亚技术有限公司 A kind of zoom photographic method, mobile terminal and storage medium
CN109040596A (en) * 2018-08-27 2018-12-18 Oppo广东移动通信有限公司 A kind of method, mobile terminal and storage medium adjusting camera
CN110445978A (en) * 2019-06-24 2019-11-12 华为技术有限公司 A kind of image pickup method and equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112532887A (en) * 2020-12-18 2021-03-19 Huizhou TCL Mobile Communication Co Ltd Shooting method, device, terminal and storage medium
CN112532887B (en) * 2020-12-18 2022-08-05 Huizhou TCL Mobile Communication Co Ltd Shooting method, device, terminal and storage medium
CN112702517A (en) * 2020-12-24 2021-04-23 Vivo Mobile Communication (Hangzhou) Co Ltd Display control method and device and electronic equipment
CN112702517B (en) * 2020-12-24 2023-04-07 Vivo Mobile Communication (Hangzhou) Co Ltd Display control method and device and electronic equipment
CN112738402A (en) * 2020-12-30 2021-04-30 Vivo Mobile Communication (Hangzhou) Co Ltd Shooting method, shooting device, electronic equipment and medium
CN113141450A (en) * 2021-03-22 2021-07-20 Vivo Mobile Communication (Hangzhou) Co Ltd Shooting method, shooting device, electronic equipment and medium
CN113141450B (en) * 2021-03-22 2022-07-22 Vivo Mobile Communication (Hangzhou) Co Ltd Shooting method, shooting device, electronic equipment and medium
CN114339029A (en) * 2021-11-23 2022-04-12 Vivo Mobile Communication Co Ltd Shooting method and device and electronic equipment
CN114339029B (en) * 2021-11-23 2024-04-23 Vivo Mobile Communication Co Ltd Shooting method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN111083380B (en) Video processing method, electronic equipment and storage medium
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN111541845B (en) Image processing method and device and electronic equipment
CN111355889B (en) Shooting method, shooting device, electronic equipment and storage medium
CN106937039B (en) Imaging method based on double cameras, mobile terminal and storage medium
CN107592466B (en) Photographing method and mobile terminal
CN111083371A (en) Shooting method and electronic equipment
EP3767939A1 (en) Photographing method and mobile terminal
CN110365907B (en) Photographing method and device and electronic equipment
CN107948505B (en) Panoramic shooting method and mobile terminal
CN109474786B (en) Preview image generation method and terminal
CN111246106B (en) Image processing method, electronic device, and computer-readable storage medium
CN111064895B (en) Virtual shooting method and electronic equipment
CN111050070B (en) Video shooting method and device, electronic equipment and medium
CN109905603B (en) Shooting processing method and mobile terminal
CN109474787B (en) Photographing method, terminal device and storage medium
CN108449546B (en) Photographing method and mobile terminal
CN110798622B (en) Shared shooting method and electronic equipment
CN109102555B (en) Image editing method and terminal
CN111669503A (en) Photographing method and device, electronic equipment and medium
KR20220005087A (en) Photographing method and terminal
CN110798621A (en) Image processing method and electronic equipment
CN108924422B (en) Panoramic photographing method and mobile terminal
CN111447365B (en) Shooting method and electronic equipment
CN108174110B (en) Photographing method and flexible screen terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200428