CN112887609B - Shooting method and device, electronic equipment and storage medium - Google Patents

Shooting method and device, electronic equipment and storage medium

Info

Publication number
CN112887609B
Authority
CN
China
Prior art keywords
target
shooting
video
video image
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110111256.XA
Other languages
Chinese (zh)
Other versions
CN112887609A (en)
Inventor
孙家圆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110111256.XA priority Critical patent/CN112887609B/en
Publication of CN112887609A publication Critical patent/CN112887609A/en
Application granted granted Critical
Publication of CN112887609B publication Critical patent/CN112887609B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device, electronic equipment and a storage medium, which belong to the technical field of video processing. The method comprises the following steps: determining a target shooting object and a target display area corresponding to the target shooting object; and synthesizing a first video image of the target shooting object and a second video image of a target shooting scene according to the target display area to obtain a target video. The first video image is obtained by shooting through a first camera of the electronic equipment, and the second video image is obtained by shooting through a second camera of the electronic equipment; the first focal length of the first camera is greater than the second focal length of the second camera. The method reduces the cost of shooting such videos: compared with prior-art methods for shooting the same type of video, the difference between adjacent pictures is small and the video is smoother; parameters such as position and focal length do not need to be accurately adjusted for each picture, so the operation cost is low; the shooting time is short; and the probability of successful shooting is high.

Description

Shooting method, shooting device, electronic equipment and storage medium
Technical Field
The application belongs to the field of video processing, and particularly relates to a shooting method and device, electronic equipment and a storage medium.
Background
At present, when a user of a mobile terminal wants to shoot a video in which the background zooms in or out while the photographed subject stays centered on the screen, the only option is to keep moving the camera position and adjust the focal length at each position so that the subject appears at the same position on the screen, take a large number of photos in this way, and then splice the photos into a video to achieve the effect of the subject staying centered throughout the video.
In the process of implementing the present application, the inventor finds that at least the following problems exist in the prior art:
However, when shooting with this basic scheme, to ensure that video playback does not jump too much, the difference between successive pictures needs to be as small as possible, so a very large number of pictures must be taken to guarantee the final effect; parameters such as position and focal length need to be accurately adjusted before each picture is taken, so the operation cost is high; when a sufficiently long video is needed, the shooting session is long, during which the light, the photographed subject and other conditions may change unpredictably, so the probability of shooting failure is high; and during shooting, changing light, other objects entering the frame and the like make the final pictures inconsistent, so the post-processing cost is high.
Therefore, how to provide a shooting scheme to shoot an ideal video more conveniently and reduce post-processing is a technical problem to be solved urgently by those skilled in the art.
Disclosure of Invention
The embodiments of the application aim to provide a shooting method and device, electronic equipment and a storage medium, so as to solve the technical problems in the prior art that shooting such videos requires complex operations and heavy post-processing.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a shooting method, where the method includes:
determining a target shooting object and a target display area corresponding to the target shooting object;
synthesizing a first video image of a target shooting object and a second video image of a target shooting scene according to the target display area to obtain a target video;
the first video image is obtained by shooting through a first camera of the electronic equipment, and the second video image is obtained by shooting through a second camera of the electronic equipment; the first focal length of the first camera is greater than the second focal length of the second camera.
In a second aspect, an embodiment of the present application provides an apparatus for shooting, including:
the target determining module is used for determining a target shooting object and a target display area corresponding to the target shooting object;
the video fusion module is used for synthesizing a first video image of a target shooting object and a second video image of a target shooting scene according to the target display area to obtain a target video;
the first video image is obtained by shooting through a first camera of the electronic equipment, and the second video image is obtained by shooting through a second camera of the electronic equipment; the first focal length of the first camera is greater than the second focal length of the second camera.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the shooting method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the shooting method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the shooting method according to the first aspect.
The shooting method and device, the electronic equipment and the storage medium provided by the embodiments of the application can determine the target shooting object, which reduces the operation cost of repeatedly adjusting the focal length to keep the shooting position; the final effect is achieved without subsequently synthesizing individual pictures, which reduces the shooting cost of the video. Compared with prior-art methods for shooting the same type of video, the difference between adjacent pictures is small and the video is smoother; parameters such as position and focal length do not need to be accurately adjusted for each picture, so the operation cost is low; the shooting time is short and the probability of successful shooting is high; and during shooting, the influence of changing light, other objects entering the frame and the like can be avoided, so the shot video looks uniform and no further post-processing is needed.
Drawings
Fig. 1 is a flowchart of a shooting method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an interface provided by an embodiment of the present application for responding to a first input;
FIG. 3 is a schematic diagram of an interface responding to a fourth input according to an embodiment of the present application;
FIG. 4 is a second schematic view of an interface responding to a fourth input according to an embodiment of the present application;
FIG. 5 is a third exemplary diagram of an interface responding to a fourth input according to the present disclosure;
FIG. 6 is a fourth schematic view of an interface provided in an embodiment of the present application for responding to a fourth input;
fig. 7 is one of the shooting interface diagrams of a shooting method according to an embodiment of the present disclosure;
fig. 8 is a second shooting interface diagram of a shooting method according to an embodiment of the present application;
fig. 9 is a third shooting interface diagram of a shooting method according to an embodiment of the present disclosure;
fig. 10 is a fourth view of a shooting interface of a shooting method according to an embodiment of the present disclosure;
fig. 11 is a schematic diagram illustrating a shooting apparatus according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 13 is a schematic hardware structure diagram of another electronic device for implementing the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart of a shooting method according to an embodiment of the present disclosure, and as shown in fig. 1, the shooting method includes the following steps:
step 101: determining a target shooting object and a target shooting object in a target display area corresponding to the target shooting object;
in the embodiment of the application, when a user takes a video, since a target shooting object in a video to be shot needs to be always kept at the same position of the video, the target shooting object in the target video, which needs to be fixed at the same position, needs to be determined first, and certainly, a shooting scene needs to be determined, so that the target shooting object is in the shooting scene. Specifically, for example, when a user needs to take a video and ensures that a target shooting object is continuously and centrally displayed in the middle of the screen, the camera may be turned on, and a shooting mode of "focusing a shooting object" is selected, at which time the user moves the camera so that the target shooting object is located in the middle of a shooting picture of the camera, so that the target shooting object may be automatically identified and determined. Of course, other ways of determining the target photographic subject may be adopted, and the discussion will be made in other embodiments of the present application.
Step 102: synthesizing a first video image of a target shooting object and a second video image of a target shooting scene according to the target display area to obtain a target video, wherein the first video image is obtained by shooting through a first camera of the electronic equipment, and the second video image is obtained by shooting through a second camera of the electronic equipment; the first focal length of the first camera is greater than the second focal length of the second camera.
After the target shooting object and the target display area are determined, the first camera can shoot the target shooting object to obtain the first video image, and the second camera can shoot the shooting scene to obtain the second video image. Because the target shooting object is the key subject of the video, the first camera can be set as a long-focus (telephoto) camera to better capture its details; the shooting scene needs a wider viewing angle, since it provides the background for the target shooting object, so the second camera can be set as a short-focus (wide-angle) camera, which makes it convenient to zoom the background in or out and obtain different second video images.
After the first video image and the second video image are obtained, the two can be fused. It should be noted that if, for example, a user shoots a video 10 seconds long at a frame rate of 30 frames per second, 300 fused images are required to obtain the final video.
In an embodiment of the present application, image fusion may be performed every 1/30 second, that is, each time a first video image and a second video image captured at the same moment are acquired, they are fused synchronously to obtain the current fused image, and the current fused image is displayed on the display screen.
Of course, in another embodiment of the present application, image fusion may be performed at a preset interval, for example every 1 second, in which case 30 fused images are generated for each second of video. Alternatively, image fusion and video generation may be performed only when the user confirms that shooting is finished: for example, if the user shoots a 10-second video at 30 frames per second, there are 300 first video images and 300 second video images when shooting ends, and at this point each first video image can be fused with the second video image of the same moment to obtain 300 fused images, so that the target video can be output based on the plurality of fused images.
After the fused images are obtained, the captured video file can be generated from them. Specifically, the multiple fused images may be ordered according to the time sequence of their timestamps to obtain a fused image sequence, and the images in the fused image sequence are then stored in turn as video frames to obtain the video file. Of course, in an embodiment of the present application, each fused image may instead be stored as a video frame of the video file as soon as it is obtained.
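To make the ordering-and-storing step concrete, here is a minimal sketch (Python with OpenCV, which the application itself does not mandate) that sorts the fused images by timestamp and writes them out as consecutive video frames; the function name, input format, container, codec and the 30 fps rate are assumptions taken from the 10-second / 30-frames-per-second example above.

```python
import cv2

def save_fused_sequence(fused_frames, out_path="target_video.mp4", fps=30):
    # Hypothetical helper: `fused_frames` is assumed to be a list of
    # (timestamp_in_seconds, BGR image) pairs produced by the fusion step.
    ordered = sorted(fused_frames, key=lambda item: item[0])  # fused image sequence
    height, width = ordered[0][1].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for _, frame in ordered:
        writer.write(frame)  # each fused image becomes one video frame
    writer.release()
```

For the 10-second example above, `fused_frames` would hold 300 entries, one per 1/30 second.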
Further, in one embodiment of the present application, in order to determine the target shooting object, a first input of the user may first be received; a preview image of at least one object to be selected is displayed in response to the first input; and the target shooting object is determined among the at least one object to be selected. That is, the user first turns on the camera and enters the interface shown in fig. 2, which is one of the interface diagrams responding to the first input provided by the embodiment of the present application. Two people are identified in fig. 2, and the dashed box currently encloses the person on the right; if the user taps "confirm" at this point, the person on the right is determined as the target shooting object. If the user taps "switch", the dashed box moves to enclose the person on the left, and tapping "confirm" then determines the person on the left as the target shooting object. Of course, in practice more objects to be selected may be identified and displayed for the user to choose from, and other selection methods can also be used; for example, the user may directly tap the image of an object to be selected to set it as the target shooting object.
It should be noted that in practice more than one target shooting object may be determined: if the first camera includes more than one camera, each camera may be set to focus on one target shooting object, so that a plurality of target shooting objects can be selected and a corresponding display position chosen for each of them. Specifically, in order to determine the target shooting object among the objects to be selected, the following steps may be performed: displaying a selection interface of the at least one object to be selected; receiving a second input of the user on the selection interface; and, in response to the second input, determining the target shooting object corresponding to the second input among the at least one object to be selected.
Of course, the target shooting object may also be selected by the user manually drawing a selection frame. Specifically, the electronic device may receive a third input in which the user frames the target shooting object, and determine the target shooting object in response to the third input; the target shooting object is at least one of the objects to be selected framed by the user. That is, after the user manually frames an area on the touch screen, the electronic device automatically identifies the target shooting object within the selected area, and the determination of the target shooting object is completed.
After the target shooting object is determined, the target display position of the target shooting object in the shooting scene may be set. Specifically, preset display areas may be displayed to the user on the second video image; a fourth input of the user in the preset display areas is received; and, in response to the fourth input, the target display area in which the target shooting object will appear after being fused into the second video image is determined, the target display area being one of the preset display areas. For example, as shown in fig. 3, which is one of the interface diagrams responding to the fourth input provided by the embodiment of the present application, the second video image is divided into four equal rectangular areas in a choose-1-of-4 state, and the user can tap the selection button of any one area to complete the determination of the target display area. As shown in fig. 4, the second interface diagram responding to the fourth input, the second video image is divided into six equal rectangular areas in a choose-1-of-6 state, and the user can tap the selection button of any one area. As shown in fig. 5, the third interface diagram responding to the fourth input, the second video image is divided into nine equal rectangular areas in a choose-1-of-9 state, and again the user can tap the selection button of any one area to complete the determination of the target display area.
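The 4-, 6- and 9-region layouts of figs. 3 to 5 amount to splitting the second video image into an equal grid. A small sketch of how such preset display areas could be computed is given below; the grid sizes, the 1920x1080 frame size and the (x, y, width, height) tuple format are assumptions for illustration only.

```python
def preset_display_areas(frame_w, frame_h, rows, cols):
    # Hypothetical helper: split the second video image into rows x cols equal
    # rectangles (2x2, 2x3 or 3x3 for the layouts of figs. 3-5) and return them
    # as (x, y, w, h) tuples the user can pick from.
    cell_w, cell_h = frame_w // cols, frame_h // rows
    return [(c * cell_w, r * cell_h, cell_w, cell_h)
            for r in range(rows) for c in range(cols)]

# e.g. the nine-region layout of fig. 5 on an assumed 1920x1080 wide-camera frame:
areas = preset_display_areas(1920, 1080, rows=3, cols=3)
target_display_area = areas[4]  # the user taps the center region
```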
Of course, the user may also manually define a custom target display area, as shown in fig. 6, the fourth of the interface diagrams provided in the embodiment of the present application. The user draws the target display area by hand; since a hand-drawn area is usually not regular, the user can adjust the size and position of the resulting rectangular frame, and the area of the shooting scene covered by the final rectangular frame is determined as the target display area. That is, a fifth input of the user on the second video image is received, and the target display area is determined in response to the fifth input; the target display area is a user-defined area that the user taps or circles on the second video image.
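For the hand-drawn case of fig. 6, one plausible way to turn the user's irregular stroke into a regular rectangle is to take its bounding rectangle and clamp it to the image; the sketch below assumes the stroke arrives as a list of (x, y) points, which the application does not specify.

```python
import cv2
import numpy as np

def custom_display_area(stroke_points, frame_w, frame_h):
    # Hypothetical helper: convert the points of a user-drawn stroke on the
    # second video image into a regular rectangle that can serve as the
    # user-defined target display area.
    pts = np.array(stroke_points, dtype=np.int32)
    x, y, w, h = cv2.boundingRect(pts)           # tightest upright rectangle
    x, y = max(0, x), max(0, y)                  # clamp to the image bounds
    w, h = min(w, frame_w - x), min(h, frame_h - y)
    return (x, y, w, h)
```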
On the basis of the above embodiments, once the target shooting object and the target display area are determined, the first video image and the second video image can be fused to obtain a plurality of fused images. Specifically, a first video image and a second video image with the same timestamp are fused, with the first video image placed in the target display area of the second video image and laid over the second video image. During fusion the first video image may be transformed; for example, it may be scaled according to the size of the target display area while keeping the aspect ratio of the original first video image fixed, so as to fill the target display area.
specifically, for example, the target display area is a rectangular area with the dimensions: the height is 9 units, the width is 6 units, and the size of the first video image is 3 units, the width is 2 units, at this time, the height and the width of the first video image can be multiplied by 3, so that the target display area can be filled; if the size of the first video image is 3 units high and 1.5 units wide, the height and width of the first video image can be multiplied by 3 to obtain a transformed first video image with a size of 9 units high and 4.5 units wide, and the target display area cannot be filled. Of course, in another fusion method of the present application, it is possible to ensure that the first video image is displayed at the middle position of the target display area selected by the user by ensuring that the geometric center coordinates of the first video image and the target display area coincide.
On the basis of the above embodiments, the target video can be output from the plurality of fused images obtained. Specifically, a first video image and a second video image with the same timestamp are fused to obtain a fused image; the multiple fused images are ordered according to the time sequence of their timestamps to obtain a fused image sequence; and the images in the fused image sequence are stored in turn as video frames of the target video to obtain the target video. The first video image is located in the target display area of the second video image and lies above the second video image. Of course, a video file generally contains not only video frames but also a corresponding audio track; the audio file can be recorded synchronously with the video by a microphone and then synthesized into the final video file.
As shown in fig. 7 and 8, which are the first and second shooting interface diagrams of a shooting method according to the embodiment of the present application: in fig. 7 the target shooting object (the target person) is located in the middle of shooting scene 1, and in fig. 8, after the shooting background has shifted to shooting scene 2, the target shooting object is still located in the middle of shooting scene 2. In practice, the shooting scene may be slide-zoomed, so that the target shooting object remains centered while the shooting scene changes.
As shown in fig. 9 and 10, the third and fourth shooting interface diagrams of a shooting method according to the embodiment of the present application, there are two target shooting objects (target persons); correspondingly, the first camera should contain two image capturing units, each focusing on one of the two target shooting objects, and the positions of the two target shooting objects do not change over time as the shooting scene changes. Of course, if a target shooting object is a moving object, a target recognition and tracking algorithm may be used to track it and acquire its image, so that the target shooting object can still be synthesized into the shooting scene.
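For a moving target shooting object, the paragraph above mentions a target recognition and tracking algorithm without naming one. The minimal stand-in below simply re-locates the previously cropped subject in each new tele-camera frame by normalized template matching; a real system would likely use a dedicated tracker, so treat this only as an illustration of the idea, with all names being assumptions.

```python
import cv2

def track_subject(prev_crop, frame):
    # Hypothetical helper: find the best match of the previous subject crop in
    # the new frame and return the refreshed crop plus its (x, y, w, h) box.
    result = cv2.matchTemplate(frame, prev_crop, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(result)      # top-left corner of best match
    h, w = prev_crop.shape[:2]
    return frame[y:y + h, x:x + w], (x, y, w, h)
```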
According to the shooting method provided by the embodiment of the application, the target shooting object can be determined, which reduces the operation cost of repeatedly adjusting the focal length to keep the shooting position; the final effect is achieved without subsequently synthesizing individual pictures, which reduces the shooting cost of the video. Compared with prior-art methods for shooting the same type of video, the difference between adjacent pictures is small and the video is smoother; parameters such as position and focal length do not need to be accurately adjusted for each picture, so the operation cost is low; the shooting time is short and the probability of successful shooting is high; and during shooting, the influence of changing light, other objects entering the frame and the like can be avoided, so the shot video looks uniform and no further post-processing is needed.
It should be noted that the execution subject of the shooting method provided in the embodiment of the present application may be a shooting device, or a control module in the shooting device for executing the shooting method. In the embodiments of the present application, the case where a shooting device executes the shooting method is taken as an example to describe the shooting device provided by the embodiments of the present application.
As shown in fig. 11, fig. 11 is a schematic diagram of a shooting device according to an embodiment of the present application, where the shooting device 1100 includes:
a target determining module 1110, configured to determine a target shooting object and a target display area corresponding to the target shooting object;
the video fusion module 1120 is configured to synthesize a first video image of the target shooting object and a second video image of the target shooting scene according to the target display area to obtain a target video;
the first video image is obtained by shooting through a first camera of the electronic equipment, and the second video image is obtained by shooting through a second camera of the electronic equipment; the first focal length of the first camera is greater than the second focal length of the second camera.
Further, the goal determination module comprises:
a first receiving unit for receiving a first input of a user;
the first response unit is used for responding to the first input and displaying a preview image of at least one object to be selected;
a first determination unit, configured to determine the target photographic object in the at least one candidate object.
Further, the first determination unit includes:
the first display subunit is used for displaying a selection interface of the at least one object to be selected;
the first receiving subunit is used for receiving a second input of the user on the selection interface;
a first response subunit, configured to determine, in response to the second input, the target photographic object corresponding to the second input in the at least one object to be selected; or
The first determination unit includes:
the second receiving subunit is used for receiving a third input of the user for selecting the target shooting object;
a second response subunit, configured to determine the target photographic object in response to the third input; the target shooting object is at least one of the objects to be selected which are framed and selected by a user.
Further, the goal determination module comprises:
the first display unit is used for displaying a preset display area on the second video image;
the second receiving unit is used for receiving a fourth input of the user in the preset display area;
a second response unit, configured to determine, in response to the fourth input, a target display area where the target photographic object is fused in the second video image; the target display area is one of the preset display areas.
Further, the goal determination module comprises:
a third receiving unit, configured to receive a fifth input of the user on the second video image;
a third response unit, configured to determine a target display area in response to the fifth input; and the target display area is a user-defined area clicked or circled on the second video image by the user.
Further, the video fusion module comprises:
the image fusion unit is used for carrying out image fusion on the first video image and the second video image with the same timestamp to obtain a fused image;
the image sorting unit is used for sorting the multiple fusion images according to the time sequence of the timestamps corresponding to the fusion images to obtain a fusion image sequence;
the video frame storage unit is used for sequentially storing the images in the fusion image sequence as video frames of a target video to obtain the target video;
wherein the first video image is located in a target display area of the second video image; the first video image is located above the second video image.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like; the embodiments of the present application are not specifically limited in this respect.
The shooting device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited in this respect.
The shooting device provided in the embodiment of the present application can implement each process implemented by the shooting device in the method embodiments of fig. 1 to fig. 10, and is not described here again to avoid repetition.
The shooting device provided by the embodiment of the application can determine the target shooting object, which reduces the operation cost of repeatedly adjusting the focal length to keep the shooting position; the final effect is achieved without subsequently synthesizing individual pictures, which reduces the shooting cost of the video. Compared with prior-art methods for shooting the same type of video, the difference between adjacent pictures is small and the video is smoother; parameters such as position and focal length do not need to be accurately adjusted for each picture, so the operation cost is low; the shooting time is short and the probability of successful shooting is high; and during shooting, the influence of changing light, other objects entering the frame and the like can be avoided, so the shot video looks uniform and no further post-processing is needed.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Optionally, an electronic device provided in an embodiment of the present application includes a processor 1310, a memory 1309, and a program or instructions stored in the memory 1309 and executable on the processor 1310; when executed by the processor 1310, the program or instructions implement each process of the foregoing shooting method embodiments and can achieve the same technical effects, which are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 13 is a schematic hardware structure diagram of another electronic device for implementing the embodiment of the present application.
The electronic device 1300 includes, but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, a processor 1310, and the like.
Those skilled in the art will appreciate that the electronic device 1300 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1310 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 1310 is configured to determine a target photographic object and a target display area corresponding to the target photographic object;
a processor 1310, configured to synthesize a first video image of a target shooting object and a second video image of a target shooting scene according to the target display area, so as to obtain a target video;
the first video image is obtained by shooting through a first camera of the electronic equipment, and the second video image is obtained by shooting through a second camera of the electronic equipment; the first focal length of the first camera is greater than the second focal length of the second camera.
The electronic equipment provided by the embodiment of the application can determine the target shooting object, which reduces the operation cost of repeatedly adjusting the focal length to keep the shooting position; the final effect is achieved without subsequently synthesizing individual pictures, which reduces the shooting cost of the video. Compared with prior-art methods for shooting the same type of video, the difference between adjacent pictures is small and the video is smoother; parameters such as position and focal length do not need to be accurately adjusted for each picture, so the operation cost is low; the shooting time is short and the probability of successful shooting is high; and during shooting, the influence of changing light, other objects entering the frame and the like can be avoided, so the shot video looks uniform and no further post-processing is needed.
Optionally, a user input unit 1307 for receiving a first input by a user;
a processor 1310 configured to display a preview image of at least one object to be selected in response to the first input;
a processor 1310 configured to determine the target photographic object in the at least one candidate object.
Optionally, the display unit 1306 is configured to display a selection interface of the at least one object to be selected;
a user input unit 1307, configured to receive a second input of the user on the selection interface;
a processor 1310 configured to determine, in response to the second input, the target photographic object corresponding to the second input among the at least one object to be selected; or
Alternatively, the user input unit 1307 is configured to receive a third input of the user to frame the target photographic subject;
a processor 1310 for determining the target photographic object in response to the third input; the target shooting object is at least one object to be selected which is selected by a user in a frame mode.
Optionally, the display unit 1306 is configured to display the preset display area on the second video image;
a user input unit 1307, configured to receive a fourth input of the user in the preset display area;
a processor 1310, configured to determine, in response to the fourth input, a target display area of the target photographic object after the second video image is fused; the target display area is one of the preset display areas.
Optionally, a user input unit 1307 for receiving a fifth input by the user on the second video image;
a processor 1310 for determining a target presentation area in response to the fifth input; and the target display area is a user-defined area clicked or circled on the second video image by the user.
Optionally, the processor 1310 is configured to perform image fusion on the first video image and the second video image with the same timestamp, so as to obtain a fused image;
a processor 1310, configured to sort the multiple fused images according to the time sequence of the timestamps corresponding to the fused images to obtain a fused image sequence;
a processor 1310, configured to sequentially store the images in the fused image sequence as video frames of a target video, so as to obtain the target video;
wherein the first video image is located in a target display area of the second video image; the first video image is located above the second video image.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (14)

1. A photographing method, characterized by comprising:
determining a target shooting object and a target display area corresponding to the target shooting object;
synthesizing a first video image of the target shooting object and a second video image of a target shooting scene to obtain a target video, wherein the first video image is located in a target display area above the second video image;
the first video image is obtained by shooting through a first camera of the electronic equipment, and the second video image is obtained by shooting through a second camera of the electronic equipment; the first focal length of the first camera is greater than the second focal length of the second camera, and the first camera is a long-focus camera and is used for acquiring the target shooting object; the second camera is a short-focus camera and is used for acquiring the target shooting scene;
under the condition that the first camera comprises more than one camera, different first cameras focus on different target shooting objects, the different target shooting objects correspond to different target display areas, the same target shooting object is fixed in the corresponding same target display area, the different target shooting objects correspond to different first video images, and the synthesized target video comprises the second video image and the different first video images.
2. The photographing method according to claim 1,
the determining of the target photographic object includes:
receiving a first input of a user;
displaying a preview image of at least one object to be selected in response to the first input;
and determining the target shooting object in the at least one object to be selected.
3. The photographing method according to claim 2,
the determining a target photographic object in the at least one candidate object comprises:
displaying a selection interface of the at least one object to be selected;
receiving a second input of the user on the selection interface;
in response to the second input, determining the target photographic object corresponding to the second input in the at least one object to be selected; or
The determining the target photographic object in the at least one candidate object comprises:
receiving a third input of the user for selecting the target shooting object;
determining the target photographic object in response to the third input; the target shooting object is at least one object to be selected which is selected by a user in a frame mode.
4. The shooting method according to claim 1, wherein the determining of the target display area corresponding to the target shooting object comprises:
displaying a preset display area on a second video image;
receiving a fourth input of the user in the preset display area;
in response to the fourth input, determining a target display area of the target shooting object after the target shooting object is fused in the second video image; the target display area is one of the preset display areas.
5. The shooting method according to claim 1, wherein the determining of the target display area corresponding to the target shooting object comprises:
receiving a fifth input of the user on the second video image;
in response to the fifth input, determining a target presentation area; and the target display area is a user-defined area clicked or circled on the second video image by the user.
6. The photographing method according to any one of claims 1 to 5,
the synthesizing a first video image of a target shooting object and a second video image of a target shooting scene according to the target display area to obtain a target video comprises:
carrying out image fusion on the first video image and the second video image with the same timestamp to obtain a fused image;
sequencing the multiple fusion images according to the time sequence of the timestamps corresponding to the fusion images to obtain a fusion image sequence;
and sequentially storing the images in the fusion image sequence as video frames of a target video to obtain the target video.
7. A camera, comprising:
the target determining module is used for determining a target shooting object and a target display area corresponding to the target shooting object;
the video fusion module is used for synthesizing a first video image of the target shooting object and a second video image of a target shooting scene to obtain a target video, wherein the first video image is located in a target display area above the second video image;
the first video image is obtained by shooting through a first camera of the electronic equipment, and the second video image is obtained by shooting through a second camera of the electronic equipment; the first focal length of the first camera is greater than the second focal length of the second camera, and the first camera is a long-focus camera and is used for acquiring the target shooting object; the second camera is a short-focus camera and is used for acquiring the target shooting scene;
under the condition that the first camera comprises more than one camera, different first cameras focus on different target shooting objects, the different target shooting objects correspond to different target display areas, the same target shooting object is fixed in the corresponding same target display area, the different target shooting objects correspond to different first video images, and the synthesized target video comprises the second video image and the different first video images.
8. The camera of claim 7, wherein the target determination module comprises:
a first receiving unit for receiving a first input of a user;
the first response unit is used for responding to the first input and displaying a preview image of at least one object to be selected;
a first determination unit, configured to determine the target photographic object in the at least one candidate object.
9. The photographing apparatus according to claim 8, wherein the first determination unit includes:
the first display subunit is used for displaying a selection interface of the at least one object to be selected;
the first receiving subunit is used for receiving a second input of the user on the selection interface;
a first response subunit, configured to determine, in response to the second input, the target photographic object corresponding to the second input in the at least one object to be selected; or
The first determination unit includes:
the second receiving subunit is used for receiving a third input of the user for selecting the target shooting object;
a second response subunit, configured to determine the target photographic object in response to the third input; the target shooting object is at least one object to be selected which is selected by a user in a frame mode.
10. The camera of claim 7, wherein the target determination module comprises:
the first display unit is used for displaying a preset display area on the second video image;
the second receiving unit is used for receiving a fourth input of the user in the preset display area;
a second response unit, configured to determine, in response to the fourth input, a target display area where the target photographic object is fused in the second video image; the target display area is one of the preset display areas.
11. The camera of claim 7, wherein the target determination module comprises:
a third receiving unit, configured to receive a fifth input from the user on the second video image;
a third response unit, configured to determine a target display area in response to the fifth input; and the target display area is a user-defined area clicked or circled on the second video image by the user.
12. The camera according to any one of claims 7 to 11, wherein the video fusion module includes:
the image fusion unit is used for carrying out image fusion on the first video image and the second video image with the same timestamp to obtain a fused image;
the image sorting unit is used for sorting the multiple fusion images according to the time sequence of the timestamps corresponding to the fusion images to obtain a fusion image sequence;
and the video frame storage unit is used for sequentially storing the images in the fusion image sequence as video frames of the target video to obtain the target video.
13. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the photographing method according to any one of claims 1 to 6.
14. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the photographing method according to any one of claims 1 to 6.
CN202110111256.XA 2021-01-27 2021-01-27 Shooting method and device, electronic equipment and storage medium Active CN112887609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110111256.XA CN112887609B (en) 2021-01-27 2021-01-27 Shooting method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110111256.XA CN112887609B (en) 2021-01-27 2021-01-27 Shooting method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112887609A CN112887609A (en) 2021-06-01
CN112887609B true CN112887609B (en) 2023-04-07

Family

ID=76052772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110111256.XA Active CN112887609B (en) 2021-01-27 2021-01-27 Shooting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112887609B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113329230A (en) * 2021-06-30 2021-08-31 王展 Video acquisition processing method and device, electronic equipment and storage medium
CN113873080B (en) * 2021-09-27 2022-12-13 维沃移动通信有限公司 Multimedia file acquisition method and device
CN114040115B (en) * 2021-11-29 2024-09-20 Oook(北京)教育科技有限责任公司 Method and device for capturing abnormal actions of target object, medium and electronic equipment
CN114157810B (en) * 2021-12-21 2023-08-18 西安维沃软件技术有限公司 Shooting method, shooting device, electronic equipment and medium
CN115412672B (en) * 2022-08-29 2024-02-02 深圳传音控股股份有限公司 Shooting display method, intelligent terminal and readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107395975A (en) * 2013-01-07 2017-11-24 华为技术有限公司 A kind of image processing method and device
CN104580910B (en) * 2015-01-09 2018-07-24 宇龙计算机通信科技(深圳)有限公司 Image combining method based on forward and backward camera and system
CN107105315A (en) * 2017-05-11 2017-08-29 广州华多网络科技有限公司 Live broadcasting method, the live broadcasting method of main broadcaster's client, main broadcaster's client and equipment
CN107277371A (en) * 2017-07-27 2017-10-20 青岛海信移动通信技术股份有限公司 A kind of method and device in mobile terminal amplification picture region
CN107426502B (en) * 2017-09-19 2020-03-17 北京小米移动软件有限公司 Shooting method and device, electronic equipment and storage medium
CN112954218A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment

Also Published As

Publication number Publication date
CN112887609A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN112887609B (en) Shooting method and device, electronic equipment and storage medium
US20140092272A1 (en) Apparatus and method for capturing multi-focus image using continuous auto focus
CN110493526A (en) Image processing method, device, equipment and medium based on more photographing modules
CN107395957B (en) Photographing method and device, storage medium and electronic equipment
CN112532808A (en) Image processing method and device and electronic equipment
CN114125179B (en) Shooting method and device
CN112637500B (en) Image processing method and device
CN112887617B (en) Shooting method and device and electronic equipment
CN103905725A (en) Image processing apparatus and image processing method
CN112738397A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112087579B (en) Video shooting method and device and electronic equipment
CN112839166A (en) Shooting method and device and electronic equipment
CN113473018B (en) Video shooting method and device, shooting terminal and storage medium
CN114390206A (en) Shooting method and device and electronic equipment
CN112367465B (en) Image output method and device and electronic equipment
CN115134536B (en) Shooting method and device thereof
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN112653841B (en) Shooting method and device and electronic equipment
CN112887624B (en) Shooting method and device and electronic equipment
CN115499589A (en) Shooting method, shooting device, electronic equipment and medium
CN114245018A (en) Image shooting method and device
CN112887620A (en) Video shooting method and device and electronic equipment
CN112367464A (en) Image output method and device and electronic equipment
CN112399092A (en) Shooting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant