CN115118884A - Shooting method and device and electronic equipment - Google Patents

Shooting method and device and electronic equipment

Info

Publication number
CN115118884A
Authority
CN
China
Prior art keywords
moving
shooting
camera
preview interface
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210764905.0A
Other languages
Chinese (zh)
Inventor
申健成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202210764905.0A priority Critical patent/CN115118884A/en
Publication of CN115118884A publication Critical patent/CN115118884A/en
Pending legal-status Critical Current

Landscapes

  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device and electronic equipment, and belongs to the field of camera shooting. The shooting method provided by the embodiments of the application is applied to a shooting device and comprises the following steps: displaying at least one shooting object thumbnail in response to a first input of a user on a first shooting preview interface; receiving a second input of the user on a target object, the target object being the object corresponding to a target thumbnail among the at least one shooting object thumbnail; updating layout information of the target object in the first shooting preview interface in response to the second input; and outputting a target image based on the updated layout information.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the field of camera shooting, and particularly relates to a shooting method, a shooting device and electronic equipment.
Background
When shooting, users often find that other objects interfere with the scene or that the background is too cluttered. In such cases the user can only capture images that do not meet their expectations and must post-process them afterwards to obtain satisfactory photos, which makes for a poor user experience.
Disclosure of Invention
The embodiments of the application aim to provide a shooting method, a shooting device and electronic equipment, which can solve the problem of a cluttered shooting background or of other objects interfering with the shot.
In a first aspect, an embodiment of the present application provides a shooting method, applied to a shooting device, the method comprising: displaying at least one shooting object thumbnail in response to a first input of a user on a first shooting preview interface; receiving a second input of the user on a target object, the target object being the object corresponding to a target thumbnail among the at least one shooting object thumbnail; updating layout information of the target object in the first shooting preview interface in response to the second input; and outputting a target image based on the updated layout information.
In a second aspect, an embodiment of the present application provides a shooting device comprising a first response module, a first receiving module, a second response module and an output module. The first response module is configured to display at least one shooting object thumbnail in response to a first input of a user on a first shooting preview interface; the first receiving module is configured to receive a second input of the user on a target object, the target object being the object corresponding to a target thumbnail among the at least one shooting object thumbnail; the second response module is configured to update layout information of the target object in the first shooting preview interface in response to the second input; and the output module is configured to output a target image based on the updated layout information.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, in response to a first input of a user on a shooting preview interface, thumbnails of the shooting objects in the interface are displayed; a second input of the user on the object corresponding to one of the thumbnails is received; the layout information of that target object in the shooting preview interface is updated; and a target image is output based on the updated layout information. In this way, the layout can be adjusted in the shooting preview interface before the picture is taken, so that an image meeting the user's requirements is captured directly and no post-processing by the user is needed.
Drawings
Fig. 1 is a flowchart of a shooting method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of a shooting preview interface provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of a shooting preview interface after layout information is updated according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a preview interface provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of a sum vector provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of a preview interface provided in an embodiment of the present application;
Fig. 7 is a schematic diagram of a shooting preview interface after a spherical view angle special effect is added according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a shooting device provided in an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application; it is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that can be derived by a person of ordinary skill in the art from the embodiments given herein are intended to fall within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second" and the like are generally of one type, and their number is not limited; for example, a first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the preceding and following related objects are in an "or" relationship.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart of a shooting method according to an embodiment of the present disclosure. The method can be applied to a shooting device, which may be a mobile phone, a tablet computer, a camera or the like. As shown in fig. 1, the method includes steps S11 to S14.
In step S11, at least one shooting object thumbnail is displayed in response to a first input of the user on the first shooting preview interface.
In this embodiment, the shooting preview interface is the preview of the image to be shot that is displayed while the user is shooting. For example, when a user takes a picture with a mobile phone, the real-time image captured by the camera and displayed on the phone screen before the picture is taken is the shooting preview interface.
In an example of the embodiment, the first input of the user on the shooting preview interface may be an input for entering a layout-adjustment mode and displaying the shooting object thumbnails. The user may enter the layout-adjustment mode through a touch click on the shooting device, or through another interaction such as voice.
In one example of the present embodiment, the shooting object may be a specific object in the shooting preview interface, such as the portrait 21, the shorter tree 22 and the taller tree 23 in fig. 2. In another example, the shooting object may also be the entire image displayed in the shooting preview interface.
In one example of the present embodiment, the shooting preview interface may display a thumbnail of at least one shooting object in response to the first input of the user. As shown in fig. 2, in the first shooting preview interface the portrait 21, the shorter tree 22, the taller tree 23 and the entire image displayed in the interface can be identified as shooting objects based on an image recognition algorithm. A thumbnail 24 of the portrait, a thumbnail 25 of the shorter tree, a thumbnail 26 of the taller tree and a thumbnail 27 of the whole displayed image can then be shown on the shooting preview interface, so that on seeing the thumbnails the user knows which shooting objects the preview contains.
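The patent does not specify how the shooting objects are recognized; as a rough sketch, assuming some subject-detection step has already produced bounding boxes, the thumbnails 24-27 could be generated along these lines (Python with OpenCV, for illustration only):

```python
import cv2  # OpenCV, used here only for resizing crops

def build_thumbnails(preview_frame, detections, thumb_size=(96, 96)):
    """Crop each detected shooting object, plus the whole frame, into thumbnails.

    `detections` is assumed to be a list of (x, y, w, h) bounding boxes
    produced by whatever recognition step the device uses; the patent does
    not specify the detector.
    """
    thumbs = []
    for (x, y, w, h) in detections:
        crop = preview_frame[y:y + h, x:x + w]             # numpy slice of the frame
        thumbs.append(cv2.resize(crop, thumb_size))
    thumbs.append(cv2.resize(preview_frame, thumb_size))    # thumbnail of the whole preview
    return thumbs
```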
In step S12, a second input of the user on a target object is received, wherein the target object is the object corresponding to the target thumbnail among the at least one shooting object thumbnail.
In an example of the embodiment, the second input of the user on the target object may be a drag input on a specific object; for example, as shown in fig. 2, it may be a drag input on the thumbnail of the taller tree.
In step S13, in response to the second input, the layout information of the target object in the first shooting preview interface is updated.
In step S14, the target image is output based on the updated layout information.
In one example of the present embodiment, the layout information may be updated using techniques such as differentiation, optical flow, classification and clustering.
In an example of the embodiment, after the second input of the user on the target object is received, the layout information in the shooting preview interface may be updated according to that input. For example, as shown in fig. 2, when the second input is a drag input on the taller tree 23, the shooting device may, in response to the drag input, update the position layout of the taller tree 23 and move it to the position to which the user dragged it; the effect after the movement is shown in fig. 3. After updating the layout information, the shooting device outputs the target image with the changed layout.
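As an illustration of what updating the position layout of a dragged object might involve, the sketch below assumes a segmentation mask for the target object is available and uses inpainting to fill the vacated area; the patent does not prescribe these particular operations.

```python
import numpy as np
import cv2

def move_object(frame, mask, dx, dy):
    """Shift the masked object by (dx, dy) pixels and fill the vacated area.

    `mask` is an assumed 8-bit segmentation mask of the dragged object
    (255 inside the object, 0 elsewhere); inpainting is just one possible
    way to fill the hole it leaves behind.
    """
    h, w = mask.shape
    # Fill the original object area from its surroundings.
    background = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)

    # Translate the object and its mask to the drag target.
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    moved_obj = cv2.warpAffine(frame, shift, (w, h))
    moved_mask = cv2.warpAffine(mask, shift, (w, h))

    # Paste the shifted object onto the filled background.
    out = background.copy()
    out[moved_mask > 0] = moved_obj[moved_mask > 0]
    return out
```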
In this example, in response to a first input of the user on the shooting preview interface, thumbnails of the shooting objects in the interface are displayed; a second input of the user on the object corresponding to one of the thumbnails is received; the layout information of the target object in the shooting preview interface is updated; and the target image is output based on the updated layout information. In this way, the layout can be adjusted in the shooting preview interface before the picture is taken, so that an image meeting the user's requirements is captured and no post-processing is needed.
In one example of this embodiment, the shooting device includes a first camera, and before the at least one shooting object thumbnail is displayed in response to the first input of the user on the first shooting preview interface, the method further includes: receiving a third input of the user on a moving object in a second shooting preview interface; in response to the third input, determining movement parameter information of the first camera based on motion information of the moving object; and controlling the first camera to move according to the movement parameter information and displaying the first shooting preview interface.
In an example of the present embodiment, the third input of the user on the moving object in the second shooting preview interface may be a selection input choosing the moving object in the preview interface, for example a click on the moving object.
In an example of the present embodiment, the moving object is any object that is moving and displayed in the shooting preview interface, and the moving object and the target object may be the same object or different objects.
In one example of this embodiment, the motion information includes a motion direction and a motion speed, and the movement parameter information includes a movement direction and a movement amplitude. Determining the movement parameter information of the first camera based on the motion information of the moving object includes: determining the movement direction and the movement amplitude of the first camera according to the motion direction and the motion speed of the moving object, so that the moving object is displayed within a preset range of the first shooting preview interface.
In an example of the present embodiment, the movement parameter information of the first camera may be a movement direction, a movement amplitude, a rotation angle, or the like of the camera.
In one example of the present embodiment, the motion direction and the motion speed of the moving object may be determined from the position of the moving object in the shooting preview interface and the focus distance. For example, in the current frame of the shooting preview interface the moving object is at position A, and in the previous frame it was at position B. The motion direction from B to A can be determined from the two positions; the actual distance moved can be determined from the pixel distance between A and B and the focus distance of the preview image; and the time interval between the previous frame and the current frame, i.e. between positions B and A, can be determined from the frame rate of the preview images. From these, the motion speed of the moving object can be determined.
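A minimal sketch of this estimate follows, assuming the pixel-to-metre scale (here `metres_per_pixel`) has already been derived from the focus distance, which the patent leaves unspecified.

```python
import math

def estimate_motion(pos_prev, pos_curr, frame_rate, metres_per_pixel):
    """Estimate the motion direction and speed of the moving object.

    `pos_prev` / `pos_curr` are the (x, y) pixel positions of the object at
    position B (previous frame) and position A (current frame); the
    `metres_per_pixel` scale is an assumed conversion derived from the
    focus distance.
    """
    dx = pos_curr[0] - pos_prev[0]
    dy = pos_curr[1] - pos_prev[1]
    direction = math.atan2(dy, dx)          # motion direction, in image coordinates
    dt = 1.0 / frame_rate                   # time between the two frames
    speed = math.hypot(dx, dy) * metres_per_pixel / dt
    return direction, speed
```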
In one example of this embodiment, the camera may be an optical anti-shake camera or other camera that can be adjusted in position.
In an example of this embodiment, after the motion direction and the motion speed of the moving object are obtained, the movement direction and the movement amplitude of the camera can be determined, so that the camera follows the moving object and the moving object is always displayed, and stays clear, within the preset range of the shooting preview interface.
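One simple way to turn that motion estimate into a camera move, sketched here with an assumed proportional gain and an assumed preset range centred on the frame, is to pan so as to cancel the object's predicted offset from that range.

```python
import math

def camera_move(predicted_pos, frame_center, gain=1.0):
    """Pan so the moving object stays near the preset region of the preview.

    `predicted_pos` is where the object is expected to appear at the next
    frame (current position plus velocity times the frame interval); the
    move simply cancels its offset from the frame centre. `gain` is an
    assumed tuning constant, not taken from the patent.
    """
    dx = predicted_pos[0] - frame_center[0]
    dy = predicted_pos[1] - frame_center[1]
    direction = math.atan2(dy, dx)           # movement direction of the camera
    amplitude = gain * math.hypot(dx, dy)    # movement amplitude of the camera
    return direction, amplitude
```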
In an example of this embodiment, after the movement direction and the movement amplitude of the camera are determined, the camera may be controlled to move during the interval in which preview images are acquired, i.e. the camera moves between the current preview frame and the next preview frame.
In one example of the present embodiment, after the camera has moved according to the movement direction and the movement amplitude, the camera may be controlled to capture an image. Since the camera moves according to the motion direction and motion speed of the moving object, the moving object in the captured image to be processed is already clear and stays within the frame.
In an example of this embodiment, before the first shooting preview interface is displayed, the method further includes: acquiring boundary pixels of the image area where the moving object is located, and processing the boundary pixels based on at least one first image, wherein the first image is an image collected before the third input of the user on the moving object in the second shooting preview interface is received.
In an example of this embodiment, after the camera has moved in accordance with the motion of the moving object and a preview image has been captured, the boundary of the moving object may be unclear before the preview image is displayed on the first shooting preview interface. In that case, the boundary pixels of the area where the moving object is located may be obtained; for example, a dynamic vision sensor can identify the boundary of the moving part and output the moving-part boundary pixels in the preview picture, where each boundary pixel includes a pixel value and the corresponding position of that value in the preview image.
After the boundary pixels of the moving object are acquired, they may be processed using a preview image captured by the camera before the third input of the user was received. Specifically, the moving object and its boundary pixels may be obtained from that earlier preview image, and the boundary pixels of the moving object in the image captured after motion stabilization may be processed further; for example, clearer boundary pixels from the earlier preview image may be blended with the corresponding pixels of the moving object in the stabilized image, so that the boundary of the moving object in the image becomes smoother and clearer.
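As a sketch of the blending idea, assuming the boundary pixels (position plus pixel value) have been obtained from a dynamic vision sensor or from an earlier preview frame, a simple weighted blend could look like this; the weight `alpha` is an assumption.

```python
import numpy as np

def refine_boundary(captured, boundary_pixels, alpha=0.5):
    """Blend clearer boundary pixels from an earlier preview image.

    `boundary_pixels` is assumed to be a list of ((row, col), value) pairs
    describing the moving object's boundary, e.g. as output by a dynamic
    vision sensor or taken from a pre-movement preview frame; `alpha` is an
    assumed blending weight.
    """
    out = captured.astype(np.float32)
    for (r, c), value in boundary_pixels:
        out[r, c] = alpha * np.asarray(value, np.float32) + (1.0 - alpha) * out[r, c]
    return out.astype(captured.dtype)
```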
In this example, after the moving object is determined in the shooting preview interface, the camera can be controlled to follow the moving object according to its motion direction and motion speed, so that the moving object always stays within the preset range of the shooting preview interface and remains clear. In addition, using the boundary information of the moving object obtained from earlier preview images, the boundary of the moving object in the captured image is processed before being displayed in the shooting preview interface, so that a clear and complete image of the moving object is captured. This improves the user's interaction experience.
In one example of this embodiment, the moving object includes a first moving object and a second moving object, and determining the movement direction and the movement amplitude of the first camera according to the motion direction and the motion speed of the moving object includes: determining a first velocity vector of the first moving object according to the motion direction and motion speed of the first moving object; determining a second velocity vector of the second moving object according to the motion direction and motion speed of the second moving object; obtaining the sum vector of the first velocity vector and the second velocity vector; and determining the movement direction and the movement amplitude of the first camera according to the sum vector.
In an example of this embodiment, when there are two moving objects in the image, a velocity vector of each moving object may be obtained from its motion direction and motion speed, where the direction of the velocity vector is the motion direction and its magnitude is the motion speed. After the velocity vector of each moving object is obtained, the movement direction and movement amplitude of the camera can be determined from the vector sum of all the velocity vectors: the movement direction of the camera is determined by the direction of the sum vector, and the movement amplitude by its magnitude. For example, the first moving object is a person 31 and the second moving object is a bicycle 32; the motion directions are indicated by the arrows in fig. 4, with the person 31 moving from right to left and the bicycle 32 from top to bottom. In this case the first velocity vector, the second velocity vector and their sum vector, shown in fig. 5, can be determined from the motion direction and motion speed of the two moving objects.
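The sum-vector computation itself is straightforward; the sketch below assumes each moving object's motion is given as a (direction, speed) pair and that the camera amplitude is simply proportional to the length of the sum vector.

```python
import math

def camera_move_from_objects(objects):
    """Combine the velocity vectors of several moving objects into one camera move.

    `objects` is a list of (direction_radians, speed) pairs; the camera moves
    along the sum vector, with an amplitude taken as the length of that
    vector (the exact scaling is an assumption).
    """
    sx = sum(speed * math.cos(d) for d, speed in objects)
    sy = sum(speed * math.sin(d) for d, speed in objects)
    return math.atan2(sy, sx), math.hypot(sx, sy)

# Example loosely following fig. 4: a person moving right to left and a
# bicycle moving top to bottom (image coordinates, speeds arbitrary).
direction, amplitude = camera_move_from_objects([(math.pi, 1.2), (math.pi / 2, 0.8)])
```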
In this example, when there are multiple moving objects, the movement direction and amplitude of the camera can be determined from the sum vector of the velocity vectors of the moving objects. In this way the user can capture a clear image containing several moving objects, which improves the user experience.
In an example of this embodiment, after determining the movement parameter information of the first camera, the method further includes: displaying track information, the track information including at least one of: motion trajectory information of the moving object; and moving track information of the camera.
In an example of this embodiment, after the movement parameter information of the camera is determined, at least one of the motion track information of the moving object and the movement track information of the camera may also be displayed on the shooting preview interface. The motion track information may include the motion direction, amplitude and trajectory of the moving object, and the camera movement track information may include the movement direction, amplitude and trajectory of the camera.
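A sketch of such an overlay, assuming the two tracks are available as lists of pixel coordinates; the colours and line width are arbitrary choices.

```python
import numpy as np
import cv2

def draw_trajectories(preview, object_points, camera_points):
    """Overlay the object's motion track and the camera's movement track.

    Both point lists are assumed to hold (x, y) pixel coordinates collected
    over previous frames.
    """
    out = preview.copy()
    if len(object_points) > 1:
        pts = np.array(object_points, np.int32).reshape(-1, 1, 2)
        cv2.polylines(out, [pts], False, (0, 255, 0), 2)   # motion track of the object
    if len(camera_points) > 1:
        pts = np.array(camera_points, np.int32).reshape(-1, 1, 2)
        cv2.polylines(out, [pts], False, (255, 0, 0), 2)   # movement track of the camera
    return out
```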
In this example, the motion track of the moving object and the movement track of the camera are displayed on the shooting preview interface, so that the user can better follow the movement of the moving object, or of the camera, throughout the shooting process, which makes shooting more engaging and improves the interaction experience.
In an example of this embodiment, the shooting device further includes a second camera, and after the layout information of the target object in the first shooting preview interface is updated, the method further includes: controlling the second camera to collect at least two second images while the first camera is moving, and generating a target file from the at least two second images, the target file including at least one of the following: an image; a video.
In one example of the present embodiment, the shooting device may include two independent cameras: the first camera moves according to the motion of the moving object, and its shooting preview image is displayed on the first shooting preview interface, while the second camera does not follow the moving object and shoots images directly.
In one example of the present embodiment, the shooting preview interfaces of the first camera and the second camera may be simultaneously displayed on the display screen of the shooting device.
In an example of this embodiment, after the user changes the layout information in the first shooting preview interface, the layout information in the shooting preview interface of the second camera may be updated accordingly. For example, if the position of the taller tree in the first shooting preview interface is updated and the tree moves from position A to position B, the same tree in the shooting preview interface of the second camera is also moved from position A to position B.
In an example of this embodiment, the first camera keeps moving with the moving object and the preview image in the first shooting preview interface is continuously updated. While the first camera is moving, the second camera may capture at least two second images, and a target file is generated from them. The target file may be an image file, that is, a captured second image; it may also be a video file synthesized from the at least two captured second images, giving a dynamic video with a static scene and a moving object.
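A minimal sketch of turning the second images into a video file follows; the codec, frame rate and file name are assumptions, and all frames are assumed to share one resolution.

```python
import cv2

def build_target_file(second_images, path="target.mp4", fps=30):
    """Compose the frames captured by the second camera into a video file."""
    h, w = second_images[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in second_images:
        writer.write(frame)     # each second image becomes one video frame
    writer.release()
    return path
```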
In this example, the second camera in the shooting device captures images with the changed layout information, producing an image or a video in which the scene is fixed and the camera does not move with the moving object. This suits more shooting scenarios and improves the user's interaction experience.
In an example of this embodiment, updating the layout information of the target object in the first shooting preview interface includes: displaying a target special effect in the first shooting preview interface, wherein the target special effect is determined based on the input parameters of the second input.
In one example of the present embodiment, the target special effect displayed on the first shooting preview interface may be a lightning effect, a ghost effect, a firework effect, a spherical view angle effect or the like. Specifically, each special effect may correspond to different input parameters of the second input. For example, the first preview interface includes the portrait 21, the shorter tree 22 and the taller tree 23, as shown in fig. 6; when the second input of the user is a clockwise circular slide, the corresponding target special effect is the spherical view angle effect. In response to the second input, the spherical view angle effect is added to the first shooting preview interface; the result is shown in fig. 7.
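A sketch of the mapping from second-input parameters to a target special effect is shown below; only the clockwise-circle / spherical-view pairing comes from the description, and the other gesture names are purely hypothetical.

```python
# Hypothetical mapping from the recognised gesture of the second input to a
# target special effect; gesture names other than the clockwise circle are
# assumptions, not taken from the patent.
GESTURE_TO_EFFECT = {
    "clockwise_circle": "spherical_view",
    "zigzag": "lightning",
    "double_tap": "ghost",
    "long_press": "firework",
}

def pick_effect(gesture):
    """Return the target special effect for a recognised second-input gesture."""
    return GESTURE_TO_EFFECT.get(gesture)
```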
In this example, the target special effect displayed in the first shooting preview interface can be determined from preset parameters of the second input, so that more varied and interesting images can be obtained, meeting the user's diverse needs and improving the user experience.
In the shooting method provided by the embodiments of the present application, the executing body may be a shooting device. The shooting device provided by the embodiments of the present application is described below, taking as an example a shooting device that performs the shooting method.
Corresponding to the above embodiment, referring to fig. 8, an embodiment of the present application further provides a shooting apparatus 100, including:
a first response module 110, configured to display at least one shooting object thumbnail in response to a first input of the user on the first shooting preview interface;
a first receiving module 120, configured to receive a second input of the user on a target object, where the target object is the object corresponding to a target thumbnail among the at least one shooting object thumbnail;
a second response module 130, configured to update the layout information of the target object in the first shooting preview interface in response to the second input; and
an output module 140, configured to output the target image based on the updated layout information.
Optionally, the shooting device includes a first camera, and before the at least one shooting object thumbnail is displayed in response to the first input of the user on the first shooting preview interface, the device further includes: a second receiving module, configured to receive a third input of the user on the moving object in the second shooting preview interface and, in response to the third input, determine movement parameter information of the first camera based on the motion information of the moving object; and a first control module, configured to control the first camera to move according to the movement parameter information and to display the first shooting preview interface.
Optionally, the motion information includes a motion direction and a motion speed, the movement parameter information includes a movement direction and a movement amplitude, and the second receiving module is further configured to determine the movement direction and the movement amplitude of the first camera according to the motion direction and the motion speed of the moving object, so that the moving object is displayed within a preset range of the first shooting preview interface.
Optionally, before the first shooting preview interface is displayed, the device further includes: a first acquisition module, configured to acquire boundary pixels of the image area where the moving object is located; and a first processing module, configured to process the boundary pixels based on at least one first image, wherein the first image is an image collected before the third input of the user on the moving object in the second shooting preview interface is received.
Optionally, after determining the movement parameter information of the first camera, the apparatus further includes: the first display module is used for displaying track information, and the track information comprises at least one of the following items: motion trajectory information of the moving object; and moving track information of the camera.
Optionally, the shooting device further includes a second camera, and after the layout information of the target object in the first shooting preview interface is updated, the device further includes: the second control module is used for controlling the second camera to collect at least two second images in the moving process of the first camera; a first generating module, configured to generate a target file according to the at least two second images, where the target file includes at least one of: images, videos.
Optionally, the second response module is further configured to: and displaying a target special effect in the first shooting preview interface, wherein the target special effect is determined based on the input parameters of the second input.
Optionally, the moving objects include a first moving object and a second moving object, and the second receiving module is further configured to determine a first velocity vector of the first moving object according to the moving direction and the moving speed of the first moving object, determine a second velocity vector of the second moving object according to the moving direction and the moving speed of the second moving object, obtain a sum vector of the first velocity vector and the second velocity vector, and determine the moving direction and the moving amplitude of the first camera according to the sum vector.
In the embodiments of the application, in response to a first input of a user on the shooting preview interface, thumbnails of the shooting objects in the interface are displayed; a second input of the user on the object corresponding to one of the thumbnails is received; the layout information of the target object in the shooting preview interface is updated; and the target image is output based on the updated layout information. In this way, the image with the adjusted layout can be determined in the shooting preview interface before the picture is taken, so that an image meeting the user's requirements is captured and no later processing by the user is needed.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like; the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like; the embodiments of the present application are not specifically limited in this respect.
The shooting device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The shooting device provided by the embodiments of the present application can implement each process implemented by the method embodiments above and achieve the same technical effects; to avoid repetition, the details are not repeated here.
Corresponding to the foregoing embodiment, optionally, as shown in fig. 9, an electronic device 800 is further provided in the embodiment of the present application, and includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and capable of running on the processor 801, where the program or the instruction is executed by the processor 801 to implement each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further comprise a power supply (e.g., a battery) for supplying power to the various components, and the power supply may be logically connected to the processor 910 through a power management system, so that charging, discharging and power-consumption management functions are implemented through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described again here.
The display unit 906 is configured to display at least one shooting object thumbnail in response to a first input of the user on the first shooting preview interface. The user input unit 907 is configured to receive a second input of the user on a target object, the target object being the object corresponding to the target thumbnail among the at least one shooting object thumbnail. The processor 910 is configured to update the layout information of the target object in the first shooting preview interface in response to the second input, and to output the target image based on the updated layout information.
Optionally, the shooting device includes a first camera. Before the at least one shooting object thumbnail is displayed in response to the first input of the user on the first shooting preview interface, the user input unit 907 is configured to receive a third input of the user on a moving object in the second shooting preview interface, and the processor 910 is configured to determine movement parameter information of the first camera based on the motion information of the moving object in response to the third input, control the first camera to move according to the movement parameter information, and display the first shooting preview interface.
Optionally, the motion information includes a motion direction and a motion speed, and the movement parameter information includes a movement direction and a movement amplitude, and the processor 910 is further configured to determine the movement direction and the movement amplitude of the first camera according to the motion direction and the motion speed of the moving object, so that the moving object is displayed within a preset range of the first photographing preview interface.
Optionally, the processor 910 is further configured to, before displaying the first shooting preview interface, obtain a boundary pixel of an image area where the moving object is located, and process the boundary pixel based on at least one first image, where the first image is an image collected before receiving a third input of the moving object in the second shooting preview interface from the user.
Optionally, after determining the movement parameter information of the first camera, the display unit 906 is further configured to display trajectory information, where the trajectory information includes at least one of: motion trajectory information of the moving object; and moving track information of the camera.
Optionally, the shooting device further includes a second camera, and the processor 910 is further configured to, after the layout information of the target object in the first shooting preview interface is updated, control the second camera to acquire at least two second images while the first camera is moving, and generate a target file from the at least two second images, the target file including at least one of the following: an image; a video.
Optionally, the display unit 906 is further configured to display a target special effect in the first shooting preview interface, where the target special effect is determined based on the input parameter of the second input.
Optionally, the moving objects include a first moving object and a second moving object, and the processor 910 is further configured to determine a first velocity vector of the first moving object according to the moving direction and the moving speed of the first moving object, determine a second velocity vector of the second moving object according to the moving direction and the moving speed of the second moving object, obtain a sum vector of the first velocity vector and the second velocity vector, and determine the moving direction and the moving amplitude of the first camera according to the sum vector.
In the embodiments of the application, in response to a first input of the user on the shooting preview interface, thumbnails of the shooting objects in the interface are displayed; a second input of the user on the object corresponding to one of the thumbnails is received; the layout information of the target object in the shooting preview interface is updated; and the target image is output based on the updated layout information. In this way, the image with the adjusted layout can be determined in the shooting preview interface before the picture is taken, so that an image meeting the user's requirements is captured and no post-processing by the user is needed.
It should be understood that, in the embodiment of the present application, the input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042; the graphics processing unit 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 906 may include a display panel 9061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071, also referred to as a touch screen, and other input devices 9072. The touch panel 9071 may include two parts, a touch detection device and a touch controller. The other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not described in detail here. The memory 909 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 910 may integrate an application processor, which mainly handles the operating system, the user interface, applications and the like, and a modem processor, which mainly handles wireless communication. It is to be appreciated that the modem processor may also not be integrated into the processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, or a system-on-chip.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing shooting method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A shooting method is applied to a shooting device and is characterized by comprising the following steps:
displaying at least one photographic object thumbnail in response to a first input of a first photographic preview interface by a user;
receiving a second input of a user to a target object, wherein the target object is an object corresponding to a target thumbnail in the at least one shooting object thumbnail;
updating the layout information of the target object in the first photographing preview interface in response to the second input;
outputting a target image based on the updated layout information.
2. The method of claim 1, wherein the camera comprises a first camera, and wherein before displaying the at least one photographic subject thumbnail in response to a first user input to the first photographic preview interface, the method further comprises:
receiving a third input of the user to the moving object in the second shooting preview interface;
determining movement parameter information of the first camera based on motion information of the moving object in response to the third input;
and controlling the first camera to move according to the movement parameter information, and displaying a first shooting preview interface.
3. The method of claim 2, wherein the motion information comprises a motion direction and a motion speed, the movement parameter information comprises a movement direction and a movement amplitude, and the determining the movement parameter information of the first camera based on the motion information of the moving object comprises:
and determining the moving direction and the moving amplitude of the first camera according to the moving direction and the moving speed of the moving object, so that the moving object is displayed in a preset range of the first shooting preview interface.
4. The method of claim 2, wherein prior to displaying the first capture preview interface, the method further comprises:
acquiring boundary pixels of an image area where the moving object is located;
processing the boundary pixels based on at least one first image;
and the first image is an image acquired before receiving a third input of the user to the moving object in the second shooting preview interface.
5. The method of claim 2, wherein after determining the movement parameter information of the first camera, the method further comprises:
displaying trajectory information, the trajectory information including at least one of: motion trajectory information of the moving object; and the moving track information of the camera.
6. The method of claim 2, wherein the camera further comprises a second camera, and wherein after updating the layout information of the target object in the first capture preview interface, the method further comprises:
controlling the second camera to collect at least two second images in the moving process of the first camera;
generating a target file according to the at least two second images, wherein the target file comprises at least one of the following items: images, videos.
7. The method of claim 1, wherein the updating the layout information of the target object in the first capture preview interface comprises:
displaying a target special effect in the first photographing preview interface, the target special effect being determined based on the input parameter of the second input.
8. The method of claim 3, wherein the moving object comprises a first moving object and a second moving object, and wherein determining the moving direction and the moving amplitude of the first camera according to the moving direction and the moving speed of the moving object comprises:
determining a first speed vector of the first moving object according to the moving direction and the moving speed of the first moving object;
determining a second velocity vector of the second moving object according to the moving direction and the moving velocity of the second moving object;
and acquiring a sum vector of the first velocity vector and the second velocity vector, and determining the moving direction and the moving amplitude of the first camera according to the sum vector.
9. A camera apparatus, the apparatus comprising:
the first response module is used for responding to first input of a user to the first shooting preview interface and displaying at least one shooting object thumbnail;
the first receiving module is used for receiving a second input of a target object by a user, wherein the target object is an object corresponding to a target thumbnail in the at least one shooting object thumbnail;
a second response module, configured to update layout information of the target object in the first shooting preview interface in response to the second input;
and the output module outputs the target image based on the updated layout information by the user.
10. An electronic device, characterized in that it comprises a processor and a memory, said memory storing a program or instructions executable on said processor, said program or instructions, when executed by said processor, implementing the steps of the shooting method according to any one of claims 1-8.
CN202210764905.0A 2022-06-29 2022-06-29 Shooting method and device and electronic equipment Pending CN115118884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210764905.0A CN115118884A (en) 2022-06-29 2022-06-29 Shooting method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210764905.0A CN115118884A (en) 2022-06-29 2022-06-29 Shooting method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115118884A true CN115118884A (en) 2022-09-27

Family

ID=83330967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210764905.0A Pending CN115118884A (en) 2022-06-29 2022-06-29 Shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115118884A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080094498A1 (en) * 2006-10-24 2008-04-24 Sanyo Electric Co., Ltd. Imaging apparatus and imaging control method
CN110519512A (en) * 2019-08-16 2019-11-29 维沃移动通信有限公司 A kind of object processing method and terminal
CN111669506A (en) * 2020-07-01 2020-09-15 维沃移动通信有限公司 Photographing method and device and electronic equipment
CN111669507A (en) * 2020-07-01 2020-09-15 维沃移动通信有限公司 Photographing method and device and electronic equipment
CN112954197A (en) * 2021-01-27 2021-06-11 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination