CN113347356A - Shooting method, shooting device, electronic equipment and storage medium - Google Patents

Shooting method, shooting device, electronic equipment and storage medium

Info

Publication number
CN113347356A
CN113347356A
Authority
CN
China
Prior art keywords
video image
shooting
target object
video
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110602880.XA
Other languages
Chinese (zh)
Inventor
张印鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110602880.XA priority Critical patent/CN113347356A/en
Publication of CN113347356A publication Critical patent/CN113347356A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting apparatus, an electronic device and a storage medium, and belongs to the technical field of communication. The method mainly comprises: receiving a first input from a user for shooting a video; in response to the first input, shooting a first video image, and acquiring motion posture information of a first target object in the first video image in the process of shooting the first video image; and adjusting the running state of the first target object in the first video image according to the motion posture information to obtain a target video.

Description

Shooting method, shooting device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of communication, and particularly relates to a shooting method, a shooting device, electronic equipment and a storage medium.
Background
With the rapid development of the shooting capabilities of electronic devices, users shoot video images with their electronic devices so as to preserve good memories through those video images.
Currently, the ways of shooting video images include recording the real scene directly or shooting with beautification templates provided in the shooting application. However, neither of these ways can adjust the content in the video image, such as the subject being shot, so that in some cases the user cannot obtain the video image they desire.
Disclosure of Invention
An object of the embodiments of the present application is to provide a shooting method, a shooting apparatus, an electronic device, and a storage medium, which provide a new way of shooting video images and improve the effect of the shot video images.
In a first aspect, an embodiment of the present application provides a shooting method, which may include:
receiving a first input from a user for shooting a video;
in response to the first input, shooting a first video image, and acquiring motion posture information of a first target object in the first video image in the process of shooting the first video image;
and adjusting the running state of the first target object in the first video image according to the motion posture information to obtain a target video.
In a second aspect, an embodiment of the present application provides a shooting apparatus, which may include:
a receiving module, configured to receive a first input from a user for shooting a video;
a shooting module, configured to shoot a first video image in response to the first input, and acquire motion posture information of a first target object in the first video image in the process of shooting the first video image;
and a processing module, configured to adjust the running state of the first target object in the first video image according to the motion posture information to obtain a target video.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the shooting method shown in the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the shooting method as shown in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the shooting method according to the first aspect.
In the embodiments of the application, the motion posture information of the first target object in the first video image is acquired while the first video image is being shot, and the running state of the first target object in the first video image is then adjusted according to the motion posture information to obtain the target video. In this way, while one video image is shot, the motion posture information of the target object in that video image is acquired separately, so that the running state of the target object as originally captured in the video image can be dynamically adjusted through this separately acquired motion posture information, and part of the content in the video image, such as the subject being shot, is locally adjusted.
Drawings
Fig. 1 is a schematic diagram of a shooting architecture according to an embodiment of the present disclosure;
fig. 2 is a schematic interface diagram related to a shooting method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a shooting method according to an embodiment of the present disclosure;
fig. 4 is a schematic view of a first interface of a shooting method according to an embodiment of the present disclosure;
fig. 5 is a second interface schematic diagram of a shooting method according to an embodiment of the present disclosure;
fig. 6 is a third interface schematic diagram of a shooting method according to an embodiment of the present disclosure;
fig. 7 is a fourth interface schematic diagram of a shooting method according to an embodiment of the present disclosure;
fig. 8 is a fifth interface diagram of a shooting method according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of obtaining a target video according to an embodiment of the present application;
fig. 10 is a schematic flowchart of another process for obtaining a target video according to an embodiment of the present application;
fig. 11 is a sixth interface schematic diagram of a shooting method according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a photographing device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 14 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second" and the like are generally used in a generic sense and do not limit the number of the objects described; for example, a first object can be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
Based on this, the following describes in detail the shooting method provided by the embodiment of the present application through a specific embodiment and an application scenario thereof with reference to fig. 1 to fig. 2.
An embodiment of the present application provides a shooting architecture. As shown in fig. 1, the shooting architecture may include an electronic device, where the electronic device includes at least one camera. When the electronic device includes one camera, the first video image can be shot based on this camera, and in the process of shooting the video image, the motion posture information of the first target object in the first video image is recorded, so that the running state of the first target object in the first video image can be adjusted by adjusting the motion posture information, thereby obtaining the target video.
Based on this, a first input from a user for shooting a video is received, for example an input made by the user on a shooting interface for recording a video. In response to the first input, a first video image is shot, and in the process of shooting the first video image, the motion posture information of a first target object in the first video image is acquired. For example, if the target object includes a person, the target object to be tracked may be determined in the first video image captured by the first camera through a face detection function, and once the tracked target object is determined, its motion posture information may be acquired. The running state of the first target object in the first video image is then adjusted according to the motion posture information to obtain the target video. For example, the motion posture information may be adjusted to be higher than that of the first target object in the first video image by a preset threshold, so that the target object appears to move faster than the other objects in the first video image; conversely, the motion posture information may be adjusted to be lower than that of the first target object in the first video image by a preset threshold, so that the target object appears to move slower than the other objects in the first video image. A minimal sketch of such a local speed adjustment is given below.
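As an illustrative sketch only (not part of the original disclosure; the frame list, the per-frame target masks and the slow_factor value are assumptions made for the example), such a local speed adjustment could be composed from a single captured stream roughly as follows:

```python
import numpy as np

def compose_local_slow_motion(frames, masks, slow_factor=2):
    """Hold the masked target region on an earlier frame so the target
    appears to move `slow_factor` times slower than the background.

    frames: list of HxWx3 uint8 arrays (the first video image)
    masks:  list of HxW bool arrays marking the first target object
    """
    out = []
    for i, frame in enumerate(frames):
        src_idx = i // slow_factor  # the target region advances more slowly
        composed = frame.copy()
        # Paste the target pixels from the earlier frame; a fuller pipeline
        # would also repair (inpaint) the target's current position.
        composed[masks[src_idx]] = frames[src_idx][masks[src_idx]]
        out.append(composed)
    return out
```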
Therefore, the target video is obtained by acquiring the motion posture information of the first target object in the first video image while shooting the first video image, and then adjusting the running state of the first target object in the first video image according to that motion posture information. In this way, while one video image is shot, the motion posture information of the target object in that video image is acquired separately, the running state of the target object as originally captured in the video image is dynamically adjusted through this separately acquired motion posture information, and part of the content in the video image, such as the subject being shot, is locally adjusted.
Alternatively, when the electronic device includes two cameras, the two cameras may include a first camera and a second camera, such as a Time-of-Flight (TOF) camera. In this case, the first camera corresponds to a first shooting parameter and the second camera corresponds to a second shooting parameter. In some possible embodiments, the shooting frequency of the first shooting parameter is greater than or less than the shooting frequency of the second shooting parameter; that is, the video shot at the shooting frequency of the first shooting parameter is a recording of the real scene using conventional preset shooting parameters, while the video shot at the shooting frequency of the second shooting parameter is recorded using accelerated shooting (the shooting frequency of the first shooting parameter is greater than that of the second shooting parameter) or a slow lens (the shooting frequency of the first shooting parameter is less than that of the second shooting parameter). On this basis, when the contents shot by the two cameras are the same and the shooting frequency of the first shooting parameter is less than that of the second shooting parameter, the running state of the target object in the video image shot by the first camera can be adjusted according to the motion posture information of the target object shot by the second camera, so as to obtain the adjusted target video. As shown in fig. 2, a first input from the user for shooting the video is received, such as an input made when the user starts a slow-lens function on a video recording interface. In response to the first input, a first video image is captured based on the first shooting parameter, and a target object in the first video image is determined. Then, a second video image of the target object is captured based on the second shooting parameter, where the second video image is an image corresponding to the motion posture information of the target object in the first video image. For example, if the target object includes a person, the target object to be tracked may be determined in the first video image captured by the first camera through a face detection function; once the tracked target object is confirmed, the second camera captures the second video image of the target object by tracking it in real time, and the motion posture information of the target object is determined. Then, image processing is carried out: according to the second video image, the target area where the target object is located in the first video image is matted out, the second video image is filled into the target area, and the running state of the target object in the first video image is thereby adjusted through the second video image to obtain the target video, achieving the effect that the target object in the first video image moves slower than the other objects in the first video image. A sketch of the corresponding frame pairing between the two streams is given below.
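As an illustrative sketch only (the frame rates and the slow_factor value are assumptions, not values given in the disclosure), the index mapping that pairs a normal-rate scene stream with a high-frame-rate target stream could look as follows:

```python
def pair_frames(num_scene_frames, scene_fps=30, target_fps=120, slow_factor=4):
    """Map each output (scene) frame index to a frame index in the
    high-frame-rate target stream so the target plays slow_factor times slower.

    With target_fps == slow_factor * scene_fps, stepping through the target
    stream one frame per output frame (instead of slow_factor frames per
    output frame) stretches the target's motion by exactly slow_factor.
    """
    step = target_fps / (scene_fps * slow_factor)  # 1.0 in the matched case
    return [int(i * step) for i in range(num_scene_frames)]

# With these defaults, 120 target frames (one second of captured motion) are
# spread over 120 output frames, i.e. four seconds of playback at 30 fps.
```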
In this way, a first video image is shot with the first shooting parameter and a second video image is shot with the second shooting parameter, where the second video image is an image corresponding to the motion posture information of the target object in the first video image; the running state of the target object in the first video image is then adjusted through the second video image to obtain the target video. Different video images are therefore shot with different shooting parameters, and a partial area of one video image is adjusted according to the other, so that part of the content in the video image, such as the subject being shot, is locally adjusted. The embodiments of the application thus provide a new way of shooting video images, which improves the effect of the shot video images, makes shooting more interesting, and reduces the user's later video processing work.
It should be noted that the shooting method provided in the embodiments of the present application may be applied to the scenario described above, in which, during video recording, a partial area of one video image is adjusted through another video image so that part of the content in the video image, such as the subject being shot, is locally adjusted and a fused, adjusted target video is obtained. It may also be applied to still image shooting: at least two images of different actions of a target object are obtained simultaneously under the same shooting conditions, and the area of the target object in one image is adjusted through another image, so that the action of part of the content in the image, such as the target object, is locally adjusted and a fused, adjusted target image is obtained, thereby reducing the user's later image processing work.
According to the application scenario, the following describes the shooting method provided by the embodiment of the present application in detail with reference to fig. 3.
Fig. 3 is a flowchart of a shooting method according to an embodiment of the present disclosure.
As shown in fig. 3, the shooting method may be applied to the electronic devices shown in fig. 1-2, and specifically includes the following steps:
Step 310: a first input from a user for shooting a video is received.
Step 320: in response to the first input, a first video image is shot, and in the process of shooting the first video image, the motion posture information of a first target object in the first video image is acquired.
Step 330: the running state of the first target object in the first video image is adjusted according to the motion posture information to obtain a target video.
In this way, the target video is obtained by acquiring the motion posture information of the first target object in the first video image while shooting the first video image, and then adjusting the running state of the first target object in the first video image according to that motion posture information. Therefore, while one video image is shot, the motion posture information of the target object in that video image is acquired separately, so that the running state of the target object as originally captured in the video image is dynamically adjusted through this separately acquired motion posture information, and part of the content in the video image, such as the subject being shot, is locally adjusted.
The above steps are described in detail below, specifically as follows:
Referring first to step 310, in one or more alternative embodiments, as shown in fig. 4, when the electronic device receives the user's input on the preview interface, an interface for shooting a video is displayed. Then, upon receiving the first input from the user for shooting the video, the electronic device enables the video recording function and starts the local slow-lens function; at this point, one camera module may be started, or at least two camera modules may be started, namely a first camera and a second camera.
In the case where one camera module is started, the real scene is recorded by the first camera using conventional preset shooting parameters, a first target object in the real-scene video image is then identified, and the motion posture information of the first target object is recorded, so that the running state of the first target object in the real-scene video image can be adjusted according to the recorded motion posture information.
In the case where at least two camera modules, namely the first camera and the second camera, are started, the first camera corresponds to the first shooting parameter, and the video shot at the shooting frequency of the first shooting parameter is a recording of the real scene using conventional preset shooting parameters; the second camera corresponds to the second shooting parameter, and the video shot at the shooting frequency of the second shooting parameter is recorded using accelerated shooting or a slow lens. The shooting frequency of the first shooting parameter is greater than that of the second shooting parameter for accelerated shooting, or less than it for slow-lens shooting. A minimal configuration sketch is given below.
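As an illustrative configuration sketch (the camera indices and frame rates are assumptions, and whether a device honours a requested rate depends on its hardware), opening two capture streams with different shooting frequencies in OpenCV might look like this:

```python
import cv2

# Scene stream: conventional preset shooting parameters (first shooting parameter).
scene_cam = cv2.VideoCapture(0)
scene_cam.set(cv2.CAP_PROP_FPS, 30)

# Target stream: higher shooting frequency (second shooting parameter), so that
# playing it back at the scene rate gives a slow-lens effect for the target.
target_cam = cv2.VideoCapture(1)
target_cam.set(cv2.CAP_PROP_FPS, 120)

ok_scene, scene_frame = scene_cam.read()
ok_target, target_frame = target_cam.read()

scene_cam.release()
target_cam.release()
```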
Next, before step 320, the embodiments of the present application provide two ways of determining the target object: either the electronic device actively recognizes the captured image and determines the subject in the captured image as the target object, or the target object in the captured image is determined according to the user's selection. The two ways are described as follows.
In one or more alternative embodiments, prior to step 320, the method further comprises:
and performing image recognition on the first video image, and determining a first target object in the first video image.
In one or more alternative embodiments, prior to step 310, the method further comprises:
receiving a second input on a video image to be shot;
in response to the second input, determining a first object corresponding to the second input in the video image to be shot as the first target object.
For example, as shown in fig. 5, the user selects a target object in the first video image, such as an object to be processed at a slow speed, through the touch screen.
Based on the target object determined in either of the above two manners, step 320 may specifically include:
shooting a first video image through a first shooting parameter corresponding to the first camera, and shooting a first target object through a second shooting parameter corresponding to the second camera; the shooting frequency in the first shooting parameters is greater than or less than the shooting frequency in the second shooting parameters;
and acquiring motion posture information corresponding to the shooting frequency in the second shooting parameters based on the shooting frequency in the second shooting parameters.
On this basis, as shown in fig. 6, when the user wants to apply slow-motion processing to the object, the posture of the target object's human skeleton is tracked through the 3D imaging technology of the TOF camera, and the first target object is shot through the second shooting parameter corresponding to the second camera. The motion posture information corresponding to the shooting frequency in the second shooting parameter is thereby obtained, so that the state of the first target object shot with the first shooting parameter can be adjusted according to this motion posture information to obtain the target video. A sketch of deriving such motion posture information from tracked skeleton keypoints is given below.
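As an illustrative sketch (the keypoint array layout is an assumption; the disclosure only states that skeleton posture is tracked via the TOF camera), motion posture information could be derived from the tracked skeleton keypoints as per-joint velocities sampled at the second shooting parameter's frequency:

```python
import numpy as np

def motion_posture_info(keypoints, fps):
    """Derive per-joint velocity from tracked skeleton keypoints.

    keypoints: array of shape (num_frames, num_joints, 2), pixel coordinates
               sampled at the second camera's shooting frequency
    fps:       shooting frequency of the second shooting parameter
    """
    dt = 1.0 / fps
    # Finite-difference velocity in pixels per second for every joint.
    return np.diff(keypoints, axis=0) / dt
```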
It should be noted that, in one or more other alternative embodiments, after the target object is determined, in order to better satisfy the user's requirements and improve the effect of the generated target video, beautification options may be provided for the captured video images as shown in fig. 7; that is, the user may add special effects, such as weather effects, to the first video image and the second video image, respectively.
Then, referring to step 330, in one or more alternative embodiments, the user may manually adjust the running state of the first target object in the first video image. Based on this, step 330 may specifically include:
3301, generating a first adjustment parameter according to the motion posture information of the first target object;
3302, receiving a third input to the first adjustment parameter;
and 3303, in response to the third input, adjusting the running state of the first target object in the first video image based on the target adjustment parameter corresponding to the third input to obtain a target video.
In this way, the first adjustment parameter may be displayed, so that the user can manually adjust the running state of the first target object in the first video image to obtain the target video.
Based on this, step 3303 may specifically include the following steps for adjusting the running state of the first target object in the first video image to obtain the target video:
determining a second target object based on the target adjustment parameter corresponding to the third input;
according to the second target object, matting out the target area where the first target object is located in the first video image;
and filling the second target object into the target area to obtain a target video.
For example, as shown in fig. 8, the user may set different slow-playing multiples for the target object through the third input on the adjustment parameter option, and may drag the progress bar in the vertical direction to control the playing speed of the non-target objects in the first video image, or the playing speed of the first video image as a whole. After the speeds of the target object and the first video image are set, clicking the save button displays the target video showing the local slow-motion effect in the first video image. A sketch of the matting-out and filling steps described above is given below.
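As an illustrative sketch (the per-frame masks and the use of OpenCV inpainting are assumptions made for the example; the disclosure only describes matting out the target area and filling it), the matting-out and filling could be performed per frame as follows:

```python
import cv2
import numpy as np

def replace_target(frame, first_mask, second_target, second_mask):
    """Matte out the first target object's area and fill in the
    speed-adjusted second target object.

    frame:         HxWx3 uint8 frame of the first video image
    first_mask:    HxW bool mask of the first target object's area
    second_target: HxWx3 uint8 frame derived from the second video image
    second_mask:   HxW bool mask of the target object within second_target
    """
    mask8 = first_mask.astype(np.uint8) * 255
    # Remove the original target and repair the background behind it.
    background = cv2.inpaint(frame, mask8, 3, cv2.INPAINT_TELEA)
    # Fill the adjusted target into the target area.
    out = background.copy()
    out[second_mask] = second_target[second_mask]
    return out
```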
It should be noted that, in step 330, the electronic device may process the video images captured in real time or, in order to avoid wasted computation in case the user cancels recording, it may process the first video image and the second video image only after the user confirms stopping the recording. The order in which the first video image and the second video image are processed may be set according to the actual application and is not limited here.
For example, as shown in fig. 9, when the first video image and the second video image are acquired in step 901, step 903 may be executed to synchronously store the data of the second video image and the motion posture information in the storage location of the first video image, for example at the storage tail of the first video image, based on a preset custom storage format designed not to affect the normal parsing and playback of the first video image. Step 904 is then executed, which includes step 9041 of extracting the target object in the second video image and applying slow-motion processing to it according to the motion posture information and the second shooting parameter used to shoot the second video image, and step 9042 of matting out the target area where the target object is located in the first video image according to the slow-processed second video image; the order of steps 9041 and 9042 is not limited. Then, step 905 is executed to fill the slow-processed second video image into the target area through an AI image processing technique and to fuse and repair it with the first video image to obtain the target video. A minimal sketch of storing data at the tail of the first video file is given below.
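As an illustrative sketch (the "VPOS" marker and the length-prefixed JSON layout are hypothetical; the disclosure only specifies a preset custom storage format at the tail of the first video), appending the posture data after the video payload could look like this:

```python
import json
import struct

MAGIC = b"VPOS"  # hypothetical marker for the custom tail record

def append_pose_record(video_path, pose_info):
    """Append motion posture data after the regular video payload.

    A standard player stops at the end of the container's own structures,
    so bytes appended at the tail are expected not to affect normal parsing
    and playback of the first video image.
    """
    payload = json.dumps(pose_info).encode("utf-8")
    with open(video_path, "ab") as f:
        f.write(MAGIC + struct.pack("<I", len(payload)) + payload)
```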
In addition, from the technical perspective inside the electronic device, the video logic processing of the electronic device, that is, the processing flow of the image processor after the target object is determined, is described in detail with reference to fig. 10. The flow may include step 1001, which comprises step 10011 and step 10012 (their order is not limited): in step 10011, the first video image is captured through the first camera to obtain regular video data, and in step 10012, the second video image is captured through the second camera to obtain the motion posture information of the target object, such as coordinate data encoded in American Standard Code for Information Interchange (ASCII). Next comes step 1002, which comprises step 10021 and step 10022 (their order is not limited): in step 10021, the user selects the slowed-down playing speed of the target object identified by the ASCII coordinate data and the playing speed of the non-target objects in the first video image (which can also be sped up); in step 10022, after the playing speed of the second video image is adjusted, the video data is unpacked to obtain video data in a conventional video format such as mp4, for example a video buffer, and the ASCII coordinate data is decoded to obtain a pose buffer. In step 1003, the target object corresponding to the second video image is displayed in the first video image. A companion sketch for reading the stored posture record back is given below.
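As a companion to the storage sketch above (same hypothetical record layout), reading the posture record back before composing playback could be done as follows:

```python
import json
import struct

def read_pose_record(video_path):
    """Read back the appended motion posture record (hypothetical layout).

    Scans for the last occurrence of the marker; a production format would
    need an unambiguous container rather than this simple search.
    """
    with open(video_path, "rb") as f:
        data = f.read()
    idx = data.rfind(b"VPOS")
    if idx < 0:
        return None
    (length,) = struct.unpack("<I", data[idx + 4:idx + 8])
    return json.loads(data[idx + 8:idx + 8 + length].decode("utf-8"))
```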
In one or more optional embodiments, when target audio associated with the target video is detected, the volume information of the target audio is associated with the second adjustment parameter to obtain adjustment association data, and the running state of the target object is then updated according to the adjustment association data as the volume information changes.
Alternatively, in one or more other optional embodiments, when the volume information is first volume information, the running speed of the target object is adjusted to be greater than that of the first video image according to the adjustment association data to obtain a first target video; and when the volume information is second volume information, the running speed of the target object is adjusted to be less than that of the first video image according to the adjustment association data to obtain a second target video.
As shown in fig. 11, if the background audio associated with the target video is the target audio, the running state of the target object may be updated and displayed in two ways. In the first way, the running state of the target object is updated automatically according to the change of the volume information of the target audio: if the volume increases, the running state of the target object is updated to fast playing, and if the volume decreases, it is updated to slow playing. Similarly, the running state of the target object can be updated according to the playing speed of the target audio: if the target audio plays faster, the running state of the target object is updated to fast playing; conversely, if the target audio plays slower, it is updated to slow playing. In the second way, a user input adjusting the volume information is received, and the running state of the target object is updated to fast playing or slow playing according to that input. A minimal sketch of mapping volume to playback speed is given below.
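As an illustrative sketch (the linear mapping and its constants are assumptions; the disclosure only states that louder or faster audio corresponds to fast playing and quieter or slower audio to slow playing), mapping volume information to a playback speed for the target object could look like this:

```python
def speed_from_volume(volume, base_speed=1.0, gain=1.0, reference=0.5):
    """Map normalized volume (0.0-1.0) to a playback speed for the target object.

    Volume above `reference` yields fast playing (> base_speed); volume below
    it yields slow playing (< base_speed). The result is clamped to stay positive.
    """
    return max(0.1, base_speed + gain * (volume - reference) / reference)

# speed_from_volume(0.8) -> 1.6 (fast playing)
# speed_from_volume(0.2) -> 0.4 (slow playing)
```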
Therefore, in the embodiments of the application, the motion posture information of the first target object in the first video image is acquired while the first video image is shot, and the running state of the first target object in the first video image is then adjusted according to that motion posture information to obtain the target video. In this way, while one video image is shot, the motion posture information of the target object in that video image is acquired separately, so that the running state of the target object as originally captured in the video image is dynamically adjusted through this separately acquired motion posture information, and part of the content in the video image, such as the subject being shot, is locally adjusted.
In the shooting method provided by the embodiments of the present application, the execution subject may be a shooting apparatus, or a control module in the shooting apparatus for executing the shooting method. In the embodiments of the present application, a shooting apparatus executing the shooting method is taken as an example to describe the shooting apparatus provided by the embodiments of the present application.
Based on the same inventive concept, the application also provides a shooting device. The details are described with reference to fig. 12.
Fig. 12 is a schematic structural diagram of a shooting device according to an embodiment of the present application.
As shown in fig. 12, the shooting apparatus 120 is applied to the electronic device shown in fig. 1 or fig. 2, and may specifically include:
a receiving module 1201, configured to receive a first input from a user for shooting a video;
a shooting module 1202, configured to, in response to a first input, shoot a first video image, and in a process of shooting the first video image, acquire motion posture information of a first target object in the first video image;
and the processing module 1203 is configured to adjust an operating state of the first target object in the first video image according to the motion posture information, so as to obtain a target video.
The shooting apparatus 120 is described in detail below, specifically as follows:
In one or more possible embodiments, the shooting apparatus 120 may further include: a first determining module, configured to perform image recognition on the first video image and determine the first target object in the first video image.
In one or more possible embodiments, the shooting apparatus 120 may further include a second determining module; the receiving module 1201 is further configured to receive a second input on a video image to be shot; and the second determining module is configured to determine, in response to the second input, a first object corresponding to the second input in the video image to be shot as the first target object.
In one or more possible embodiments, the shooting module 1202 may be specifically configured to shoot a first video image through a first shooting parameter corresponding to a first camera, and shoot a first target object through a second shooting parameter corresponding to a second camera; the shooting frequency in the first shooting parameters is greater than or less than the shooting frequency in the second shooting parameters;
and to acquire motion posture information corresponding to the shooting frequency in the second shooting parameters based on the shooting frequency in the second shooting parameters.
In one or more possible embodiments, the processing module 1203 is specifically configured to generate a first adjustment parameter according to the motion posture information of the first target object;
the receiving module 1201 is further configured to receive a third input of the first adjustment parameter;
the processing module 1203 is specifically configured to, in response to the third input, adjust the running state of the first target object in the first video image based on the target adjustment parameter corresponding to the third input, so as to obtain the target video.
In one or more possible embodiments, the processing module 1203 is specifically configured to determine a second target object based on a target adjustment parameter corresponding to the third input;
according to the second target object, the target area where the first target object is located in the first video image is scratched out;
and filling the second target object into the target area to obtain a target video.
In one or more possible embodiments, the processing module 1203 is specifically configured to, when it is detected that the target video is associated with the target audio, associate the volume information of the target audio with the motion posture information to obtain adjustment association data;
and adjust the running state of the first target object in the first video image according to the adjustment association data to obtain the target video.
In one or more possible embodiments, when the running state of the first target object includes the running speed of the first target object, the processing module 1203 is specifically configured to: when the volume information is first volume information, adjust the running speed of the first target object to be greater than that of the first video image according to the adjustment association data to obtain a first target video;
and when the volume information is second volume information, adjust the running speed of the first target object to be less than that of the first video image according to the adjustment association data to obtain a second target video.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in an electronic apparatus. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The shooting device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 11, and is not described here again to avoid repetition.
In the embodiment of the application, a first video image is shot through a first shooting parameter, a second video image is shot through a second shooting parameter, the second video image is an image corresponding to the motion posture information of a target object in the first video image, and then the running state of the target object in the first video image is adjusted through the second video image to obtain the target video. Therefore, different video images are shot by adopting different shooting parameters, partial areas in another video image are adjusted according to one video image, partial contents in the video images such as a shot object main body are locally adjusted, and therefore the embodiment of the application provides a new mode for shooting the video images, the effect of shooting the video images is improved, the shooting interestingness is increased, and the later-stage video processing operation of a user is reduced.
Optionally, as shown in fig. 13, an embodiment of the present application further provides an electronic device 130, which includes a processor 1301, a memory 1302, and a program or instruction stored in the memory 1302 and executable on the processor 1301, where the program or instruction, when executed by the processor 1301, implements each process of the above shooting method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 14 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
The electronic device 1400 includes, but is not limited to: radio frequency unit 1401, network module 1402, audio output unit 1403, input unit 1404, sensor 1405, display unit 1406, user input unit 1407, interface unit 1408, memory 1409, processor 1410, and the like.
Those skilled in the art will appreciate that the electronic device 1400 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1410 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
In the embodiment of the present application, the user input unit 1407 is configured to receive a first input from a user for shooting a video. The processor 1410 is configured to shoot a first video image in response to the first input, acquire motion posture information of a first target object in the first video image in the process of shooting the first video image, and adjust the running state of the first target object in the first video image according to the motion posture information to obtain the target video.
Therefore, while one video image is shot, the motion posture information of the target object in that video image is acquired separately, so that the running state of the target object as originally captured in the video image is dynamically adjusted through this separately acquired motion posture information, and part of the content in the video image, such as the subject being shot, is locally adjusted.
It is to be appreciated that the input unit 1404 may include a Graphics Processing Unit (GPU) 14041 and a microphone 14042; the graphics processor 14041 processes image data of still images or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 1406 may include a display panel 14061, and the display panel 14061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1407 includes a touch panel 14071, also referred to as a touch screen, and other input devices 14072. The touch panel 14071 may include two parts: a touch detection device and a touch controller. Other input devices 14072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1409 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1410 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may not be integrated into the processor 1410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In addition, an embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing shooting method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A photographing method, characterized by comprising:
receiving a first input from a user for shooting a video;
in response to the first input, shooting a first video image, and acquiring motion posture information of a first target object in the first video image in the process of shooting the first video image;
and adjusting the running state of the first target object in the first video image according to the motion posture information to obtain a target video.
2. The method according to claim 1, wherein before the shooting a first video image in response to the first input and acquiring motion posture information of a first target object in the first video image in the process of shooting the first video image, the method further comprises:
and performing image recognition on the first video image, and determining a first target object in the first video image.
3. The method of claim 1, wherein prior to receiving the first input from the user to capture the video, the method further comprises:
receiving a second input of a video image to be shot;
and responding to the second input, and determining a first object corresponding to the second input in the video image to be shot as the first target object.
4. The method according to any one of claims 1-3, wherein the shooting a first video image in response to the first input, and acquiring motion posture information of a first target object in the first video image in the process of shooting the first video image, comprises:
shooting the first video image through a first shooting parameter corresponding to a first camera, and shooting the first target object through a second shooting parameter corresponding to a second camera; the shooting frequency in the first shooting parameters is greater than or less than the shooting frequency in the second shooting parameters;
and acquiring motion posture information corresponding to the shooting frequency in the second shooting parameters based on the shooting frequency in the second shooting parameters.
5. The method according to claim 1, wherein the adjusting the running state of the first target object in the first video image according to the motion posture information to obtain a target video comprises:
generating a first adjusting parameter according to the motion posture information of the first target object;
receiving a third input to the first adjustment parameter;
and responding to the third input, and adjusting the running state of a first target object in the first video image based on a target adjusting parameter corresponding to the third input to obtain a target video.
6. The method according to claim 5, wherein the adjusting the running state of the first target object in the first video image based on the target adjustment parameter corresponding to the third input to obtain the target video comprises:
determining a second target object based on a target adjustment parameter corresponding to the third input;
according to the second target object, removing a target area where the first target object is located in the first video image;
and filling the second target object into the target area to obtain a target video.
7. The method according to claim 1, wherein the adjusting the running state of the first target object in the first video image according to the motion posture information to obtain a target video comprises:
under the condition that target audio associated with the target video is detected, associating the volume information of the target audio with the motion posture information to obtain adjustment association data;
and adjusting the running state of the first target object in the first video image according to the adjustment association data to obtain a target video.
8. The method according to claim 7, wherein the running state of the first target object comprises a running speed of the first target object; and the adjusting the running state of the first target object in the first video image according to the adjustment association data to obtain a target video comprises:
when the volume information is first volume information, adjusting the running speed of the first target object to be greater than that of the first video image according to the adjustment association data to obtain a first target video;
and when the volume information is second volume information, adjusting the running speed of the first target object to be less than that of the first video image according to the adjustment association data to obtain a second target video.
9. A camera, comprising:
a receiving module, configured to receive a first input from a user for shooting a video;
a shooting module, configured to shoot a first video image in response to the first input, and acquire motion posture information of a first target object in the first video image in the process of shooting the first video image;
and a processing module, configured to adjust the running state of the first target object in the first video image according to the motion posture information to obtain a target video.
10. An electronic device, comprising: processor, memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the photographing method according to any one of claims 1-8.
11. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the photographing method according to any one of claims 1 to 8.
CN202110602880.XA 2021-05-31 2021-05-31 Shooting method, shooting device, electronic equipment and storage medium Pending CN113347356A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110602880.XA CN113347356A (en) 2021-05-31 2021-05-31 Shooting method, shooting device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110602880.XA CN113347356A (en) 2021-05-31 2021-05-31 Shooting method, shooting device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113347356A true CN113347356A (en) 2021-09-03

Family

ID=77473304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110602880.XA Pending CN113347356A (en) 2021-05-31 2021-05-31 Shooting method, shooting device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113347356A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114257854A (en) * 2021-11-19 2022-03-29 支付宝(杭州)信息技术有限公司 Volume control method, volume control device and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008109245A (en) * 2006-10-24 2008-05-08 Sony Corp Recording and reproducing device
JP2011087203A (en) * 2009-10-19 2011-04-28 Sanyo Electric Co Ltd Imaging apparatus
CN107396019A (en) * 2017-08-11 2017-11-24 维沃移动通信有限公司 A kind of slow motion video method for recording and mobile terminal
CN112422863A (en) * 2019-08-22 2021-02-26 华为技术有限公司 Intelligent video recording method and device
CN112532865A (en) * 2019-09-19 2021-03-19 华为技术有限公司 Slow-motion video shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210903