CN112511750B - Video shooting method, device, equipment and medium


Info

Publication number: CN112511750B
Authority: CN (China)
Prior art keywords: scene, video, target, input, shooting
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN202011376642.3A
Other languages: Chinese (zh)
Other versions: CN112511750A
Inventors: 刘梁贵, 张巧双
Current assignee: Vivo Mobile Communication Co Ltd
Original assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011376642.3A
Publication of application: CN112511750A
Application granted; publication of CN112511750B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Abstract

The application discloses a video shooting method, a video shooting device, video shooting equipment and a video shooting medium, and belongs to the technical field of shooting. The video shooting method comprises the following steps: in the video shooting process, first scene information is obtained; scene characteristics are determined according to the first scene information; and a video is shot according to the target filter effect associated with the scene characteristics to obtain a first video. During video shooting, the filter effect can thus be adjusted dynamically as the scene information changes, which improves how the filter processes the captured video and better meets the user's video shooting requirements.

Description

Video shooting method, device, equipment and medium
Technical Field
The application belongs to the technical field of shooting, and particularly relates to a video shooting method, device, equipment and medium.
Background
As is well known, special video effects can be achieved through filters during video shooting, allowing a user to shoot personalized videos. However, in the prior art, a single, fixed filter effect is generally used throughout video shooting, so the resulting video effect is poor and it is difficult to meet user requirements.
Disclosure of Invention
The embodiments of the application aim to provide a video shooting method, a video shooting device, video shooting equipment and a video shooting medium, which can solve the prior-art problem that the captured video effect is poor because a single, fixed filter effect is adopted during video shooting.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video shooting method, including:
in the video shooting process, first scene information is obtained;
determining scene characteristics according to the first scene information;
shooting a video according to a target filter effect associated with the scene characteristics to obtain a first video; the target filter effect includes at least one of a target image effect, a target sound effect, and a target filter special effect.
In a second aspect, an embodiment of the present application provides a video shooting apparatus, including:
the first acquisition module is used for acquiring first scene information in the video shooting process;
the first determining module is used for determining scene characteristics according to the first scene information;
the first shooting module is used for shooting videos according to the target filter effect associated with the scene characteristics to obtain first videos; the target filter effect includes at least one of a target image effect, a target sound effect, and a target filter special effect.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method according to the first aspect.
According to the video shooting method provided by the embodiment of the application, during video shooting, the scene characteristics are determined according to the first scene information, and video shooting is performed using the target filter effect associated with the scene characteristics to obtain a first video. In this way, the filter effect can be adjusted dynamically based on changes in the scene information, which improves how the filter processes the captured video and better meets the user's video shooting requirements.
Drawings
Fig. 1 is a flowchart of a video shooting method provided in an embodiment of the present application;
FIG. 2a is a diagram of an example of a video editing interface in an embodiment of the present application;
FIG. 2b is a diagram of another example of a video editing interface in an embodiment of the present application;
fig. 3 is another flowchart of a video shooting method provided in an embodiment of the present application;
FIG. 4 is a flow chart of a filter effect use and adjustment process in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a video shooting apparatus provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 7 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The video shooting method, apparatus, device and medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings and specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flowchart of a video shooting method according to an embodiment of the present application. The video shooting method may include:
step 101, acquiring first scene information in a video shooting process;
step 102, determining scene characteristics according to the first scene information;
step 103, shooting a video according to the target filter effect associated with the scene characteristics to obtain a first video; the target filter effect includes at least one of a target image effect, a target sound effect, and a target filter special effect.
The video shooting method provided by the embodiment of the application can be applied to electronic equipment with a camera, such as a personal computer, a mobile terminal and the like, and is not limited specifically here. The electronic device may take a video shot of a scene, which may be, for example, a concert, sunrise, or indoors, etc.
Some information that can be acquired by the electronic device, such as light information or sound information, may exist in the above scenes. Taking the concert scene as an example, there will generally be the singer's voice, the accompaniment, audience cheering, and so on; these sounds may correspond to the above-mentioned sound information. The stage area and the audience area usually have corresponding light intensities, which may correspond to the above light information. Generally, light information is carried by images, and an image can carry additional optical information such as contrast and dynamic range, so the description hereinafter will mainly refer to image information.
In this embodiment, the first scene information may correspond to at least one of image information and sound information of a scene during a video shooting process. The image information can be directly acquired by an element such as a camera. The sound information may be acquired by a sound sensor or the like.
Based on the first scene information, a scene characteristic may be determined, where the scene characteristic may refer to a characteristic of the scene information, such as an image characteristic or a sound characteristic. For example, according to the image information, the image characteristics such as brightness, brightness distribution characteristics, contrast characteristics and the like can be determined; from the sound information, sound characteristics such as volume level characteristics, volume change characteristics, or melody characteristics can be determined.
It is easy to understand that, taking the concert scene as an example, at the user-facing level the scene features may also be regarded as scene tags such as "stage area", "audience area", "in singing", or "end of singing". For simplicity of illustration, various types of scene features may be represented hereinafter in the form of scene tags.
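To make the mapping from scene information to scene tags concrete, the following Kotlin sketch shows one possible, purely illustrative way to derive such tags from image and sound statistics; the class names, tag strings, and thresholds are assumptions for illustration and are not prescribed by this disclosure.

```kotlin
// Minimal, hypothetical sketch of deriving scene tags from first scene information.
// The class/function names and all thresholds are illustrative assumptions.
data class SceneInfo(
    val pixelLuma: List<Int>,        // per-pixel luminance samples, 0..255
    val volumeSamples: List<Double>  // recent volume samples in dB
)

fun deriveSceneTags(info: SceneInfo): Set<String> {
    val tags = mutableSetOf<String>()

    // Image feature: strong bright/dark contrast suggests the stage area is in focus;
    // a uniformly dark frame suggests the audience area.
    val mean = info.pixelLuma.average()
    val contrast = (info.pixelLuma.maxOrNull() ?: 0) - (info.pixelLuma.minOrNull() ?: 0)
    when {
        contrast > 180 -> tags += "stage area"
        mean < 60 -> tags += "audience area"
    }

    // Sound feature: sustained high volume followed by a sudden drop suggests a song just ended.
    val recent = info.volumeSamples.takeLast(10)
    val earlier = info.volumeSamples.dropLast(10).takeLast(30)
    if (recent.isNotEmpty() && earlier.isNotEmpty()) {
        when {
            earlier.average() > 70 && recent.average() < 45 -> tags += "end of singing"
            recent.average() > 70 -> tags += "in singing"
        }
    }
    return tags
}
```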
In this embodiment, corresponding filter effects may be preconfigured for different scene characteristics, and in the video shooting process, the associated filter effects may be called to perform video shooting according to the determined scene characteristics.
The filter effect here may be an image filter effect, an audio filter effect, or a filter effect, and is not limited in detail here. As to the action principle of the above various filter effects on the shot video, the following examples can be used for illustration:
for the image filter effect, the video image can be processed through an image filter with preset filter parameters so as to change the brightness, contrast, or dynamic range of the video image; for the audio filter effect, an audio filter with preset filter parameters can be used to process the video sound so as to change its volume, frequency, and the like; for the filter special effect, a specific image special effect, such as a silhouette or a light effect, may be added to the video image, or a sound special effect, such as cheering, may correspondingly be added to the video sound.
Generally, the image filter effect and the audio filter effect can be considered to correspond to image filter parameters and audio filter parameters, respectively, while the filter special effect may correspond to additionally added image content or sound content.
The association relation between the scene characteristics and the filter effect can be preset; the following also illustrates this association relation in connection with the concert scene:
for example, when a user takes a video shot in a scene of a concert, a shooting focus may be switched between a stage area and an audience area, and when the stage area is taken as the shooting focus, the brightness and the dark contrast are obvious, so that the brightness of the audience area can be properly reduced, and the dynamic range of the stage area can be improved; at this time, the scene characteristic may correspond to that the contrast of the image information is greater than a threshold, or correspond to the scene label of the "stage zone", and the filter effect associated with the scene characteristic may be an image filter effect, which is used to reduce the brightness of the audience zone and improve the dynamic range of the stage zone.
For another example, after the climactic final notes of a song the singer may enter a quiet interval; from the perspective of the sound information, the volume may drop suddenly after remaining high for a period of time. In this case, it can generally be considered that a song has just finished and the audience starts to applaud or cheer. The scene characteristic may then correspond to this specific trend in the volume of the sound information, or to the scene tag "end of singing" mentioned above, and the filter effect associated with this scene characteristic may correspond to a filter special effect that adds a section of cheering.
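A preset association of this kind could be represented, for example, as a lookup table; the Kotlin sketch below is a minimal illustration under assumed names and parameter values, not a definitive implementation of this disclosure.

```kotlin
// Hypothetical association table between scene tags and preset filter effects.
// FilterEffect and all concrete parameter values are illustrative assumptions only.
sealed class FilterEffect {
    data class ImageFilter(val audienceBrightnessGain: Double, val stageDynamicRangeGain: Double) : FilterEffect()
    data class AudioFilter(val vocalGainDb: Double) : FilterEffect()
    data class SpecialEffect(val description: String) : FilterEffect()
}

// One or more preset effects can be associated with each scene tag in advance.
val effectTable: Map<String, List<FilterEffect>> = mapOf(
    "stage area" to listOf(FilterEffect.ImageFilter(audienceBrightnessGain = 0.7, stageDynamicRangeGain = 1.3)),
    "in singing" to listOf(FilterEffect.AudioFilter(vocalGainDb = 3.0)),
    "end of singing" to listOf(FilterEffect.SpecialEffect("add a short cheering sound"))
)

// During shooting, the target filter effects are looked up from the tags determined at that moment.
fun targetEffectsFor(tags: Set<String>): List<FilterEffect> =
    tags.flatMap { effectTable[it].orEmpty() }
```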
During video shooting, the corresponding scene characteristics can be determined from the changing scene information, and the target filter effect in use can be adjusted according to those characteristics. In the concert scene, the shooting may pass through states such as the user focusing on the stage area, the user focusing on the audience area, the singer singing a song, or the singer finishing a song; the scene characteristics corresponding to these shooting states can be determined from the acquired scene information, and the target filter effect associated with each scene characteristic can then be selected for video shooting. In other words, as the shooting state changes, the target filter effect in use can be adjusted accordingly.
Performing video shooting according to the target filter effect associated with the scene characteristics to obtain a first video means that the video images or video sound in the first video may have the target filter effect applied to them; the target filter effect may include at least one of a target image effect, a target sound effect, and a target filter special effect.
According to the video shooting method provided by the embodiment of the application, during video shooting the scene characteristics are determined according to the first scene information, and video shooting is performed using the target filter effect associated with the scene characteristics to obtain a first video. In this way, the filter effect can be adjusted dynamically based on changes in the scene information, which improves how the filter processes the captured video and better meets the user's video shooting requirements.
In some examples, the target image effect may refer to an image filter effect determined according to the scene features; in general, image filter parameters such as an automatic exposure (AE) parameter, contrast, dynamic range, color saturation, or hue may be associated with the image filter effect. It is readily understood that the AE parameter may correspond to the brightness of the video image, and the dynamic range may correspond to the gray-scale ratio between the brightest and darkest portions of the video image.
Taking the scene of the concert as an example, when the focus of the video shooting is in the stage area, the matched target image effect can enable the stage area to have a higher dynamic range; when the focus of the video shot is at the audience area, the matching target image effect may cause the audience area to have a higher dynamic range.
Therefore, based on the use of the target image effect, the detail content in the video image can be effectively presented, and the quality of the video image is improved.
For the target sound effect, it may refer to an audio filter effect determined according to scene characteristics; in general, an audio filter effect may be associated with an audio filter parameter such as volume or frequency.
Taking the concert scene as an example, after the sound signals in the scene are acquired, they can be distinguished according to characteristics such as volume and rhythm. In short, the voices of the singer, the accompaniment, and the instruments can be distinguished from noise such as audience cheering, and the volume of the singer, accompaniment, and instruments can be amplified through the target sound effect, or a sound effect such as echo can additionally be added. Of course, by switching the target sound effect, the cheering of the audience can instead be emphasized in the quiet interval after the singer finishes singing, i.e., the volume of the cheering can be amplified.
It can be seen that based on the use of the target sound effect, specific sounds can be enhanced, attenuated or modified with special sound effects to achieve better auditory effects.
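As a rough illustration of such a target sound effect, the Kotlin sketch below amplifies frames judged to belong to the performance; the frame representation and the crude loud-and-on-beat heuristic are assumptions for illustration, not the method claimed here.

```kotlin
// Hypothetical sketch of a target sound effect: audio frames judged to be performance sound
// (singer, accompaniment, instruments) are amplified, while other frames are treated as crowd noise.
// isPerformance() is a deliberately crude stand-in for real sound-source discrimination.
data class AudioFrame(val samples: List<Double>, val volumeDb: Double, val onBeat: Boolean)

fun isPerformance(frame: AudioFrame): Boolean =
    frame.volumeDb > 60 && frame.onBeat          // loud and on-beat: assume it is the performance

fun applyTargetSoundEffect(frames: List<AudioFrame>, vocalGain: Double = 1.4): List<AudioFrame> =
    frames.map { frame ->
        if (isPerformance(frame))
            // Amplify the performance sound, clamping to avoid clipping.
            frame.copy(samples = frame.samples.map { (it * vocalGain).coerceIn(-1.0, 1.0) })
        else
            frame                                // leave (or later attenuate) the crowd noise
    }
```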
For the target filter special effect, it may be an image special effect, a sound special effect, or the like.
For example, in a concert scene, it may be necessary to shoot with the stage area in focus while also capturing the audience area to convey the live atmosphere of the concert. Since the shooting focus is on the brightly lit stage area, the audience area in the video image is dark, and content such as audience outlines and glow sticks is difficult to present effectively. In this case, special image effects such as fluorescent light effects or audience silhouettes can be added to the darker areas of the video image. Of course, the determination of the darker areas may be based on a preset condition such as a brightness threshold, which is not specifically limited here.
For example, a special sound effect such as cheering may be added after the song is finished in order to further represent the live atmosphere of the concert.
Therefore, the content of the video image or the sound content in the first video can be enriched by using the special effect of the filter, so that the requirement of the user on the personalized shooting of the video can be better met.
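For the image side of such special effects, one illustrative way to pick where an overlay would be rendered is a simple per-block brightness threshold, as in the hypothetical Kotlin sketch below; the block structure and threshold value are assumptions.

```kotlin
// Hypothetical sketch of where a filter special effect (e.g. an audience silhouette or glow-stick
// overlay) could be applied: image blocks whose mean luminance falls below an assumed threshold.
data class Block(val x: Int, val y: Int, val meanLuma: Double)

fun darkBlocks(blocks: List<Block>, lumaThreshold: Double = 40.0): List<Block> =
    blocks.filter { it.meanLuma < lumaThreshold }

fun main() {
    val blocks = listOf(Block(0, 0, 25.0), Block(1, 0, 180.0), Block(2, 0, 33.0))
    val targets = darkBlocks(blocks)
    // The special effect would be rendered only onto these darker blocks.
    println("apply silhouette overlay to ${targets.size} dark block(s)")
}
```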
In one example, the first scene information includes at least one of image information and sound information.
In combination with the concert example of the above embodiment, during video shooting it may be necessary to increase the dynamic range of the stage area when the shooting focus is on the stage area, and to increase the dynamic range of the audience area when the shooting focus is on the audience area. Recognizing the stage area and the audience area, i.e., determining the scene features, generally needs to be implemented based on the acquired image information. The image information may correspond to the brightness, color, or tone of the captured image; during video shooting, the scene characteristics can be determined and updated by continuously acquiring the image information of the video images, so that an appropriate image filter effect can be selected for subsequent shooting, ensuring the recognizability of the content and the visual quality of the video image.
Of course, in some possible embodiments, an appropriate audio filter effect may subsequently be selected according to the image information. For example, if recognition of the image information determines that the scene feature corresponds to an "exaggerated laugh", a suitable audio filter effect can be selected to process the laughter into a coarse, wild laughing sound, making the captured video more entertaining.
Similarly, during video shooting, scene features can also be determined by acquiring sound information such as volume and frequency, for example the scene features corresponding to "in singing" or "end of singing" mentioned in the above embodiments. When shooting under the "in singing" scene feature, a suitable audio filter effect can be selected to increase the volume of the singing voice; when shooting under the "end of singing" scene feature, sound special effects such as cheering can be added.
Therefore, acquiring the sound information facilitates the subsequent selection of an appropriate audio filter effect to improve the auditory quality of the captured video.
Of course, in some possible embodiments, the image filter effect may also be selected according to the sound information. For example, during video shooting, if the scene feature determined from the sound information corresponds to "calling one sound", a suitable image filter effect can be selected to process the video image into a grayscale image, making the captured video more entertaining.
Optionally, in step 101, before the obtaining the first scene information, the video shooting method may further include:
under the condition that a shooting preview picture is displayed, second scene information corresponding to the shooting preview picture is acquired;
determining an initial scene type according to the second scene information;
receiving a first input;
and responding to the first input, and carrying out video shooting according to the filter effect corresponding to the initial scene type.
Taking the shooting function of a mobile terminal as an example, when the user opens the shooting application, a shooting preview picture can be displayed on the display interface of the mobile terminal. At this point video shooting has not yet started, but by acquiring the second scene information of the shooting preview picture, an initial determination can be made as to the type of the current scene.
Specifically, as for the second scene information, at least one of image information and sound information may be included as well. Based on the second scene information, the type of the scene, i.e. the initial scene type, can be preliminarily determined, for example, by combining the brightness distribution rule in the image information, or the recognition result of the image content, or the volume and the rhythm in the sound information.
Each initial scene type may be associated in advance with a specific filter effect; for example, the determined initial scene type may be a concert, a sunrise, or the like, and the corresponding filter effect may be used to give the relatively large bright area in the image a high dynamic range.
In the video shooting process of the above embodiment, the scene characteristics are determined according to the first scene information, and the associated target filter effect is then determined according to the scene characteristics; in the present embodiment, by contrast, the scene type is only roughly judged at the shooting preview stage and the corresponding filter effect is determined, so the filter effect can be determined more quickly.
The first input may correspond to an input to a capture control for controlling video capture; the first input may be in the form of a single click, a double click, a long press or a swipe, etc., and is not particularly limited herein. In this embodiment, in response to the first input, shooting may be performed according to the filter effect determined in the shooting preview stage, that is, the filter effect corresponding to the initial scene type.
According to this embodiment, at the initial stage of video shooting a suitable filter effect can already be invoked, so that the brightness, color, volume and the like of the first few frames of the video do not differ obviously from those of subsequent frames, effectively ensuring the continuity of the filter effect.
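The preview-stage determination of an initial scene type and its default filter parameters could, for example, look like the following Kotlin sketch; the scene types, rules, and thresholds are illustrative assumptions only.

```kotlin
// Hypothetical sketch of the preview stage: a rough initial scene type is chosen from the second
// scene information before the first input starts recording. The rules and thresholds are assumptions.
enum class InitialSceneType { CONCERT, SUNRISE, INDOOR, GENERIC }

data class PreviewInfo(val meanLuma: Double, val lumaVariance: Double, val meanVolumeDb: Double)

fun classifyInitialScene(info: PreviewInfo): InitialSceneType = when {
    info.meanLuma < 70 && info.lumaVariance > 2000 && info.meanVolumeDb > 70 -> InitialSceneType.CONCERT
    info.meanLuma in 70.0..140.0 && info.lumaVariance > 1500 -> InitialSceneType.SUNRISE
    info.meanVolumeDb < 40 -> InitialSceneType.INDOOR
    else -> InitialSceneType.GENERIC
}

// Each initial scene type is mapped in advance to the filter parameters used when recording starts,
// so the first frames and the following frames share a continuous filter effect.
val initialFilterFor = mapOf(
    InitialSceneType.CONCERT to "raise dynamic range of large bright regions",
    InitialSceneType.SUNRISE to "boost color saturation",
    InitialSceneType.INDOOR to "neutral default parameters",
    InitialSceneType.GENERIC to "neutral default parameters"
)
```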
Generally speaking, when the shot scene is relatively stable during video shooting, the overall filter tone can be determined, and the variation range of the filter parameters can then be limited during shooting, so that the filter effect of the captured video has high consistency and stability. To achieve this, optionally, after receiving the first input and before acquiring the first scene information, the video shooting method may further include:
acquiring third scene information in response to the first input;
determining the type of a target filter according to the initial scene type and the third scene information;
the shooting a video according to the target filter effect associated with the scene characteristics to obtain a first video includes:
determining a target filter effect associated with the scene feature from a plurality of preset filter effects corresponding to the target filter type;
and shooting a video according to the target filter effect to obtain a first video.
As described in the above embodiment, the first input may correspond to an input for controlling video shooting; at this point, the third scene information may be acquired. The third scene information may also include at least one of image information and sound information. In practical applications, the third scene information may be derived from a preset number of frames at the start of the captured video, or it may correspond to the image information and sound information obtained within a preset time period after the first input is received.
According to the initial scene type and the third scene information, a target filter type can be determined, and the target filter type can be understood as the integral key of the filter. Specifically, the initial scene type may correspond to a type of a scene preliminarily determined at a photographing preview stage, and after receiving the first input, third scene information may be further acquired to determine whether the third scene information matches the initial scene type. For example, the determined initial scene type is a concert, and after receiving the first input, it may be further determined whether parameters such as volume and rhythm of the acquired sound signal satisfy a sound parameter range set for the concert scene.
In the case that the third scene information matches the initial scene type, the target filter type may be determined according to the initial scene type. In other words, acquiring the third scene information can be regarded as a secondary confirmation of the initial scene type; if that confirmation passes, the target filter type corresponding to the initial scene type is determined. At the same time, the match between the third scene information and the initial scene type indicates that the shot scene is stable, so the overall filter tone can be determined. The overall filter tone here can be regarded as an overall limitation on the filter parameters or on the range of special-effect mechanisms.
Of course, in the case that the third scene information does not match the initial scene type, the target filter type may be further determined according to the acquired third scene information.
There may be a plurality of preset filter effects that may be pre-associated with the scene characteristics. Also taking a scene of a concert as an example, for a target filter type, the target filter type may correspond to a set of preset filter effects that may be used in a shooting scene of the concert, and when video shooting is performed in the scene of the concert, different scene characteristics may appear, such as the aforementioned "stage area", "audience area", "in singing", or "end of singing", and each scene characteristic may be associated with one or more preset filter effects in advance; in the video shooting process, when a certain scene characteristic is determined, a preset filter effect associated with the scene characteristic can be used as a target filter effect.
In other words, in this embodiment, the target filter effects associated with the scene features determined during video shooting all come from the plurality of preset filter effects corresponding to the target filter type. Thus, every target filter effect used in the first video belongs to the same overall filter tone, so the target filter effects have high consistency and stability, which helps improve the overall visual or auditory effect of the first video.
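One way to picture the relationship between the target filter type and its preset effects is as a named set of presets keyed by scene tag, as in the hypothetical Kotlin sketch below; the names and presets are assumptions.

```kotlin
// Hypothetical sketch: the target filter type acts as an overall "tone" that fixes the set of
// preset effects available while recording.
data class PresetEffect(val id: String, val sceneTag: String)

data class TargetFilterType(val name: String, val presets: List<PresetEffect>)

val concertFilterType = TargetFilterType(
    name = "concert",
    presets = listOf(
        PresetEffect("stage-hdr", "stage area"),
        PresetEffect("audience-dim", "audience area"),
        PresetEffect("vocal-boost", "in singing"),
        PresetEffect("cheering-overlay", "end of singing")
    )
)

// The target effect is always picked from the presets of the fixed target filter type, which keeps
// all effects used in the first video within the same overall tone.
fun selectTargetEffect(type: TargetFilterType, sceneTag: String): PresetEffect? =
    type.presets.firstOrNull { it.sceneTag == sceneTag }
```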
Optionally, in order to enable a user to know a type of a target filter used in a video shooting process, after determining the type of the target filter according to the initial scene type and the third scene information, the video shooting method may further include:
and displaying the identification corresponding to the type of the target filter.
In an example, the identifier corresponding to the target filter type may be text such as "concert", "sunrise", or "indoor". Since this text matches the scene type, the user can intuitively judge, in combination with the shooting scene, whether the target filter type adopted for the video shooting is appropriate.
The identifier corresponding to the type of the target filter can be displayed in a video shooting interface; in some possible implementations, user input for these identifications may also be obtained and the type of target filter employed in the video capture switched in response to the input.
Optionally, in step 103, after the video capturing is performed according to the target filter effect associated with the scene feature to obtain the first video, the video capturing method may further include:
and editing the first video to obtain a second video.
In this embodiment, the editing of the first video mainly refers to editing a target filter effect adopted in the first video, and certainly, in some feasible implementations, the editing may also be performed on a duration or a frame rate of the first video.
According to the embodiment, the second video is obtained by editing the first video, so that the requirement of the user on personalized adjustment of the shot video is favorably met. The following will specifically describe the manner of editing the first video.
Optionally, editing the first video to obtain a second video, including:
receiving a second input;
in response to the second input, displaying a play progress bar of the first video, the play progress bar including a mark thereon corresponding to the target filter effect, the mark being located at a play progress associated with the target filter effect;
receiving a third input for a target mark, the mark comprising the target mark;
and responding to the third input, and adjusting the target filter effect corresponding to the target mark to obtain a second video.
In conjunction with an actual scene, the second input may be a click input to a particular control in the shooting interface, and in response to the second input, the shooting interface may be switched to a video editing interface in which a play progress bar of the first video is displayed. Of course, the second input mode and the interface switching mode are only an example, and may be specifically set according to actual needs; for example, the second input may be an input in the form of a double click, a long press, or a swipe; the video editing interface can also be switched after receiving a second input for a specific control of the video preview interface.
In this embodiment, the displayed play progress bar of the first video includes marks corresponding to the target filter effect, and each mark may be located at a play progress associated with the target filter effect. Here, the association relationship between the target filter effect and the play progress may refer to displaying a mark of the target filter effect at a time point of the play progress bar at which the target filter effect starts to be used.
For example, in the process of shooting to obtain a first video, a target filter special effect A is used at the 1-minute time point and is replaced by a target filter special effect B at the 2-minute time point; then, in the play progress bar of the first video, the mark corresponding to target filter special effect A may be displayed at the 1-minute point and the mark corresponding to target filter special effect B at the 2-minute point.
Of course, in some possible embodiments, the mark corresponding to the target filter effect may instead be displayed over the time period during which that effect is used in the play progress bar, i.e., the width or span of the mark may be adjusted according to how long the effect is used; alternatively, two marks may be set for one target filter effect and displayed at the time points where use of that effect starts and ends in the play progress bar. In other words, the specific display mode of the mark corresponding to the target filter effect can be set according to actual needs.
For the third input, the specific input form is not limited herein. The third input may be an input for a target mark. It is easily understood that, in the first video, the number of the target filter effects used may be one or more, and the corresponding mark displayed on the play progress bar may also be one or more, and the target mark corresponds to the mark operated by the third input among the marks.
The target filter effect corresponding to the target mark may be adjusted in response to a third input, for example, the third input may be a left-right dragging operation for the target mark, and at this time, the time for using the target filter effect corresponding to the target mark may be adjusted in response to the third input; or, the third input may be a dragging operation of the target mark to an area outside the display area where the progress bar is displayed, and at this time, the target filter effect corresponding to the target mark may be removed in response to the third input.
Of course, the above is merely an example of the adjustment manner of the third input and the corresponding target filter effect, and in practical applications, the manner of adjusting the target filter effect corresponding to the target mark in response to the third input may be set as needed.
Through the adjustment to the target filter effect in the first video, the second video can be obtained, so that the adjustment requirement of the user on the filter effect in the first video can be met.
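The marks and the third-input adjustments described above could be modelled, for illustration only, as in the following Kotlin sketch; the data structures and time values are assumptions.

```kotlin
// Hypothetical sketch of editing the first video through marks on the play progress bar: each mark
// records when a target filter effect starts to be used, and a drag either moves or removes it.
data class EffectMark(val effectId: String, var startMs: Long, val isAudio: Boolean)

class FirstVideoEdit(val durationMs: Long, val marks: MutableList<EffectMark>) {

    // Third input as a left/right drag on the target mark: adjust when the effect is used.
    fun moveMark(mark: EffectMark, newStartMs: Long) {
        mark.startMs = newStartMs.coerceIn(0L, durationMs)
    }

    // Third input as a drag outside the progress-bar area: remove the target filter effect.
    fun removeMark(mark: EffectMark) {
        marks.remove(mark)
    }
}

fun main() {
    val edit = FirstVideoEdit(
        durationMs = 180_000L,
        marks = mutableListOf(
            EffectMark("special-effect-A", 60_000L, isAudio = false),  // used from 1:00
            EffectMark("cheering-overlay", 120_000L, isAudio = true)   // used from 2:00
        )
    )
    edit.moveMark(edit.marks[0], 90_000L)  // drag A from 1:00 to 1:30
    edit.removeMark(edit.marks[1])         // drag the cheering mark off the bar to delete it
    println(edit.marks)                    // the remaining, edited effects define the second video
}
```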
Optionally, in order to facilitate the user to adjust the target filter parameter of the target filter effect, the adjusting the target filter effect corresponding to the target mark in response to the third input includes:
in response to the third input, displaying a parameter adjustment control corresponding to the target filter effect;
receiving a fourth input for a target parameter adjustment control, the parameter adjustment control comprising the target parameter adjustment control;
adjusting a target filter parameter associated with the target parameter adjustment control in response to the fourth input.
In combination with a practical application scenario, the third input may be a long-press input to a target mark, and in response to the third input, a parameter adjustment control corresponding to a target filter effect corresponding to the target mark may be displayed. For example, when the target filter effect is an image filter effect, the corresponding parameter adjustment control may respectively associate filter parameters such as brightness and contrast; when the target filter effect is an audio filter effect, the corresponding parameter adjusting control can respectively associate filter parameters such as volume, frequency and the like.
It can be seen that after responding to the third input, the number of the displayed parameter adjustment controls may be one or more, the fourth input may be an input for a target parameter adjustment control in the parameter adjustment controls, and the specific input form may be a single click, a double click, a long press, a slide, or the like, which is not limited specifically here. For example, the fourth input may be a drag input for a circular icon in the volume bar, a single click or a long press input for a volume adjustment key, or the like.
Referring to fig. 2a and 2b, fig. 2a is a diagram of an example of a video editing interface, in which a play progress bar P and a video preview window V are mainly displayed.
The play progress bar P includes a timeline and marks corresponding to the target filter effects, and each mark is displayed at the play progress associated with its target filter effect. Marks corresponding to different types of filter effects can be displayed differently: for example, an audio filter effect can be shown as a blue mark above the timeline, while an image filter effect can be shown as a red mark below the timeline. Displaying the marks of different filter-effect types in different ways helps the user intuitively see when each type of target filter effect is used in the captured video.
Referring to fig. 2b, after the user long-presses one of the marks, i.e., the target mark, a first window W for displaying the parameter adjustment controls can be shown floating over the video preview window V. When the target filter effect corresponding to the target mark is an audio filter effect, a first parameter adjustment control for adjusting a volume parameter and a second parameter adjustment control for adjusting a frequency parameter can be displayed in the first window W; both the volume parameter and the frequency parameter can be regarded as filter parameters of the audio filter effect. The user can operate the first or second parameter adjustment control to adjust the volume or frequency parameter of the audio filter effect.
In addition, a control for adjusting the use time of the target filter effect may also be displayed in the first window W, for example a control labeled "move to 02".
In an example, when the target filter parameter of the target filter effect corresponding to the target mark is adjusted, the video image after the target filter parameter adjustment may be displayed in the video preview window V, or the image frame corresponding to the playing progress in the video image may be played, so that the user may better visually grasp the influence of the adjustment of the target filter parameter on the video effect, and further may determine a more reasonable target filter parameter. Of course, when the filter parameters of the audio filter effect are adjusted, the sound of the video at the corresponding playing progress of the audio filter effect can be played, so that the user can conveniently grasp the influence of the adjustment of the filter parameters on the audio filter effect.
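As an illustrative sketch of the parameter adjustment controls and the fourth input, the following Kotlin code binds each control to one filter parameter and clamps updates to an assumed range; the labels, defaults, and ranges are assumptions rather than values from this disclosure.

```kotlin
// Hypothetical sketch of the parameter-adjustment controls shown after a long press on a mark
// (the first window W in fig. 2b): each control is bound to one target filter parameter.
data class ParamControl(val label: String, var value: Double, val range: ClosedFloatingPointRange<Double>)

fun controlsFor(isAudioEffect: Boolean): List<ParamControl> =
    if (isAudioEffect) listOf(
        ParamControl("volume", 1.0, 0.0..2.0),      // first parameter adjustment control
        ParamControl("frequency", 1.0, 0.5..2.0)    // second parameter adjustment control
    ) else listOf(
        ParamControl("brightness", 0.0, -1.0..1.0),
        ParamControl("contrast", 1.0, 0.5..2.0)
    )

// Fourth input: the user operates a target control and the associated filter parameter is updated
// within its allowed range; the preview window would then replay the affected frames.
fun applyFourthInput(control: ParamControl, newValue: Double) {
    control.value = newValue.coerceIn(control.range)
}
```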
Referring to fig. 3, fig. 3 shows an implementation process of the above video shooting method applied in an application embodiment, specifically including:
step 301, filter effect initialization.
When the user opens the camera of the electronic device, the filter may be off by default. The electronic device then starts collecting environment information and processes it to output an initial scene type, which serves as the basis for loading the filter-effect initialization parameters; after the filter is turned on, the filter effect thus loads the default filter parameters corresponding to the initial scene type, ensuring the continuity of the subsequent filter effect.
Step 302, turning on the filter, and determining the overall filter tone according to the surrounding environment information.
Taking a concert scene as an example, the electronic device first needs to recognize, according to specific parameters or mechanisms, that the video being recorded is in a concert scene; at the same time, the filter effect is limited to a relatively reasonable range, i.e., the overall filter tone is determined. The parameters or mechanisms may be obtained by combining, within a preset time period after the filter is turned on, the sound volume (for example, the decibel value stays above a certain threshold for a period of time), the rhythm (for example, the sound with the largest energy component follows a certain beat), the brightness (for example, the environment is dark overall but has drastic brightness fluctuations), and the initial scene type. In this way, the consistency and stability of the filter effect can be ensured.
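A check of this kind might, purely as an illustration, combine window statistics with the initial scene type as in the Kotlin sketch below; the thresholds are assumptions rather than values taken from this disclosure.

```kotlin
// Hypothetical sketch of determining the overall filter tone: within a preset window after the
// filter is turned on, volume, beat, and brightness statistics are combined with the initial scene type.
data class WindowStats(
    val minVolumeDb: Double,   // lowest volume observed in the window
    val beatDetected: Boolean, // the dominant sound follows a regular beat
    val meanLuma: Double,      // overall brightness
    val lumaSwing: Double      // size of the brightness fluctuations
)

fun confirmFilterType(initialSceneType: String, stats: WindowStats): String {
    val looksLikeConcert = stats.minVolumeDb > 65 &&
            stats.beatDetected &&
            stats.meanLuma < 80 && stats.lumaSwing > 120
    return when {
        looksLikeConcert -> "concert"   // confirms (or overrides toward) the concert filter tone
        else -> initialSceneType        // no stronger evidence: keep the preview-stage type
    }
}
```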
Step 303, with the filter on, adjusting the filter parameters in real time during video recording.
The filter parameters here may be not only image-related effect parameters, such as brightness, color, and hue, but also sound-related effect parameters, such as volume and frequency. In addition, during video shooting, additional special effects can be added; for example, a fluorescent light effect or audience silhouettes can be added to the video image, or echo enhancement and audience cheering can be added to the sound.
The adjustment process of the filter parameters from the three angles of brightness, sound effect and color will be illustrated below.
Brightness: taking a concert scene as an example, on the one hand, when shooting a concert the stage is mostly the focus and the bright/dark contrast of the video image is strong; this can be treated as a scene feature for which an independent base exposure parameter is configured, appropriately reducing the brightness of the audience area, increasing the dynamic range of the bright stage area, and enhancing the brightness and contrast of discrete light sources such as glow sticks. On the other hand, the picture is not always focused on the stage; for example, when it is focused on the audience, the picture as a whole is dark and bright areas or discrete light sources may appear at its edges, which can be treated as another scene feature for which the dynamic range of the dark areas is increased.
Sound effect: to highlight the singer's voice, the accompaniment, the instruments, and similar sounds, a noise-removal function can be provided: environmental noise is distinguished by extracting the sound characteristics of the vocals and music, such as volume, rhythm, and pitch, and the volume of the desired sound is then amplified appropriately, with a corresponding threshold set for the amplification. In addition, certain characteristic timbres and melodies can be enhanced, or special sound effects such as echo can be added. Specific sounds are thus enhanced, attenuated, or modified with special effects to obtain a better auditory effect.
Color: during a concert, the color is relatively monotonous and mainly varies in brightness, with a small amount of fluorescent color such as stage lighting, glow sticks, and light boards in the auditorium; the color saturation can therefore be raised appropriately to obtain a slightly exaggerated picture effect. For relatively dark parts such as the audience seats, the brightness can be reduced further to form an audience-silhouette special effect that enhances the visual impact.
The filter parameters and special effects can change in real time following the environment information. The mechanism is to analyze the environment information collected by the electronic device and, when certain characteristic sounds or images are detected, adjust the corresponding filter parameters or enable/disable certain special effects, so that the progression of the filter effect remains reasonable.
Step 304, after video shooting is finished, reviewing and editing the video.
In some cases, a user may be dissatisfied with the filter effect of a certain portion of the captured video; the filter effects used in the video can be presented visually, and the user can edit them according to preference. In one example, the editing interface for the filter effects may be as shown in fig. 2a and fig. 2b.
As shown in fig. 2a and 2b, the bottom of the electronic device's screen may show a progress bar for the whole video, and the marks above and below the progress bar control the editing of audio filter effects and image filter effects, respectively. For example, an upper mark indicates that at that time node, or for a certain period starting from it, the sound has been edited with an audio filter effect, such as adding an audience-cheering sound effect or applying reverberation to that period; a lower mark indicates that at that time node, or for a certain period starting from it, the video image has been edited with an image filter effect, such as adjusting brightness or color. Of course, the marking forms of the various filter effects are not limited to the above; different types of filter effects can also be distinguished by color, shape, and the like.
After a long press or another input on any mark, a separate menu bar can pop up for adjusting the filter parameters: for example, the color class can adjust hue, color, and the like; the brightness class can adjust contrast, local brightness, and exposure; and the sound-effect class can adjust frequency and volume. Of course, the user may also operate the marks to move the position of a filter effect, or to add or delete a specific filter effect.
Referring to fig. 4, fig. 4 shows a process for using and adjusting the filter effect, which specifically includes the following steps:
step 401, collecting initial environment information and setting a filter scene identifier under the condition of opening a camera;
Generally, after the camera is turned on, a shooting preview interface may be entered first, at which point initial environment information such as image information and sound information can be collected. The filter scene identifier here can be regarded as an initial judgment of the scene type, for example a concert, a sunrise, or an indoor scene. The filter scene identifier may be an intermediate parameter that is not presented on the shooting preview interface; of course, as a possible implementation, it may also be presented on the shooting preview interface.
Step 402, under the condition of receiving input for controlling video shooting, calling corresponding initial filter parameters according to filter scene identification;
for each filter scene identifier or each initial scene type, a corresponding initial filter parameter can be preset, and the initial filter parameter can be called to perform video shooting under the condition of entering video shooting, so that the continuity of subsequent filter effects is ensured.
Step 403, after video shooting starts, acquiring the scene information of the first N frames of video images and determining the overall filter tone in combination with the filter scene identifier, so as to limit the parameters of subsequent filter effects and ensure their consistency and stability, where N is a positive integer.
Step 404, in the normal shooting process of the video, continuously collecting sound information and image information, and determining sound characteristics and image characteristics;
step 405, adjusting the audio filter parameters and the image filter parameters according to the determined sound characteristics and image characteristics, or switching the filter effect;
step 406, determining whether a user input for closing the filter effect is received, if yes, executing step 407; if not, the step 403 can be executed again;
step 407, resetting the filter parameters, or turning off the filter effect.
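Steps 404 to 407 form a simple collect-adjust loop; the hypothetical Kotlin sketch below shows the shape of that loop with device-specific operations replaced by assumed callback lambdas.

```kotlin
// Hypothetical sketch of the loop in steps 404-407 above: scene information is collected
// continuously, the filters are adjusted on each pass, and the loop exits when the user closes the
// filter effect. The three lambdas stand in for device-specific code and are assumptions.
fun runFilterLoop(
    collectSceneInfo: () -> Pair<Double, Double>,   // returns (mean luminance, volume in dB)
    adjustFilters: (Double, Double) -> Unit,
    userClosedFilter: () -> Boolean
) {
    while (!userClosedFilter()) {                   // step 406: check for the close-filter input
        val (luma, volumeDb) = collectSceneInfo()   // step 404: collect image and sound information
        adjustFilters(luma, volumeDb)               // step 405: adjust parameters or switch effects
    }
    println("filter closed, parameters reset")      // step 407: reset parameters / turn off the effect
}

fun main() {
    var passes = 0
    runFilterLoop(
        collectSceneInfo = { 60.0 to 72.0 },
        adjustFilters = { luma, db -> println("adjusting for luma=$luma, volume=${db}dB") },
        userClosedFilter = { passes++ >= 3 }        // simulate the user closing the filter after 3 passes
    )
}
```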
The video shooting method provided by the embodiments of the present application can realize a shooting process in which the filter effect changes in real time based on environment information; switching of the filter effect is more convenient, users' demands for varied filter effects are met, and shooting becomes more engaging. Moreover, the video can be edited after shooting, which greatly extends the applicability and versatility of the method.
It should be noted that, in the video shooting method provided in the embodiment of the present application, the execution subject may be a video shooting device, or a control module in the video shooting device for executing the video shooting method. In the embodiment of the present application, a video shooting method executed by a video shooting device is taken as an example, and the video shooting device provided in the embodiment of the present application is described.
Fig. 5 is a schematic structural diagram of a video shooting apparatus according to an embodiment of the present application. The video shooting apparatus may include:
a first obtaining module 501, configured to obtain first scene information in a video shooting process;
a first determining module 502, configured to determine a scene characteristic according to the first scene information;
a first shooting module 503, configured to perform video shooting according to a target filter effect associated with the scene feature to obtain a first video; the target filter effect includes at least one of a target image effect, a target sound effect, and a target filter special effect.
The video shooting apparatus provided by the embodiment of the present application can acquire the first scene information during video shooting, determine the scene characteristics according to the first scene information, and then shoot video according to the target filter effect associated with the scene characteristics to obtain the first video. In this way, the target filter effect can be switched dynamically according to changes in the first scene information during shooting, enriching the filter effects, improving the overall processing effect of the filter on the first video, and better meeting users' demands for varied filter effects.
Optionally, the video shooting apparatus may further include:
the second acquisition module is used for acquiring second scene information corresponding to the shooting preview picture under the condition that the shooting preview picture is displayed;
the second determining module is used for determining the initial scene type according to the second scene information;
a receiving module for receiving a first input;
and the second shooting module is used for responding to the first input and carrying out video shooting according to the filter effect corresponding to the initial scene type.
Optionally, the video shooting apparatus may further include:
a third obtaining module, configured to obtain third scene information in response to the first input;
a third determining module, configured to determine a type of a target filter according to the initial scene type and the third scene information;
the first photographing module may include:
a determining unit, configured to determine a target filter effect associated with the scene feature from a plurality of preset filter effects corresponding to the target filter type;
and the shooting unit is used for shooting videos according to the target filter effect to obtain a first video.
Optionally, the video shooting apparatus may further include:
and the display module is used for displaying the identification corresponding to the type of the target filter.
Optionally, the video shooting apparatus may further include:
and the editing module is used for editing the first video to obtain a second video.
Optionally, the editing module may include:
a first receiving unit for receiving a second input;
a display unit, configured to display a play progress bar of the first video in response to the second input, the play progress bar including a mark corresponding to the target filter effect thereon, the mark being located at a play progress associated with the target filter effect;
a second receiving unit for receiving a third input for a target mark, the mark comprising the target mark;
and the adjusting unit is used for responding to the third input and adjusting the target filter effect corresponding to the target mark to obtain a second video.
Optionally, the adjusting unit may include:
the display subunit is used for responding to the third input and displaying a parameter adjusting control corresponding to the target filter effect;
a receiving subunit, configured to receive a fourth input for a target parameter adjustment control, where the parameter adjustment control includes the target parameter adjustment control;
an adjustment subunit, configured to adjust, in response to the fourth input, a target filter parameter associated with the target parameter adjustment control.
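A sketch of the parameter-adjustment step could be as simple as the following; the parameter names, value ranges, and control identifiers are assumptions, not values given in the patent.

```kotlin
// Hypothetical filter parameters; the second video is produced with the
// adjusted values.
data class FilterParams(val intensity: Float, val saturation: Float)

// Fourth input: apply the value from the target parameter adjustment control to
// the target filter parameter associated with it.
fun adjust(params: FilterParams, controlId: String, value: Float): FilterParams = when (controlId) {
    "intensity"  -> params.copy(intensity = value.coerceIn(0f, 1f))
    "saturation" -> params.copy(saturation = value.coerceIn(0f, 2f))
    else         -> params  // unknown control: leave the parameters unchanged
}
```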
Optionally, the first scene information includes at least one of image information and sound information.
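Since the claims name brightness-related characteristics for image information and volume-related characteristics for sound information, a toy extraction of such characteristics might look like the following; the particular statistics (mean, standard deviation, window-to-window change) are illustrative choices, not prescribed by the patent.

```kotlin
import kotlin.math.abs
import kotlin.math.sqrt

// Illustrative statistics only; only the categories of characteristics come from the claims.
data class ImageCharacteristics(val brightnessLevel: Double, val contrast: Double)
data class SoundCharacteristics(val volumeLevel: Double, val volumeChange: Double)

// Image information: per-pixel luma values normalized to [0, 1].
fun imageCharacteristics(luma: DoubleArray): ImageCharacteristics {
    val mean = luma.average()
    val variance = luma.map { (it - mean) * (it - mean) }.average()
    return ImageCharacteristics(brightnessLevel = mean, contrast = sqrt(variance))
}

// Sound information: successive per-window RMS volume values.
fun soundCharacteristics(rmsWindows: DoubleArray): SoundCharacteristics {
    val mean = rmsWindows.average()
    val changes = rmsWindows.toList().zipWithNext { a, b -> abs(b - a) }
    return SoundCharacteristics(
        volumeLevel = mean,
        volumeChange = if (changes.isEmpty()) 0.0 else changes.average()
    )
}
```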
The video shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like; the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like. The embodiments of the present application are not specifically limited in this respect.
The video shooting device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The video shooting device provided in the embodiment of the present application can implement each process implemented by the video shooting device in the video shooting method embodiments of fig. 1 to fig. 4, and for avoiding repetition, details are not repeated here.
Optionally, as shown in fig. 6, an embodiment of the present application further provides an electronic device 600, which includes a processor 601, a memory 602, and a program or instruction stored in the memory 602 and executable on the processor 601. When executed by the processor 601, the program or instruction implements each process of the above video shooting method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, such that functions such as managing charging, discharging, and power consumption are performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, and the description is omitted here.
The input unit 704 may be configured to acquire first scene information during a video shooting process;
accordingly, the processor 710 may be configured to determine a scene characteristic according to the first scene information;
performing video shooting according to the target filter effect associated with the scene characteristics to obtain a first video; the target filter effect includes at least one of a target image effect, a target sound effect, and a target filter special effect.
In the embodiment of the application, the electronic device can acquire the first scene information during video shooting, determine the scene characteristics according to the first scene information, and then perform video shooting according to the target filter effect associated with those scene characteristics to obtain the first video. In this way, the target filter effect can be switched dynamically as the first scene information changes during shooting, which enriches the available filter effects, improves the overall processing effect of the filter on the first video, and better satisfies the user's demand for diversified filter effects during video shooting.
Optionally, the input unit 704 may be further configured to, before acquiring the first scene information, acquire, in a case where a shooting preview screen is displayed, second scene information corresponding to the shooting preview screen;
accordingly, a user input unit 707 may be used to receive a first input;
accordingly, the processor 710 may be further configured to determine an initial scene type according to the second scene information;
and responding to the first input, and carrying out video shooting according to the filter effect corresponding to the initial scene type.
Optionally, the input unit 704 may be further configured to, after the receiving the first input and before the acquiring the first scene information, acquire third scene information in response to the first input;
accordingly, the processor 710 may be further configured to determine a target filter type according to the initial scene type and the third scene information;
correspondingly, the processor 710 may be further specifically configured to determine a target filter effect associated with the scene feature from a plurality of preset filter effects corresponding to the target filter type;
and shooting the video according to the target filter effect to obtain a first video.
Optionally, the display unit 706 may be configured to display an identifier corresponding to a target filter type after determining the target filter type according to the initial scene type and the third scene information.
Optionally, the processor 710 may be further configured to, after performing video shooting according to a target filter effect associated with the scene feature to obtain a first video, edit the first video to obtain a second video.
Optionally, the user input unit 707 may be further configured to receive a second input, and receive a third input for the target mark;
accordingly, the processor 710 may be further configured to display a play progress bar of the first video in response to the second input, the play progress bar including a mark thereon corresponding to the target filter effect, the mark being located at a play progress associated with the target filter effect;
and responding to the third input, and adjusting the target filter effect corresponding to the target mark to obtain a second video.
Optionally, the user input unit 707 may be further configured to receive a fourth input for the target parameter adjustment control;
accordingly, processor 710 may be further configured to display a parameter adjustment control corresponding to the target filter effect in response to the third input;
adjusting a target filter parameter associated with the target parameter adjustment control in response to the fourth input.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data, including but not limited to applications and operating systems. Processor 710 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
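Purely to connect the hardware just listed to the method, the two capture sources (camera frames processed by the graphics processing unit, samples from the microphone) could feed the first scene information through interfaces such as the following; these interfaces are assumptions for illustration and are not real platform APIs.

```kotlin
// Assumed source interfaces; in practice they would be backed by the camera
// pipeline (input unit 704 / GPU 7041) and the microphone (7042).
interface FrameSource { fun latestLuma(): DoubleArray }
interface MicSource { fun latestRmsWindows(): DoubleArray }

// First scene information combining image information and sound information.
data class FirstSceneInfo(val luma: DoubleArray, val rmsWindows: DoubleArray)

// Acquisition step carried out during video shooting; the processor would then
// derive scene characteristics from this structure.
fun acquireFirstSceneInfo(frames: FrameSource, mic: MicSource): FirstSceneInfo =
    FirstSceneInfo(frames.latestLuma(), mic.latestRmsWindows())
```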
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A video shooting method, comprising:
in the video shooting process, first scene information is obtained;
determining scene characteristics according to the first scene information;
performing video shooting according to the target filter effect associated with the scene characteristics to obtain a first video; the target filter effect comprises at least one of a target image effect, a target sound effect and a target filter special effect;
the first scene information is at least one of image information and sound information of a scene in the video shooting process;
in the case that the first scene information is the image information, the scene characteristics include at least one of a brightness level characteristic, a brightness distribution characteristic and a contrast characteristic;
when the first scene information is the sound information, the scene characteristics include at least one of a volume level characteristic, a volume change characteristic and a rhythm characteristic;
before the obtaining the first scene information, the method further includes:
under the condition that a shooting preview picture is displayed, second scene information corresponding to the shooting preview picture is acquired;
determining an initial scene type according to the second scene information;
receiving a first input;
responding to the first input, and carrying out video shooting according to a filter effect corresponding to the initial scene type;
after the receiving the first input and before the acquiring the first scene information, the method further includes:
responding to the first input, and acquiring third scene information;
determining the type of a target filter according to the initial scene type and the third scene information;
the performing video shooting according to the target filter effect associated with the scene characteristics to obtain a first video comprises:
determining a target filter effect associated with the scene feature from a plurality of preset filter effects corresponding to the target filter type;
performing video shooting according to the target filter effect to obtain a first video;
the determining a target filter type according to the initial scene type and the third scene information comprises:
determining the type of the target filter according to the initial scene type under the condition that the third scene information is matched with the initial scene type;
and under the condition that the third scene information is not matched with the initial scene type, determining the target filter type according to the third scene information.
2. The method of claim 1, wherein after determining a target filter type based on the initial scene type and the third scene information, the method further comprises:
and displaying the identification corresponding to the type of the target filter.
3. The method of claim 1, wherein after capturing the first video based on the target filter effect associated with the scene feature, the method further comprises:
receiving a second input;
in response to the second input, displaying a play progress bar of the first video, the play progress bar including a marker thereon corresponding to the target filter effect, the marker being located at a play progress associated with the target filter effect;
receiving a third input for a target indicia, the indicia comprising the target indicia;
and responding to the third input, and adjusting the target filter effect corresponding to the target mark to obtain a second video.
4. The method of claim 3, wherein the adjusting the target filter effect corresponding to the target indicia in response to the third input comprises:
in response to the third input, displaying a parameter adjustment control corresponding to the target filter effect;
receiving a fourth input for a target parameter adjustment control, the parameter adjustment control comprising the target parameter adjustment control;
adjusting a target filter parameter associated with the target parameter adjustment control in response to the fourth input.
5. A video shooting apparatus, comprising:
the first acquisition module is used for acquiring first scene information in the video shooting process;
the first determining module is used for determining scene characteristics according to the first scene information;
the first shooting module is used for shooting videos according to the target filter effect associated with the scene features to obtain a first video; the target filter effect comprises at least one of a target image effect, a target sound effect and a target filter special effect;
the first scene information is at least one of image information and sound information of a scene in the video shooting process;
in the case that the first scene information is the image information, the scene characteristics include at least one of a brightness level characteristic, a brightness distribution characteristic and a contrast characteristic;
when the first scene information is the sound information, the scene characteristics include at least one of a volume level characteristic, a volume change characteristic and a rhythm characteristic;
the second acquisition module is used for acquiring second scene information corresponding to the shooting preview picture under the condition that the shooting preview picture is displayed;
the second determining module is used for determining the initial scene type according to the second scene information;
a receiving module for receiving a first input;
the second shooting module is used for responding to the first input and carrying out video shooting according to the filter effect corresponding to the initial scene type;
a third obtaining module, configured to obtain third scene information in response to the first input;
a third determining module, configured to determine a type of a target filter according to the initial scene type and the third scene information;
the first photographing module includes:
a determining unit, configured to determine a target filter effect associated with the scene feature from a plurality of preset filter effects corresponding to the target filter type;
the shooting unit is used for shooting videos according to the target filter effect to obtain a first video;
the third determining module is specifically configured to determine the type of the target filter according to the initial scene type when the third scene information matches the initial scene type;
and under the condition that the third scene information is not matched with the initial scene type, determining the type of the target filter according to the third scene information.
6. The apparatus of claim 5, further comprising:
a first receiving unit for receiving a second input;
a display unit, configured to display a play progress bar of the first video in response to the second input, the play progress bar including a mark corresponding to the target filter effect thereon, the mark being located at a play progress associated with the target filter effect;
a second receiving unit for receiving a third input for a target mark, the mark comprising the target mark;
and the adjusting unit is used for responding to the third input and adjusting the target filter effect corresponding to the target mark to obtain a second video.
7. The apparatus of claim 6, wherein the adjusting unit comprises:
the display subunit is used for responding to the third input and displaying a parameter adjusting control corresponding to the target filter effect;
a receiving subunit, configured to receive a fourth input for a target parameter adjustment control, where the parameter adjustment control includes the target parameter adjustment control;
an adjustment subunit, configured to adjust, in response to the fourth input, a target filter parameter associated with the target parameter adjustment control.
8. An electronic device, characterized in that the electronic device comprises: a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the video shooting method according to any one of claims 1 to 4.
9. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the video shooting method according to any one of claims 1 to 4.
CN202011376642.3A 2020-11-30 2020-11-30 Video shooting method, device, equipment and medium Active CN112511750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011376642.3A CN112511750B (en) 2020-11-30 2020-11-30 Video shooting method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011376642.3A CN112511750B (en) 2020-11-30 2020-11-30 Video shooting method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN112511750A CN112511750A (en) 2021-03-16
CN112511750B true CN112511750B (en) 2022-11-29

Family

ID=74968152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011376642.3A Active CN112511750B (en) 2020-11-30 2020-11-30 Video shooting method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN112511750B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114302203A (en) * 2021-03-18 2022-04-08 海信视像科技股份有限公司 Image display method and display device
CN113194255A (en) * 2021-04-29 2021-07-30 南京维沃软件技术有限公司 Shooting method and device and electronic equipment
CN113470123A (en) * 2021-05-08 2021-10-01 广东观止文化网络科技有限公司 Video toning method and device, storage medium and shooting equipment
CN113965694B (en) * 2021-08-12 2022-12-06 荣耀终端有限公司 Video recording method, electronic device and computer readable storage medium
CN113645408B (en) * 2021-08-12 2023-04-14 荣耀终端有限公司 Photographing method, photographing apparatus, and storage medium
CN113810602B (en) * 2021-08-12 2023-07-11 荣耀终端有限公司 Shooting method and electronic equipment
CN113852755A (en) * 2021-08-24 2021-12-28 荣耀终端有限公司 Photographing method, photographing apparatus, computer-readable storage medium, and program product
CN115002335B (en) * 2021-11-26 2024-04-09 荣耀终端有限公司 Video processing method, apparatus, electronic device, and computer-readable storage medium
CN114390215B (en) * 2022-01-20 2023-10-24 脸萌有限公司 Video generation method, device, equipment and storage medium
CN114390214B (en) * 2022-01-20 2023-10-31 脸萌有限公司 Video generation method, device, equipment and storage medium

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533244A (en) * 2013-10-21 2014-01-22 深圳市中兴移动通信有限公司 Shooting device and automatic visual effect processing shooting method thereof
CN103971713A (en) * 2014-05-07 2014-08-06 厦门美图之家科技有限公司 Video file filter processing method
CN104103300A (en) * 2014-07-04 2014-10-15 厦门美图之家科技有限公司 Method for automatically processing video according to music beats
CN104967801A (en) * 2015-02-04 2015-10-07 腾讯科技(深圳)有限公司 Video data processing method and apparatus
CN106657810A (en) * 2016-09-26 2017-05-10 维沃移动通信有限公司 Filter processing method and device for video image
CN106686301A (en) * 2016-12-01 2017-05-17 努比亚技术有限公司 Picture shooting method and device
CN106688305A (en) * 2015-02-03 2017-05-17 华为技术有限公司 Intelligent matching method for filter and terminal
CN107592453A (en) * 2017-09-08 2018-01-16 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN109120992A (en) * 2018-09-13 2019-01-01 北京金山安全软件有限公司 Video generation method and device, electronic equipment and storage medium
CN110611776A (en) * 2018-05-28 2019-12-24 腾讯科技(深圳)有限公司 Special effect processing method, computer device and computer storage medium
CN110740262A (en) * 2019-10-31 2020-01-31 维沃移动通信有限公司 Background music adding method and device and electronic equipment
CN111050203A (en) * 2019-12-06 2020-04-21 腾讯科技(深圳)有限公司 Video processing method and device, video processing equipment and storage medium
CN111050050A (en) * 2019-12-30 2020-04-21 维沃移动通信有限公司 Filter adjusting method and electronic equipment
CN111246102A (en) * 2020-01-22 2020-06-05 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN111416950A (en) * 2020-03-26 2020-07-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN111435369A (en) * 2019-01-14 2020-07-21 腾讯科技(深圳)有限公司 Music recommendation method, device, terminal and storage medium
CN111586282A (en) * 2019-02-18 2020-08-25 北京小米移动软件有限公司 Shooting method, shooting device, terminal and readable storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3749822A (en) * 1971-12-30 1973-07-31 Veer F V D Animation method and apparatus
CN105323456B (en) * 2014-12-16 2018-11-30 维沃移动通信有限公司 For the image preview method of filming apparatus, image capturing device
CN105187737A (en) * 2015-07-31 2015-12-23 厦门美图之家科技有限公司 Image special effect processing display method, system and shooting terminal
CN106375660A (en) * 2016-09-13 2017-02-01 乐视控股(北京)有限公司 Photographic processing method and device
CN107730461A (en) * 2017-09-29 2018-02-23 北京金山安全软件有限公司 Image processing method, apparatus, device and medium
CN107967706B (en) * 2017-11-27 2021-06-11 腾讯音乐娱乐科技(深圳)有限公司 Multimedia data processing method and device and computer readable storage medium
CN110830845A (en) * 2018-08-09 2020-02-21 优视科技有限公司 Video generation method and device and terminal equipment
CN109729297A (en) * 2019-01-11 2019-05-07 广州酷狗计算机科技有限公司 The method and apparatus of special efficacy are added in video
CN109922268B (en) * 2019-04-03 2021-08-10 睿魔智能科技(深圳)有限公司 Video shooting method, device, equipment and storage medium
CN110225250A (en) * 2019-05-31 2019-09-10 维沃移动通信(杭州)有限公司 A kind of photographic method and terminal device
CN111050076B (en) * 2019-12-26 2021-08-27 维沃移动通信有限公司 Shooting processing method and electronic equipment
CN111416940A (en) * 2020-03-31 2020-07-14 维沃移动通信(杭州)有限公司 Shooting parameter processing method and electronic equipment
CN111541936A (en) * 2020-04-02 2020-08-14 腾讯科技(深圳)有限公司 Video and image processing method and device, electronic equipment and storage medium
CN111491123A (en) * 2020-04-17 2020-08-04 维沃移动通信有限公司 Video background processing method and device and electronic equipment
CN111866374A (en) * 2020-06-22 2020-10-30 上海摩象网络科技有限公司 Image shooting method and device, pan-tilt camera and storage medium
CN111654635A (en) * 2020-06-30 2020-09-11 维沃移动通信有限公司 Shooting parameter adjusting method and device and electronic equipment

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533244A (en) * 2013-10-21 2014-01-22 深圳市中兴移动通信有限公司 Shooting device and automatic visual effect processing shooting method thereof
CN103971713A (en) * 2014-05-07 2014-08-06 厦门美图之家科技有限公司 Video file filter processing method
CN104103300A (en) * 2014-07-04 2014-10-15 厦门美图之家科技有限公司 Method for automatically processing video according to music beats
CN106688305A (en) * 2015-02-03 2017-05-17 华为技术有限公司 Intelligent matching method for filter and terminal
CN104967801A (en) * 2015-02-04 2015-10-07 腾讯科技(深圳)有限公司 Video data processing method and apparatus
CN106657810A (en) * 2016-09-26 2017-05-10 维沃移动通信有限公司 Filter processing method and device for video image
CN106686301A (en) * 2016-12-01 2017-05-17 努比亚技术有限公司 Picture shooting method and device
CN107592453A (en) * 2017-09-08 2018-01-16 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN110611776A (en) * 2018-05-28 2019-12-24 腾讯科技(深圳)有限公司 Special effect processing method, computer device and computer storage medium
CN109120992A (en) * 2018-09-13 2019-01-01 北京金山安全软件有限公司 Video generation method and device, electronic equipment and storage medium
CN111435369A (en) * 2019-01-14 2020-07-21 腾讯科技(深圳)有限公司 Music recommendation method, device, terminal and storage medium
CN111586282A (en) * 2019-02-18 2020-08-25 北京小米移动软件有限公司 Shooting method, shooting device, terminal and readable storage medium
CN110740262A (en) * 2019-10-31 2020-01-31 维沃移动通信有限公司 Background music adding method and device and electronic equipment
CN111050203A (en) * 2019-12-06 2020-04-21 腾讯科技(深圳)有限公司 Video processing method and device, video processing equipment and storage medium
CN111050050A (en) * 2019-12-30 2020-04-21 维沃移动通信有限公司 Filter adjusting method and electronic equipment
CN111246102A (en) * 2020-01-22 2020-06-05 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN111416950A (en) * 2020-03-26 2020-07-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN112511750A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112511750B (en) Video shooting method, device, equipment and medium
US10200634B2 (en) Video generation method, apparatus and terminal
TWI475410B (en) Electronic device and method thereof for offering mood services according to user expressions
CN111163274B (en) Video recording method and display equipment
CN105117102B (en) Audio interface display methods and device
CN107230187A (en) The method and apparatus of multimedia signal processing
WO2022228412A1 (en) Photographing method, apparatus, and electronic device
CN104811829A (en) Karaoke interactive multifunctional special effect system
JP2011199858A (en) Content reproduction device, television receiver, content reproduction method, content reproduction program, and recording medium
US11863856B2 (en) Method and terminal device for matching photographed objects and preset text information
CN109168062A (en) Methods of exhibiting, device, terminal device and the storage medium of video playing
CN113676668A (en) Video shooting method and device, electronic equipment and readable storage medium
WO2022257367A1 (en) Video playing method and electronic device
JP2006262328A (en) Image printer and image processing method thereof
CN111291219A (en) Method for changing interface background color and display equipment
KR20190045894A (en) Apparatus for providing singing service
JP2011015129A (en) Image quality adjusting device
JP5498341B2 (en) Karaoke system
KR101973206B1 (en) Apparatus for providing singing service
CN112533023B (en) Method for generating Lian-Mai chorus works and display equipment
JP4702496B2 (en) Photo sticker creation apparatus and method, and program
KR101982346B1 (en) Apparatus for providing singing service
CN112073826B (en) Method for displaying state of recorded video works, server and terminal equipment
JP5562931B2 (en) Content reproduction apparatus, television receiver, content reproduction method, content reproduction program, and recording medium
JP2008052121A (en) Photographic sticker forming apparatus, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant