CN112822543A - Video processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112822543A
CN112822543A
Authority
CN
China
Prior art keywords
identifier, operable, point, video, current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011611682.1A
Other languages
Chinese (zh)
Inventor
王子乐
唐皓萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011611682.1A priority Critical patent/CN112822543A/en
Publication of CN112822543A publication Critical patent/CN112822543A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245 Processing of video elementary streams involving reformatting operations performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 Generation of protective data, e.g. certificates
    • H04N21/8352 Generation of protective data involving content or source identification data, e.g. Unique Material Identifier [UMID]

Abstract

The disclosure relates to a video processing method and device, an electronic device, and a storage medium. The method includes: in response to a sliding operation on at least one operable identifier of a target object to be processed on a video timeline, determining the current position of the operable identifier on the video timeline; acquiring the identification point position of at least one identification point on the video timeline; determining the current state of the operable identifier according to the relationship between its current position and the identification point position; and determining the target position of the operable identifier on the video timeline based on its current state. Because the target position is determined from the current state of the operable identifier, the operable identifier of the target object can be aligned with an identification point on the video timeline; compared with alignment achieved purely by the user dragging the target object, this alignment is more accurate and improves the video-editing experience.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
Existing short-video platforms allow users to publish short videos themselves. During publishing, if a user wants to add one or more pieces of material (stickers, text) to the short video, the user selects the material and then, referring to the picture or the video timeline, manually drags the head and tail of the material to align it with a segment of the short video, so that the material appears in the picture when the segment starts playing and disappears from the picture when the segment finishes playing.
However, because this scheme for adding material relies on manual dragging by the user, the alignment accuracy between the material and the video segment is low, which degrades the experience of adding material.
Disclosure of Invention
The present disclosure provides a video processing method and apparatus, an electronic device, and a storage medium to solve the deficiencies of the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided a video processing method, including:
in response to a sliding operation on at least one operable identifier of a target object to be processed on a video timeline, determining the current position of the operable identifier on the video timeline; the operable identifier is used to adjust the display endpoint position of the target object on the video timeline;
acquiring the position of an identification point of at least one identification point in the video time axis;
determining the current state of the operable identifier according to the relation between the current position of the operable identifier and the position of the identifier point;
and determining the target position of the operable identifier on the video time axis based on the current state of the operable identifier.
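As a concrete illustration of the four steps above, the following Python sketch computes a target position by snapping to the nearest identification point. The threshold value, function names, and pixel units are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical sketch of steps 1-4: positions are pixel offsets on the
# rendered timeline; SNAP_THRESHOLD plays the role of the "first preset
# distance" (the 7-pixel value is an assumed example).
SNAP_THRESHOLD = 7

def nearest_point(current_pos, point_positions):
    """Find the identification point closest to the marker's current position."""
    return min(point_positions, key=lambda p: abs(p - current_pos))

def target_position(current_pos, point_positions):
    """Snap to the nearest point when within the threshold (adsorption state);
    otherwise keep the dragged position (idle state)."""
    p = nearest_point(current_pos, point_positions)
    return p if abs(p - current_pos) <= SNAP_THRESHOLD else current_pos
```

For example, a marker dragged to pixel 103 with identification points at 0, 100, and 200 would snap to 100, while a marker at 150 would stay at 150.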
Optionally, determining the current state of the operable identifier according to the relationship between the current location of the operable identifier and the location of the identifier point includes:
calculating in turn the distance between the current position and the position of each of the at least one identification point, to obtain the identification point closest to the current position;
and when the distance between the position of the closest identification point and the current position is less than or equal to a first preset distance, determining that the current state of the operable identifier is the adsorption state.
Optionally, determining the target position of the operable identifier on the video timeline based on the current state of the operable identifier comprises:
and when the current state of the operable identifier is an adsorption state, taking the identifier point position of the nearest identifier point as the target position of the operable identifier on the video timeline.
Optionally, after obtaining the identification point closest to the current position, the method further includes:
and when the distance between the identifier point position of the nearest identifier point and the current position is greater than the first preset distance, determining that the current state of the operable identifier is an idle state.
Optionally, determining the target position of the operable identifier on the video timeline based on the current state of the operable identifier comprises:
and when the current state of the operable identifier is an idle state, taking the current position of the operable identifier as the target position of the operable identifier on the video timeline.
Optionally, after taking the identifier point position of the nearest identifier point as the target position of the operable identifier on the video timeline, the method further comprises:
determining the current position of the operable identifier in response to the sliding operation of the operable identifier by the user;
and when the distance between the current position of the operable identifier and the position of the identifier point of the nearest identifier point is less than or equal to the first preset distance, keeping the current state of the operable identifier as an adsorption state.
Optionally, after determining the current position of the operable identifier in response to a sliding operation of the operable identifier by a user, the method further includes:
and when the distance between the current position of the operable identifier and the identifier point position of the nearest identifier point is greater than the first preset distance, determining that the current state of the operable identifier is an adjustable state.
Optionally, determining the target position of the operable identifier on the video timeline based on the current state of the operable identifier comprises:
when the current state of the operable identifier is the adjustable state, determining an offset distance of the operable identifier, wherein the offset distance is the distance by which the current position of the operable identifier has moved beyond the first preset distance from the closest identification point;
and when the offset distance is less than or equal to the first preset distance, taking the position obtained by offsetting the position of the closest identification point by the offset distance in the sliding direction of the sliding operation as the target position of the operable identifier on the video timeline.
Optionally, after determining the offset distance of the operable identifier, the method further comprises:
and when the offset distance is greater than the first preset distance, determining that the current state of the operable identifier is an idle state.
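One plausible reading of the adjustable-state claims above can be sketched as follows; the direction encoding (+1/-1) and the interpretation of the offset as the distance slid beyond the first preset distance are assumptions based on the description, not the literal claims.

```python
SNAP_THRESHOLD = 7  # "first preset distance" (assumed example value, pixels)

def adjustable_target(point_pos, current_pos, direction):
    """Adjustable-state sketch: the marker has slid past the snapped point by
    more than the threshold. While the extra offset stays within the threshold,
    the target trails the point by that offset; once the offset itself exceeds
    the threshold, the state falls back to idle and the target follows the
    finger. `direction` is +1 for a rightward slide, -1 for leftward."""
    offset = abs(current_pos - point_pos) - SNAP_THRESHOLD
    if offset <= SNAP_THRESHOLD:
        return point_pos + direction * offset   # adjustable state
    return current_pos                          # idle state
```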
According to a second aspect of the embodiments of the present disclosure, there is provided a video processing apparatus including:
a current position determining module configured to determine, in response to a sliding operation on at least one operable identifier of a target object to be processed on a video timeline, the current position of the operable identifier on the video timeline; the operable identifier is used to adjust the display endpoint position of the target object on the video timeline;
an identification point position acquisition module configured to perform acquisition of an identification point position of at least one identification point in the video timeline;
a current state determination module configured to perform determining a current state of the operable identifier according to a relationship between a current position of the operable identifier and a position of the identifier point;
a target position determination module configured to perform determining a target position of the operational marker on the video timeline based on the current state of the operational marker.
Optionally, the current state determining module includes:
an identification point acquisition unit configured to perform sequential calculation of a distance between the current position and the at least one identification point position, and obtain an identification point closest to the current position;
and the adsorption state determination unit is configured to determine that the current state of the operable identifier is an adsorption state when the distance between the identifier point position of the nearest identifier point and the current position is smaller than or equal to a first preset distance.
Optionally, the target position determination module includes:
a target position determination unit configured to perform, when the current state of the operable marker is the adsorption state, regarding the marker point position of the nearest marker point as a target position of the operable marker on the video timeline.
Optionally, the current state determining module includes:
an idle state determination unit configured to determine that the current state of the operable identifier is an idle state when the distance between the identifier point position of the nearest identifier point and the current position is greater than the first preset distance.
Optionally, the target position determination module includes:
a target position determining unit configured to perform, when the current state of the operable identifier is an idle state, regarding the current position of the operable identifier as a target position of the operable identifier on the video timeline.
Optionally, the current state determining module includes:
a current position determination unit configured to perform a sliding operation of the operable identifier in response to a user, and determine a current position of the operable identifier;
and the adsorption state determining unit is configured to keep the current state of the operable identifier as an adsorption state when the distance between the current position of the operable identifier and the position of the identifier point of the nearest identifier point is less than or equal to the first preset distance.
Optionally, the current state determining module includes:
an adjustable state determination unit configured to determine that the current state of the operable identifier is an adjustable state when the distance between the current position of the operable identifier and the identifier point position of the nearest identifier point is greater than the first preset distance.
Optionally, the target position determination module includes:
an offset distance determining unit configured to determine an offset distance of the operable identifier when the current state of the operable identifier is the adjustable state, where the offset distance is the distance by which the current position of the operable identifier has moved beyond the first preset distance from the closest identification point;
a target position determining unit configured to take, when the offset distance is less than or equal to the first preset distance, the position obtained by offsetting the position of the closest identification point by the offset distance in the sliding direction of the sliding operation as the target position of the operable identifier on the video timeline.
Optionally, the current state determining module includes:
an idle state determination unit configured to perform, when the offset distance is greater than the first preset distance, determining that the current state of the operable identifier is an idle state.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing a computer program executable by the processor;
wherein the processor is configured to execute the computer program in the memory to implement any of the video processing methods of the first aspect described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing an executable computer program which, when executed by a processor, implements any one of the video processing methods of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer program instructions which, when executed by a processor, implement any of the video processing methods of the first aspect described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
as can be seen from the foregoing embodiments, the present disclosure may determine, in response to a sliding operation of at least one operable identifier of a target object to be processed on a video timeline, a current position of the operable identifier on the video timeline; the operational identification is used for adjusting the display endpoint position of the target object on the video time axis; then, acquiring the position of an identification point of at least one identification point in the video time axis; then, determining the current state of the operable identifier according to the relation between the current position of the operable identifier and the position of the identifier point; finally, the target position of the operable identifier on the video time axis is determined based on the current state of the operable identifier. In this way, in this embodiment, the target position of the operable identifier on the video timeline is determined by the current state of the operable identifier, and the operable identifier of the target object can be aligned with the identifier point on the video timeline; compared with the alignment of a target object dragged by a user, the alignment is more accurate, and the experience of editing and processing the video is promoted.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow diagram illustrating a video processing method according to an example embodiment.
FIG. 2 is a state flow diagram according to an exemplary embodiment.
FIG. 3 is a block diagram illustrating a combination of modules implementing effect-bar dragging according to an exemplary embodiment.
FIG. 4 is a block diagram illustrating another combination of modules implementing effect-bar dragging according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a video processing apparatus according to an example embodiment.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the present disclosure; rather, they are merely examples of devices consistent with certain aspects of the present disclosure as recited in the claims below.
In order to solve the above technical problem, an embodiment of the present disclosure provides a video processing method. Fig. 1 is a flowchart illustrating a video processing method according to an exemplary embodiment; the method is applied to an electronic device, which may be a mobile terminal, a server, or the like. Referring to fig. 1, the video processing method includes steps 11 to 14:
in step 11, in response to a sliding operation of at least one operable identifier of a target object to be processed on a video time axis, determining a current position of the operable identifier on the video time axis; the operational identification is used for adjusting the display endpoint position of the target object on the video time axis.
In this embodiment, the electronic device may obtain the video file from local storage, a cloud, or a third-party platform. When a user wants to attach a target object to a video, the user can trigger a key representing editing of the video, such as a configuration key, an add-text key, or an add-sticker key. Upon detecting that the trigger key is triggered, the electronic device may put the video into an editing state. Putting the video into the editing state may include setting an editing layer on the video, setting an editing box on the video, adjusting the video itself to an editable state, or the like. It will be appreciated that in the editing state, the user can perform various processes on the video, such as clipping, applying filters, adjusting brightness, and attaching text or stickers.
In this embodiment, when it detects that the video is in the editing state, the electronic device may continue to detect trigger operations of the user, such as inputting text or selecting a sticker, and may determine the selected object, hereinafter referred to as the target object, according to the trigger operation. Taking input text as an example, after acquiring the target text, the electronic device may obtain a default duration, such as 2 to 5 seconds, for which the target text is attached to the video.
For example, during video playback, when the electronic device detects that the user has put the video into the editing state, it may determine the start position of the target object on the video timeline, that is, the head display endpoint of the target object. After obtaining the target object, the electronic device can obtain its default duration and, from the default duration and the start position, determine the end position of the target object on the video timeline, that is, the tail display endpoint of the target object. In other words, the electronic device can determine the positions of both display endpoints of the target object on the video timeline.
It should be noted that, because the moment at which the user triggers editing may not be aligned with the head or tail of a segment in the video, the head and tail display endpoints of the target object may not be aligned with the head and tail of the segment. The head and tail display endpoints can therefore be understood as the initial, coarse positions of the target object on the video timeline.
In this embodiment, after determining the positions of the display endpoints of the target object on the video timeline, an operable identifier for adjusting the display endpoint position of the target object may be generated on the video timeline. For example, the operable identifier at the head display endpoint adjusts the position of the head display endpoint of the target object on the video timeline, and the operable identifier at the tail display endpoint adjusts the position of the tail display endpoint.
In this embodiment, after generating the operable identifier, the user may select the operable identifier to be adjusted, and slide the operable identifier to the left or right to adjust the target positions of the head display endpoint and the tail display endpoint of the target object.
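The coarse initialization of the two display endpoints described above might look like the following sketch; the 3-second default duration and the clamping to the video length are assumptions for illustration.

```python
DEFAULT_DURATION = 3.0  # assumed default attachment duration, seconds

def initial_endpoints(playhead, video_length, duration=DEFAULT_DURATION):
    """Coarse positioning: the head display endpoint is the playhead position
    when editing starts; the tail endpoint follows from the default duration,
    clamped so it never passes the end of the video."""
    head = playhead
    tail = min(playhead + duration, video_length)
    return head, tail
```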
In step 12, an identification point position of at least one identification point in the video timeline is obtained.
In this embodiment, after the electronic device obtains the video, it may divide the video into segments in a preset manner and set identification points at the start and end of each segment on the video timeline, thereby obtaining the identification point position of at least one identification point on the video timeline. The preset manner may include division by time interval, division by content, and the like, and is not limited herein. In practical applications, the video may also be divided into segments in advance, with the segment identification points already set on the video timeline, so that the electronic device can directly acquire the identification point positions of at least one identification point; this shortens the time needed to acquire the identification point positions.
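Deriving identification points from pre-divided segments, as step 12 describes, could be sketched like this (segment boundaries in seconds; duplicate boundaries collapsed):

```python
def identification_points(segments):
    """Each (start, end) segment contributes both boundaries as identification
    points on the timeline; shared boundaries between adjacent segments are
    collapsed into a single point."""
    points = set()
    for start, end in segments:
        points.add(start)
        points.add(end)
    return sorted(points)
```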
In step 13, the current state of the operable identifier is determined according to the relationship between the current position of the operable identifier and the position of the identifier point, and in step 14, the target position of the operable identifier on the video timeline is determined based on the current state of the operable identifier.
In this embodiment, the attachment states of the operable identifier of the target object may be stored in the electronic device in advance. The attachment states may be adjusted according to the specific scenario, and are not limited herein. Take the case where the operable identifier has 3 attachment states: an idle state, an adsorption state, and an adjustable state. The idle state is a state in which the identifier is not attached to any identification point on the video timeline; the adsorption state is a state in which the identifier is attached to one of the identification points on the video timeline; and the adjustable state is a state in which the identifier is attached to one of the identification points while its target position can still be adjusted.
The flow diagram of the 3 attachment states of the operational indicia is shown in fig. 2. Referring to fig. 2, the circulation process includes:
(1) The initial state of the operable identifier is the idle state. When the user slides at least one operable identifier, each operable identifier gradually approaches an identification point in the sliding direction on the video timeline. While the distance between the operable identifier and the closest identification point is greater than a first preset distance (adjustable; for example, 7 pixels), the operable identifier remains in the idle state, and its current position is used as its target position on the video timeline. When the distance between the operable identifier and the closest identification point becomes less than or equal to the first preset distance, the current state of the operable identifier is switched from the idle state to the adsorption state.
(2) The current state of the operable identifier is the adsorption state, and the electronic device may update the current position of the operable identifier to the position of the closest identification point.
When the user continues to slide, if the distance between the current position of the operable identifier and the closest identification point is less than or equal to the first preset distance, the operable identifier remains in the adsorption state; this prevents the target position of the operable identifier from changing because of jitter in the user's hand. When the distance between the current position of the operable identifier and the closest identification point becomes greater than the first preset distance, the current state of the operable identifier is switched from the adsorption state to the adjustable state.
It should be noted that, in the adsorption state, the target position of the operable identifier does not change with its current position; it is always the position of the closest identification point, which achieves an effect similar to adsorbing the operable identifier onto that point.
(3) The current state of the operable identifier is the adjustable state. At this time, the electronic device may adjust the target position of the operable identifier of the target object; that is, the target position of the operable identifier is the position of the closest identification point plus the offset distance. The offset distance is the distance the operable identifier has continued to slide, without changing sliding direction, after moving the first preset distance away from the closest identification point.
When the offset distance is greater than the first preset distance, the current state of the operable identifier is switched from the adjustable state to the idle state, and the current position of the operable identifier is used as its target position on the video timeline.
Based on the flow shown in fig. 2, when the current state of the operable identifier is the idle state, if the user lifts the finger, it can be determined that the operable identifier is not attached, and it remains in the idle state. When the current state is the adsorption state or the adjustable state, if the user lifts the finger, it can be determined that attachment is complete; at this moment, the operable identifier can be switched to the idle state to facilitate the next attachment, or kept in the adsorption state to avoid re-adsorption on the next attachment, or the current state can be locked and then switched to the idle state when the user performs a sliding operation again. Technical staff can configure how the current state of the operable identifier switches after the finger is lifted according to the specific scenario; provided the configuration does not conflict with the flow shown in fig. 2, the corresponding scheme falls within the protection scope of the disclosure.
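The three-state flow of fig. 2 can be summarized as a small state machine. The sketch below is a hedged reading of the description, not the patent's literal implementation: the state names, the clamping of a negative offset to zero, and the +1/-1 direction encoding are assumptions.

```python
from enum import Enum, auto

SNAP_THRESHOLD = 7  # "first preset distance" (assumed example value, pixels)

class State(Enum):
    IDLE = auto()        # not attached to any identification point
    ADSORBED = auto()    # snapped onto the nearest identification point
    ADJUSTABLE = auto()  # past the point; target = point + offset

def step(state, snapped_point, current_pos, points, direction):
    """One transition of the fig. 2 flow. Returns (new_state, snapped_point,
    target_pos); `direction` is +1 for a rightward slide, -1 for leftward."""
    if state == State.IDLE:
        nearest = min(points, key=lambda p: abs(p - current_pos))
        if abs(nearest - current_pos) <= SNAP_THRESHOLD:
            return State.ADSORBED, nearest, nearest      # (1) idle -> adsorbed
        return State.IDLE, None, current_pos
    # Adsorbed or adjustable: measure against the point we snapped to.
    d = abs(current_pos - snapped_point)
    if state == State.ADSORBED and d <= SNAP_THRESHOLD:
        return State.ADSORBED, snapped_point, snapped_point  # jitter guard
    offset = max(0, d - SNAP_THRESHOLD)  # distance slid beyond the threshold
    if offset <= SNAP_THRESHOLD:
        return (State.ADJUSTABLE, snapped_point,
                snapped_point + direction * offset)      # (2) -> adjustable
    return State.IDLE, None, current_pos                 # (3) -> idle
```

In this reading, a drag to pixel 110 from a point snapped at 100 (threshold 7) yields an adjustable target of 103, and a further drag to 120 releases the marker back to the idle state.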
In this embodiment, based on the flow chart shown in fig. 2, the electronic device may determine the current state of the operable identifier according to the relationship between the current position of the operable identifier and the position of the identifier point, and then, the electronic device may determine the target position of the operable identifier on the video timeline based on the current state of the operable identifier.
In this embodiment, the electronic device may calculate the distances between the current position of the operable identifier and the identifier point positions of the identifier points on the video timeline, and obtain the identifier point closest to the current position of the operable identifier. When the distance between the nearest identifier point and the current position is less than or equal to the first preset distance, the electronic device may determine that the current state of the operable identifier is the adsorption state. In practical applications, when the state switches to the adsorption state, the electronic device can remind the user through vibration, sound, or a color change of the target object. When the current state of the operable identifier is the adsorption state, the electronic device can take the identifier point position of the nearest identifier point as the target position of the operable identifier on the video timeline. When the distance between the nearest identifier point and the current position is greater than the first preset distance, the electronic device may determine that the current state of the operable identifier is the idle state; in this case, the electronic device may use the current position of the operable identifier as the target position of the operable identifier on the video timeline.
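The nearest-point lookup and the adsorption/idle decision described above can be sketched as follows. This is an illustrative sketch, not code from the disclosure: the function names and the concrete value of the first preset distance are assumptions.

```python
# Illustrative sketch of the state decision above; names and the concrete
# "first preset distance" value are assumptions, not from the disclosure.
SNAP_DISTANCE = 10.0  # the "first preset distance", in timeline units (assumed)

def nearest_point(current_pos, point_positions):
    """Return the identifier point position closest to the current position."""
    return min(point_positions, key=lambda p: abs(p - current_pos))

def current_state(current_pos, point_positions, snap=SNAP_DISTANCE):
    """Return (state, target_position): 'adsorption' snaps to the nearest
    point within the preset distance, otherwise the state is 'idle'."""
    p = nearest_point(current_pos, point_positions)
    if abs(p - current_pos) <= snap:
        return ("adsorption", p)     # target is the nearest identifier point
    return ("idle", current_pos)     # target is the current position itself
```

For example, with identifier points at 50, 100, and 200, a current position of 102 would snap to 100, while a current position of 150 would stay free at 150.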
In this embodiment, after the current state of the operable identifier becomes the adsorption state, the electronic device may continue to detect the user's sliding operation on the operable identifier and obtain the current position of the operable identifier. When it is detected that the distance between the current position of the operable identifier and the identifier point position of the nearest identifier point is less than or equal to the first preset distance, the current state of the operable identifier is kept as the adsorption state. When it is detected that this distance is greater than the first preset distance, the offset distance beyond the first preset distance is obtained; the offset distance is the offset caused by the operable identifier continuing to move after sliding the first preset distance.
When the offset distance is less than or equal to the first preset distance, the electronic device may determine that the current state of the operable identifier is the adjustable state and superimpose the offset distance on the target position of the operable identifier; that is, the position obtained by offsetting the identifier point position of the nearest identifier point by the offset distance in the sliding direction of the sliding operation is used as the target position of the operable identifier on the video timeline. In this way, the operable identifier can be finely adjusted while alignment accuracy is preserved. When the offset distance is greater than the first preset distance, the electronic device may determine that the current state of the operable identifier is the idle state; in this case, the electronic device may use the current position of the operable identifier as the target position of the operable identifier on the video timeline.
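The post-adsorption handling just described can be sketched in the same hedged style; the names and the threshold value are assumptions, and snap_point stands for the identifier point to which the operable identifier was adsorbed.

```python
# Hedged sketch of the post-adsorption logic above; names and the
# "first preset distance" value are assumptions, not from the disclosure.
SNAP_DISTANCE = 10.0  # the "first preset distance" (assumed value)

def target_after_adsorption(current_pos, snap_point, snap=SNAP_DISTANCE):
    """Return (state, target_position) while the user keeps sliding."""
    distance = abs(current_pos - snap_point)
    if distance <= snap:                       # still within the preset distance
        return ("adsorption", snap_point)
    offset = distance - snap                   # travel beyond the preset distance
    if offset <= snap:                         # fine adjustment near the point
        direction = 1 if current_pos > snap_point else -1
        return ("adjustable", snap_point + direction * offset)
    return ("idle", current_pos)               # moved too far: free movement
```

With the assumed threshold of 10, sliding to position 112 after adsorbing at 100 yields the adjustable state with target 102, while sliding to 125 releases the identifier into the idle state.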
In this embodiment, when the current state of the operable identifier is the adsorption state or the adjustable state, the current position of the operable identifier may be determined as the attachment position of the target object, and the target object is attached to the video. In the subsequent process of playing the target video, the target object is displayed when playback reaches its head display endpoint and stops being displayed when playback reaches its tail display endpoint.
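The head/tail display-endpoint behaviour during playback amounts to a simple interval test; this minimal sketch uses assumed parameter names and a half-open interval convention.

```python
# Minimal sketch of the playback check above; the half-open interval and
# the parameter names are assumptions.
def should_display(play_time, head_endpoint, tail_endpoint):
    """True while playback is between the object's display endpoints."""
    return head_endpoint <= play_time < tail_endpoint
```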
In summary, the embodiment of the present disclosure may determine, in response to a sliding operation on at least one operable identifier of a target object to be processed on a video timeline, the current position of the operable identifier on the video timeline, where the operable identifier is used for adjusting a display endpoint position of the target object on the video timeline; then acquire the identifier point position of at least one identifier point in the video timeline; then determine the current state of the operable identifier according to the relationship between the current position of the operable identifier and the identifier point position; and finally determine the target position of the operable identifier on the video timeline based on the current state of the operable identifier. In this way, the target position of the operable identifier on the video timeline is determined by the current state of the operable identifier, and the operable identifier of the target object can be aligned with an identifier point on the video timeline. Compared with alignment that relies solely on the user dragging the target object, this alignment is more accurate, which improves the video editing experience.
The video processing method is described below with reference to specific scenarios:
example 1
When the electronic device runs the Android operating system, a sliding operation on the operable identifier is passed as a gesture from the gesture component BaseRangeView to the time track component EditorTimeLineView. The EditorTimeLineView converts the received gesture into a corresponding timeline time and passes it to the video preview component VideoDecorationTimeLinePresenter, which is responsible for the page data. The VideoDecorationTimeLinePresenter completes the seek operation for the corresponding video preview. The BaseRangeView, EditorTimeLineView, and VideoDecorationTimeLinePresenter are associated with each other through a first interface TimeLineListener and a second interface RangeViewListener; the specific relationship is shown in FIG. 3.
In this example, the gesture processing module TimeLineGestureProcessor may serve as the processing module for the sliding-gesture adsorption logic and, combined with the time track module EditorTimeLineView and the operable-identifier drag module BaseRangeView, jointly complete object attachment; the specific relationship is shown in fig. 4. The implementation logic inside the TimeLineGestureProcessor is consistent with the logic of the flow chart shown in fig. 2, and different processing modes are selected through state transitions.
In this example, the initialization of the TimeLineGestureProcessor is completed by the EditorTimeLineView, and the TimeLineGestureProcessor is notified to update its data set of adsorbable points every time the set of operable identifiers attached to the EditorTimeLineView changes. The input of the TimeLineGestureProcessor is the drag gesture data from the BaseRangeView. Before the BaseRangeView calls the gesture data back to the EditorTimeLineView, the TimeLineGestureProcessor processes the data through a processMovement method; if the data conforms to the adsorption logic, the data is modified to complete the adsorption, a vibration prompt is generated through the Vibrator class provided by the Android operating system, and the updated data is then returned to the BaseRangeView. The rest of the flow remains consistent with the existing logic.
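The real TimeLineGestureProcessor is an Android component; as a language-neutral illustration, a processMovement-style handler could be sketched as below. Every name, the callback, and the threshold value here are assumptions, not the disclosure's actual code.

```python
# Hedged sketch of a processMovement-style handler; the real component is
# Android code, and every name and value here is an assumption.
class TimeLineGestureProcessor:
    def __init__(self, snap_points, snap_distance=10.0, on_snap=None):
        # updated whenever the attached operable identifiers change
        self.snap_points = list(snap_points)
        self.snap_distance = snap_distance
        self.on_snap = on_snap  # e.g. trigger a vibration prompt

    def process_movement(self, position):
        """Return the (possibly adsorbed) position for one drag update."""
        if not self.snap_points:
            return position  # nothing to adsorb to: pass the data through
        nearest = min(self.snap_points, key=lambda p: abs(p - position))
        if abs(nearest - position) <= self.snap_distance:
            if self.on_snap:
                self.on_snap(nearest)  # feedback on adsorption
            return nearest             # change the data to complete adsorption
        return position                # no adsorption: return data unchanged
```

Keeping the processor as a pure data-in/data-out module, as sketched here, is what lets it sit between BaseRangeView and EditorTimeLineView without modifying either.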
In this example, the TimeLineGestureProcessor is applied to the logic flow shown in fig. 3 as a data processing module. It is minimally intrusive to the existing logic, keeps module coupling low, and can be conveniently reused in other scenes.
Example 2
When the electronic device runs the iOS operating system, the electronic device starts from the gesture, traverses all texts and subtitles, and finds the closest adsorption point, recorded as lastAdsorbedViewTime.
Then, the sliding position of the finger in the view can be obtained through the locationInView: method of the gesture recognizer, and the position targetViewTime that the operable identifier should reach is obtained by maintaining the distance offset between the sliding position of the finger and the operable identifier.
Then, the start and end times of each visible text and subtitle are obtained through the collection view's indexPathsForVisibleItems. The adsorption logic is applied to adsorbable points (i.e., identifier points) that are no more than 7 pt (points) away from the operable identifier and consistent with the moving direction. When there are multiple such adsorbable points, the closest one is selected. If the selected adsorbable point is the start point or end point of the current operation interval, no adsorption is performed.
After the adsorption point of the operable identifier is determined, DBL_EPSILON is added to or subtracted from the position of the adsorption point according to the adsorption direction, and the result is assigned to targetViewTime; this avoids the situation where two items are displayed at the same time in the frame at the adsorption point. After the operable identifier is adsorbed, the adsorption distance is added to or subtracted from the offset value according to the direction, and the position of the adsorption point is recorded as lastAdsorbedViewTime. Vibration feedback is then sent through the UIImpactFeedbackGenerator, with the medium style selected.
In this example, after the operable identifier slides and is adsorbed, finger movement within 7 pt around the adsorption point is ignored, targetViewTime is assigned the position of the adsorption point, and the identifier is in the adsorption state.
In this example, after the movement around the adsorption point exceeds 7 pt, no other adsorption is performed within the next 7 pt of movement. The value of offset is updated by adding or subtracting 7 pt according to the direction, and the identifier switches to the adjustable state.
In this example, when the operable identifier moves more than a further 7 pt away from the adsorption point, lastAdsorbedViewTime is set to -1, the adsorption point is determined again, and the operable identifier enters the free moving state (i.e., the idle state).
In this example, when the gesture ends, offset is set to 0 and lastAdsorbedViewTime is set to -1.
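The state rules above can be consolidated into one gesture-update function. This is a hedged consolidation, not the disclosure's code: variable names are assumptions, the direction-consistency and interval-endpoint checks described earlier are omitted for brevity, and the epsilon nudge mirrors the DBL_EPSILON adjustment.

```python
# Hedged Python consolidation of the iOS-side state flow above; all names
# are assumptions, and some of the checks in the text are omitted.
import sys

SNAP_PT = 7.0  # the 7 pt threshold from the text

def ios_snap_update(target_view_time, snap_points, last_adsorbed, direction):
    """One gesture update; direction is +1/-1 (sliding direction).
    Returns (target, last_adsorbed, state)."""
    if last_adsorbed is not None:
        moved = abs(target_view_time - last_adsorbed)
        if moved <= SNAP_PT:                  # ignore movement around the point
            return (last_adsorbed, last_adsorbed, "adsorption")
        if moved <= 2 * SNAP_PT:              # next 7 pt: no other adsorption
            return (target_view_time, last_adsorbed, "adjustable")
        return (target_view_time, None, "idle")  # re-determine adsorption later
    candidates = [p for p in snap_points if abs(p - target_view_time) <= SNAP_PT]
    if not candidates:
        return (target_view_time, None, "idle")
    point = min(candidates, key=lambda p: abs(p - target_view_time))
    # nudge in the adsorption direction (mirrors the DBL_EPSILON adjustment)
    return (point + direction * sys.float_info.epsilon, point, "adsorption")
```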
Fig. 5 is a block diagram illustrating a video processing apparatus according to an exemplary embodiment. The apparatus is applied to an electronic device, which may be a mobile terminal, a server, or the like. Referring to fig. 5, the video processing apparatus includes:
a current position determining module 51 configured to perform, in response to a sliding operation on at least one operable identifier of a target object to be processed on the video timeline, determining the current position of the operable identifier on the video timeline; the operable identifier is used for adjusting a display endpoint position of the target object on the video timeline;
an identification point position acquisition module 52 configured to perform acquisition of an identification point position of at least one identification point in the video timeline;
a current state determination module 53 configured to perform determining a current state of the operable identifier according to a relationship between a current position of the operable identifier and the identifier point position;
a target position determination module 54 configured to perform determining the target position of the operable identifier on the video timeline based on the current state of the operable identifier.
In one embodiment, the current state determination module comprises:
an identification point acquisition unit configured to perform sequential calculation of a distance between the current position and the at least one identification point position, and obtain an identification point closest to the current position;
and the adsorption state determination unit is configured to determine that the current state of the operable identifier is an adsorption state when the distance between the identifier point position of the nearest identifier point and the current position is smaller than or equal to a first preset distance.
In one embodiment, the target location determination module comprises:
a target position determination unit configured to perform, when the current state of the operable identifier is the adsorption state, taking the identifier point position of the nearest identifier point as the target position of the operable identifier on the video timeline.
In one embodiment, the current state determination module comprises:
an idle state determination unit configured to determine that the current state of the operable identifier is an idle state when the distance between the identifier point position of the nearest identifier point and the current position is greater than the first preset distance.
In one embodiment, the target location determination module comprises:
a target position determining unit configured to perform, when the current state of the operable identifier is an idle state, regarding the current position of the operable identifier as a target position of the operable identifier on the video timeline.
In one embodiment, the current state determination module comprises:
a current position determination unit configured to perform a sliding operation of the operable identifier in response to a user, and determine a current position of the operable identifier;
and the adsorption state determining unit is configured to keep the current state of the operable identifier as an adsorption state when the distance between the current position of the operable identifier and the position of the identifier point of the nearest identifier point is less than or equal to the first preset distance.
In one embodiment, the current state determination module comprises:
an adjustable state determination unit configured to determine that the current state of the operable identifier is an adjustable state when the distance between the current position of the operable identifier and the identifier point position of the nearest identifier point is greater than the first preset distance.
In one embodiment, the target location determination module comprises:
an offset distance determining unit configured to determine an offset distance of the operable identifier when the current state of the operable identifier is an adjustable state, where the offset distance is a difference between the current position of the operable identifier and the first preset distance;
a target position determination unit configured to perform, when the offset distance is less than or equal to the first preset distance, taking the position obtained by offsetting the identifier point position of the nearest identifier point in the sliding direction of the sliding operation by the offset distance as the target position of the operable identifier on the video timeline.
In one embodiment, the current state determination module comprises:
an idle state determination unit configured to perform, when the offset distance is greater than the first preset distance, determining that the current state of the operable identifier is an idle state.
It can be understood that the apparatus provided in the embodiment of the present disclosure corresponds to the method shown in fig. 1, and specific contents may refer to the contents of each embodiment of the method, which are not described herein again.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment. Referring to fig. 6, electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an interface for input/output (I/O) 612, a sensor component 614, a communication component 616, and an image capture component 618.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply component 606 provides power to the various components of electronic device 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 600 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessment of various aspects of the electronic device 600. For example, the sensor component 614 may detect an open/closed state of the electronic device 600 and the relative positioning of components, such as the display and keypad of the electronic device 600; the sensor component 614 may also detect a change in the position of the electronic device 600 or of a component of the electronic device 600, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and a change in the temperature of the electronic device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the electronic device 600 and other devices in a wired or wireless manner. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an embodiment of the present disclosure, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the steps of the above-described video processing method.
In an embodiment of the present disclosure, there is also provided a non-transitory computer-readable storage medium, such as the memory 604, comprising instructions that are executable by the processor 620 of the electronic device 600 to perform the steps of the video processing method described above.
In an embodiment of the present disclosure, a computer program product is also provided, which includes computer program instructions that, when executed by a processor, enable the electronic device to perform the steps of the above-mentioned video processing method.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the device/server/storage medium embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the embodiments discussed above that follow in general the principles of the disclosure and include such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video processing method, comprising:
in response to a sliding operation on at least one operable identifier of a target object to be processed on a video time axis, determining the current position of the operable identifier on the video time axis; wherein the operable identifier is used for adjusting a display endpoint position of the target object on the video time axis;
acquiring the position of an identification point of at least one identification point in the video time axis;
determining the current state of the operable identifier according to the relation between the current position of the operable identifier and the position of the identifier point;
and determining the target position of the operable identifier on the video time axis based on the current state of the operable identifier.
2. The method of claim 1, wherein determining the current state of the operational identifier according to the relationship between the current location of the operational identifier and the location of the identifier point comprises:
sequentially calculating the distance between the current position and the position of the at least one identification point to obtain the identification point closest to the current position;
and when the distance between the position of the identification point of the nearest identification point and the current position is less than or equal to a first preset distance, determining that the current state of the operable identification is an adsorption state.
3. The method of claim 2, wherein determining the target position of the operable identifier on the video timeline based on the current state of the operable identifier comprises:
and when the current state of the operable identifier is an adsorption state, taking the identifier point position of the nearest identifier point as the target position of the operable identifier on the video timeline.
4. The method of claim 2, wherein after obtaining the identification point closest to the current location, the method further comprises:
and when the distance between the identifier point position of the nearest identifier point and the current position is greater than the first preset distance, determining that the current state of the operable identifier is an idle state.
5. The method of claim 4, wherein determining the target position of the operable identifier on the video timeline based on the current state of the operable identifier comprises:
and when the current state of the operable identifier is an idle state, taking the current position of the operable identifier as the target position of the operable identifier on the video timeline.
6. The method of claim 3, wherein after the identifier point position of the nearest identifier point is taken as the target position of the operable identifier on the video timeline, the method further comprises:
determining the current position of the operable identifier in response to the sliding operation of the operable identifier by the user;
and when the distance between the current position of the operable identifier and the position of the identifier point of the nearest identifier point is less than or equal to the first preset distance, keeping the current state of the operable identifier as an adsorption state.
7. A video processing apparatus, comprising:
a current position determining module configured to perform, in response to a sliding operation on at least one operable identifier of a target object to be processed on a video time axis, determining the current position of the operable identifier on the video time axis; the operable identifier is used for adjusting a display endpoint position of the target object on the video time axis;
an identification point position acquisition module configured to perform acquisition of an identification point position of at least one identification point in the video timeline;
a current state determination module configured to perform determining a current state of the operable identifier according to a relationship between a current position of the operable identifier and a position of the identifier point;
a target position determination module configured to perform determining the target position of the operable identifier on the video timeline based on the current state of the operable identifier.
8. An electronic device, comprising:
a processor;
a memory for storing a computer program executable by the processor;
wherein the processor is configured to execute the computer program in the memory to implement the video processing method of any of claims 1 to 6.
9. A computer-readable storage medium, wherein, when executable computer program instructions in the storage medium are executed by a processor, the video processing method of any one of claims 1 to 6 is implemented.
10. A computer program product comprising computer program instructions, characterized in that the computer program instructions, when executed by a processor, implement the video processing method of any of claims 1 to 6.
CN202011611682.1A 2020-12-30 2020-12-30 Video processing method and device, electronic equipment and storage medium Pending CN112822543A (en)

Publications (1)

Publication Number Publication Date
CN112822543A true CN112822543A (en) 2021-05-18

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113473200A (en) * 2021-05-25 2021-10-01 北京达佳互联信息技术有限公司 Multimedia resource processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060150072A1 (en) * 2005-01-05 2006-07-06 Salvucci Keith D Composite audio waveforms with precision alignment guides
US20160139871A1 (en) * 2014-11-13 2016-05-19 Here Global B.V. Method and apparatus for associating an audio soundtrack with one or more video clips
CN110769309A (en) * 2019-11-04 2020-02-07 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and medium for presenting music points
CN111629252A (en) * 2020-06-10 2020-09-04 北京字节跳动网络技术有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN111901626A (en) * 2020-08-05 2020-11-06 腾讯科技(深圳)有限公司 Background audio determining method, video editing method, device and computer equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113473200A (en) * 2021-05-25 2021-10-01 北京达佳互联信息技术有限公司 Multimedia resource processing method and device, electronic equipment and storage medium
CN113473200B (en) * 2021-05-25 2023-09-26 北京达佳互联信息技术有限公司 Multimedia resource processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20170178289A1 (en) Method, device and computer-readable storage medium for video display
EP3156767B1 (en) Method and device for navigating and method and device for generating a navigation video
CN109151537B (en) Video processing method and device, electronic equipment and storage medium
US20170304735A1 (en) Method and Apparatus for Performing Live Broadcast on Game
KR101668352B1 (en) Photographing control method, apparatus, terminal, program and recording medium
CN106559712B (en) Video playing processing method and device and terminal equipment
US20170344192A1 (en) Method and device for playing live videos
CN108966025B (en) Video playing method and device and computer readable storage medium
EP3163884A1 (en) Image acquiring method and apparatus, computer program and recording medium
US11770497B2 (en) Method and device for processing video, and storage medium
US11545188B2 (en) Video processing method, video playing method, devices and storage medium
CN107562349B (en) Method and device for executing processing
US20170054906A1 (en) Method and device for generating a panorama
CN113206948B (en) Image effect previewing method and device, electronic equipment and storage medium
CN106502532B (en) Instruction processing method and device based on time axis
CN110719530A (en) Video playing method and device, electronic equipment and storage medium
CN104035674A (en) Picture displaying method and device
CN106095300B (en) Method and device for adjusting playing progress
CN112929561A (en) Multimedia data processing method and device, electronic equipment and storage medium
CN104850643B (en) Picture comparison method and device
CN113613082A (en) Video playing method and device, electronic equipment and storage medium
CN112822543A (en) Video processing method and device, electronic equipment and storage medium
CN112764636A (en) Video processing method, video processing device, electronic equipment and computer-readable storage medium
CN110399082B (en) Terminal attribute control method and device, electronic equipment and medium
CN112261453A (en) Method, device and storage medium for transmitting subtitle splicing map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210518