CN110719527A - Video processing method, electronic equipment and mobile terminal


Info

Publication number
CN110719527A
Authority
CN
China
Prior art keywords
target video
video
input
candidate
target
Prior art date
Legal status
Pending
Application number
CN201910944750.7A
Other languages
Chinese (zh)
Inventor
刘先亮
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910944750.7A
Publication of CN110719527A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/4408: Processing of video elementary streams involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network
    • H04N 21/4405: Processing of video elementary streams involving video stream decryption
    • H04N 21/44055: Processing of video elementary streams involving video stream decryption by partially decrypting, e.g. decrypting a video stream that has been partially encrypted

Abstract

The invention provides a video processing method, an electronic device, and a mobile terminal, and belongs to the field of communication technology. The first electronic device receives a first input from its user on at least one candidate video frame of a target video, marks N target video frames in response to the first input, acquires encryption information input by the user, encrypts the N target video frames based on the encryption information, and finally sends the encrypted target video to a second electronic device. Compared with sharing the video directly with some users and sharing a cut-down version with the others, the first electronic device only needs to encrypt some of the target video frames in the target video; based on the decryption information, the user of the second electronic device can then watch either all of the target video or only part of its content. This improves the convenience of the sharing operation to a certain extent and reduces the time the sharing operation takes.

Description

Video processing method, electronic equipment and mobile terminal
Technical Field
Embodiments of the present invention relate to the field of communication technology, and in particular to a video processing method, an electronic device, and a mobile terminal.
Background
As electronic devices are used more and more widely, a user may share videos with other users through an electronic device in order to interact with them. In a video to be shared, there may be content that only some of the other users should be able to view.
In the prior art, the video is usually shared directly with some of the other users, while for the remaining users that content is first cut out with third-party editing software and the trimmed video is then shared. The whole sharing process is cumbersome to operate and time-consuming.
Disclosure of Invention
The invention provides a video processing method, an electronic device, and a mobile terminal, and aims to solve the problem that video sharing is cumbersome to operate and time-consuming.
To solve this technical problem, the invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a video processing method, which is applied to a first electronic device, and the method may include:
receiving a first input from a user of the first electronic device on at least one candidate video frame of a target video;
in response to the first input, marking N target video frames;
acquiring encryption information input by the user;
encrypting the N target video frames based on the encryption information;
sending the encrypted target video to a second electronic device;
wherein N is a positive integer.
In a second aspect, an embodiment of the present invention provides a video processing method, which is applied to a second electronic device, and the method may include:
receiving a target video sent by a first electronic device, wherein the target video includes N target video frames, and the N target video frames are video frames encrypted based on encryption information input by a user of the first electronic device;
acquiring decryption information input by a user of the second electronic device;
and playing the target video when the decryption information matches the encryption information, and playing the N target video frames during playback of the target video.
In a third aspect, an embodiment of the present invention provides a first electronic device, where the first electronic device may include:
a first receiving module, configured to receive a first input from a user of the first electronic device on at least one candidate video frame of a target video;
a marking module, configured to mark N target video frames in response to the first input;
an acquisition module, configured to acquire encryption information input by the user;
an encryption module, configured to encrypt the N target video frames based on the encryption information;
and a sending module, configured to send the encrypted target video to a second electronic device;
wherein N is a positive integer.
In a fourth aspect, an embodiment of the present invention provides a second electronic device, where the second electronic device may include:
a receiving module, configured to receive a target video sent by a first electronic device, wherein the target video includes N target video frames, and the N target video frames are video frames encrypted based on encryption information input by a user of the first electronic device;
an acquisition module, configured to acquire decryption information input by a user of the second electronic device;
and a first playing module, configured to play the target video when the decryption information matches the encryption information, and to play the N target video frames during playback of the target video.
In a fifth aspect, an embodiment of the present invention provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video processing method according to any one of the first and second aspects.
In a sixth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the video processing method according to any one of the first and second aspects.
In the embodiment of the present invention, the first electronic device receives a first input from its user on at least one candidate video frame of a target video, marks N target video frames in response to the first input, acquires encryption information input by the user, encrypts the N target video frames based on the encryption information, and finally sends the encrypted target video to a second electronic device. Compared with sharing the video directly with some users and sharing a cut-down version with the others, the first electronic device only needs to encrypt some of the target video frames in the target video, and the user of the second electronic device can watch either all of the target video or only part of its content based on the decryption information. This improves the convenience of the sharing operation to a certain extent, reduces the time the sharing operation takes, and improves the security of video sharing.
Drawings
Fig. 1 is a flowchart of the steps of a video processing method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of the steps of a video processing method provided by an embodiment of the present invention;
Fig. 3-1 is a flowchart of the steps of a video processing method provided by an embodiment of the present invention;
Fig. 3-2 is a schematic diagram of a user interface provided by an embodiment of the present invention;
Fig. 3-3 is a schematic diagram of another user interface provided by an embodiment of the present invention;
Fig. 3-4 is a schematic diagram of still another user interface provided by an embodiment of the present invention;
Fig. 3-5 is a schematic diagram of still another user interface provided by an embodiment of the present invention;
Fig. 3-6 is a schematic diagram of still another user interface provided by an embodiment of the present invention;
Fig. 3-7 is a schematic diagram of still another user interface provided by an embodiment of the present invention;
Fig. 3-8 is a schematic diagram of still another user interface provided by an embodiment of the present invention;
Fig. 4 is a block diagram of a first electronic device provided by an embodiment of the present invention;
Fig. 5 is a block diagram of a second electronic device provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of steps of a video processing method provided by an embodiment of the present invention, where the method may be applied to a first electronic device, and as shown in fig. 1, the method may include:
step 101, receiving a first input of at least one candidate video frame of a target video from a first electronic device user.
In the embodiment of the present invention, the target video may be a video that needs to be shared. It may be a video downloaded by the user from a network, or a video shot by the user with the first electronic device. The candidate video frames may be all of the video frames contained in the target video, or only some of them; accordingly, the first electronic device may treat video frames in the target video as candidate video frames. The first input may be performed by the user of the first electronic device when the user wants to encrypt video frames contained in the target video. For example, the first input may be a click input or a long-press input, which is not limited in this embodiment of the present invention.
Step 102, in response to the first input, marking N target video frames.
In this embodiment of the present invention, N may be a positive integer, and a target video frame may be a video frame, selected from the candidate video frames, that needs to be encrypted in a subsequent step. Specifically, the user may send the first input to the first electronic device when wanting to encrypt certain candidate video frames; accordingly, the first electronic device may mark the target video frames from the at least one candidate video frame according to the first input, for example by taking the candidate video frames selected by the first input as the target video frames.
And step 103, acquiring the encryption information input by the user.
In this embodiment of the present invention, the encryption information may be information used to encrypt the target video frames. It may be image-type information that represents a shape, or numeric or character-type information, which is not limited in this embodiment of the present invention. Specifically, after determining the target video frames, the first electronic device may display an encryption-information input interface so that the user can enter the encryption information; accordingly, the first electronic device may detect and receive the user's operations on that input interface to obtain the encryption information.
And 104, encrypting the N target video frames based on the encryption information.
In the embodiment of the present invention, the first electronic device may encrypt the image data of the target video frames with the encryption information, so that the encrypted target video carries both the encryption information and the encrypted target video frames, and the target video frames are subsequently played only after successful decryption. The user of the second electronic device can therefore watch these frames only after completing decryption with the corresponding decryption information, which further improves the security of sharing to a certain extent.
Meanwhile, compared with encrypting the whole video directly, the embodiment of the present invention encrypts only the video frames that the user actually wants to encrypt, based on the selection operation of the first electronic device user on the candidate video frames, which makes the encryption operation more flexible. Moreover, since only some of the video frames are encrypted, after sharing, the user of the second terminal can watch the unencrypted part of the video without performing any decryption operation, which ensures playback efficiency to a certain extent.
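As an illustration of the selective, per-frame encryption described above, the following minimal sketch (not taken from this description; the in-memory frame representation, the key derivation, and the use of the third-party `cryptography` package's Fernet cipher are all assumptions) encrypts only the marked frames, keyed by the user's encryption input, and leaves the remaining frames untouched:

```python
# Illustrative sketch only: selectively encrypt the marked frames of a video,
# keyed by the user's encryption input; unmarked frames stay playable as-is.
import base64
import hashlib
from cryptography.fernet import Fernet  # third-party 'cryptography' package, assumed available

def derive_key(encryption_info: str) -> bytes:
    """Turn the user's encryption input (pattern, digits, characters, ...) into a Fernet key."""
    digest = hashlib.sha256(encryption_info.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest)

def encrypt_marked_frames(frames, marked, encryption_info):
    """Return a copy of `frames` (a list of bytes) where only the indices in `marked` are encrypted."""
    cipher = Fernet(derive_key(encryption_info))
    return [cipher.encrypt(f) if i in marked else f for i, f in enumerate(frames)]
```

A real mobile implementation would operate on the encoded bitstream inside the video container rather than on raw in-memory frames, but the selective-encryption idea is the same.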
And 105, sending the encrypted target video to second electronic equipment.
In the embodiment of the present invention, the second electronic device may be an electronic device that has a friend relationship with the first electronic device; for example, the friend relationship may be established through third-party social software. Specifically, when sending the encrypted target video to the second electronic device, the first electronic device may first display the identifiers of all friends that have a friend relationship with it, then take the electronic device corresponding to the identifier selected by the user as the second electronic device, and finally send the encrypted target video to the second electronic device to complete the sharing.
In summary, in the video processing method provided by this embodiment of the present invention, the first electronic device receives a first input from its user on at least one candidate video frame of the target video, marks N target video frames in response to the first input, acquires encryption information input by the user, encrypts the N target video frames based on the encryption information, and finally sends the encrypted target video to the second electronic device. Compared with sharing the video directly with some users and sharing a cut-down version with the others, the first electronic device only needs to encrypt some of the target video frames in the target video, and the user of the second electronic device can then watch either all of the target video or only part of its content based on the decryption information. This improves the convenience of the sharing operation to a certain extent and reduces the time it takes.
Fig. 2 is a flowchart of steps of a video processing method provided by an embodiment of the present invention, which may be applied to a second electronic device, as shown in fig. 2, and the method may include:
step 201, receiving a target video sent by a first electronic device, where the target video includes N target video frames, and the N target video frames are video frames encrypted based on encryption information input by a user of the first electronic device.
In this embodiment of the present invention, the target video frames may be some of the video frames in the target video, and the encryption information may be set by the user of the first electronic device.
Step 202, acquiring decryption information input by a user of the second electronic device.
In this embodiment of the present invention, the decryption information may be input by the user of the second electronic device when that user wants to play the encrypted video frames in the target video. Specifically, when the second electronic device receives a first play operation, it may conclude that its user wants to play the encrypted video frames and accordingly obtain the decryption information. For example, the first play operation may be a long-press operation on the target video.
And 203, playing the target video under the condition that the decryption information is matched with the encryption information, and playing the N target video frames in the playing process of the target video.
In the embodiment of the present invention, the target video may be played first so that the user of the second electronic device can start watching it as soon as possible, and at the same time it may be determined, based on the decryption information, whether the encrypted target video frames can be played for that user. Specifically, if the decryption information matches the encryption information, this indicates that the user of the second electronic device wants to watch the encrypted target video frames and is qualified to do so, so the target video frames may be played during playback of the target video.
In summary, in the video processing method provided by this embodiment of the present invention, the second electronic device receives a target video sent by the first electronic device, where the target video includes N target video frames encrypted based on encryption information input by the user of the first electronic device, acquires decryption information input by the user of the second electronic device, and, when the decryption information matches the encryption information, plays the target video and plays the N target video frames during playback. Thus the first electronic device only needs to encrypt some of the target video frames in the target video, and the user of the second electronic device can watch either all of the target video or only part of its content based on the decryption information, which improves the convenience of the sharing operation to a certain extent and reduces the time it takes.
Fig. 3-1 is a flowchart illustrating steps of a video processing method according to an embodiment of the present invention, and as shown in fig. 3-1, the method may include:
step 301, a first electronic device receives a first input of at least one candidate video frame of a target video from a user of the first electronic device.
In this step, before receiving the first input, the first electronic device may receive a second input from the user of the first electronic device and then, in response to the second input, display at least one candidate video frame contained in the target video. The second input may be a click input, a long-press input, or a similar operation.
Further, if the first electronic device receives the second input, it may be considered that the user of the first electronic device wants to encrypt some of the video frames, and accordingly at least one candidate video frame contained in the target video may be displayed. Displaying the candidate video frames lets the user see the content of the video frames more intuitively, which makes it easier to select among them and improves the convenience of selection.
Specifically, the first electronic device may display at least one candidate video frame contained in the target video through the following steps 3011 to 3013:
step 3011, the first electronic device extracts a video key frame included in the target video.
In this step, because a video often contains a large number of video frames, many of which have similar content, using all of them as candidate video frames would result in too many displayed candidates with high redundancy, which would interfere with the user's selection and reduce selection efficiency. The first electronic device therefore extracts the video key frames contained in the target video.
Step 3012, determining at least one candidate video frame based on the video key frame.
In this step, a video key frame, also called an I-frame, is a frame image that can be decoded and rendered independently, without reference to other frames, and it carries richer image information than other types of video frames.
Further, when determining candidate video frames based on the video key frames, the first electronic device may take all of the video key frames as candidate video frames, which gives the user of the first electronic device a richer choice. Alternatively, the first electronic device may select target video key frames from among the video key frames as the candidate video frames, where a target video key frame is one whose target object has changed compared with the target object in its adjacent video key frames; the target object includes at least one of a scene and a video object. A video object may be a person or an object appearing in the video, for example a puppy, a kitten, a desk, or a television.
Specifically, the first electronic device may run a preset scene recognition algorithm on each video key frame to determine the scene it corresponds to, and/or run a preset object recognition algorithm on each video key frame to determine the video objects it contains. It may then compare the scenes of the video key frames to find key frames where the scene changes, and/or compare the video objects to find key frames where a video object changes, for example a person moves, a new person appears, or a person disappears compared with other key frames. Finally, these video key frames are determined as the candidate video frames. Selecting key frames with scene and/or object changes as candidates further reduces the number of candidates displayed, and at the same time makes the image content of the candidate video frames more representative, which to a certain extent improves the representativeness of the target video frames selected from them.
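A minimal sketch of the filtering idea in step 3012 is shown below. The grayscale-histogram-difference heuristic, the numpy representation of the decoded key frames, and the threshold value are illustrative stand-ins for the "preset scene recognition algorithm" and "preset object recognition algorithm" mentioned above, which are not specified here:

```python
# Illustrative sketch: keep only key frames whose content changes noticeably
# relative to the previously kept frame, as a crude stand-in for scene/object
# change detection.
import numpy as np

def select_candidates(key_frames, threshold=0.3):
    """Return indices of key frames considered 'changed' enough to be candidates."""
    def hist(frame):
        h, _ = np.histogram(frame, bins=32, range=(0, 255))
        return h / max(h.sum(), 1)

    candidates, prev = [], None
    for i, frame in enumerate(key_frames):
        cur = hist(frame)
        # L1 distance between normalized histograms approximates "the scene changed"
        if prev is None or np.abs(cur - prev).sum() > threshold:
            candidates.append(i)
            prev = cur
    return candidates
```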
Step 3013, the first electronic device displays each candidate video frame according to the playing sequence of each candidate video frame in the target video.
In this step, the playing order of the candidate video frames in the target video may be given by their playing time points in the target video. For example, suppose there are three candidate video frames a, b, and c, where candidate video frame b is played at 3 minutes 20 seconds, candidate video frame c at 3 minutes 40 seconds, and candidate video frame a at 3 minutes 50 seconds; the first electronic device then displays them in the order b, c, a. Displaying the candidate video frames in playing order makes their sequence match the user's viewing habits and makes selection easier. Of course, the candidate video frames may also be arranged randomly, which is not limited in this embodiment of the present invention.
For example, fig. 3-2 is a schematic diagram of a user interface provided by an embodiment of the present invention; as shown in fig. 3-2, a target video 01 is displayed in the interface. Further, fig. 3-3 is a schematic diagram of another user interface provided by an embodiment of the present invention, which may be the interface displayed by the first electronic device after receiving the second input, for example after receiving the user's long-press input on the target video 01 in fig. 3-2; as shown in fig. 3-3, a plurality of candidate video frames 02 are displayed in the interface.
Of course, in another optional embodiment of the present invention, when displaying the candidate video frames, the first electronic device may also directly take all of the video frames contained in the target video as the candidate video frames. For example, if the target video contains 100 video frames, the first electronic device may determine all 100 of them as candidate video frames, which is not limited in this embodiment of the present invention. Using all of the video frames as candidates maximizes the user's choice and thus, to a certain extent, guarantees the selection effect.
Step 302, the first electronic device marks N target video frames in response to the first input.
In this step, the first electronic device may first determine a candidate video frame selected by the first input.
Specifically, the first input may include a sliding input on the first candidate video frame, and accordingly, the determining of the candidate video frame selected by the first input may be implemented by the following sub-step (1):
substep (1): and the first electronic equipment determines the first candidate video frame as the candidate video frame selected by the first input under the condition that the sliding direction of the sliding input is a preset direction.
In this step, the preset direction may be set in advance according to actual requirements; for example, it may be the horizontal direction from left to right, or from right to left, which is not limited in this embodiment of the present invention. The first candidate video frame may be the video frame over which the sliding input slides; accordingly, the first electronic device takes the first candidate video frame selected by the sliding input as a selected candidate video frame. In this way, the user of the first electronic device can select a candidate video frame simply by performing a sliding input, so the whole operation is convenient and easy to perform.
Further, in a practical application scenario, to make it easy for the user to see which candidate video frames have already been selected as target video frames, the display form of a candidate video frame may be set to a preset form after it is determined as a target video frame, so that the user can easily tell whether it has been selected, which improves selection efficiency. The preset form may be set according to the actual situation; for example, it may be a semi-transparent form, in which case the first electronic device sets the display form of the candidate video frame to semi-transparent. For example, fig. 3-4 is a schematic diagram of still another user interface provided by an embodiment of the present invention; as shown in fig. 3-4, the display form of the selected candidate video frames is changed to semi-transparent (shown in the figure by diagonal hatching). Further, when a viewing operation from the user is received, the candidate video frame indicated by the viewing operation may be displayed enlarged so that the user can examine it; the viewing operation may be a sliding operation on the candidate video frames, for example sliding left and right to make the first electronic device enlarge different candidate video frames. Accordingly, a clip consisting of the selected candidate video frame and its associated non-key frames may also be played in the interface.
Further, the first electronic device may also determine the candidate video frames selected by the first input through the following substeps (2) to (5):
substep (2): displaying a selection control on the at least one candidate video frame.
In this step, the at least one candidate video frame may form a candidate video frame sequence, and accordingly the first electronic device may display a selection control on that sequence. For example, the image sequence composed of the candidate video frames at the bottom of fig. 3-3 is a candidate video frame sequence. Specifically, since the user usually performs the first input at the position that needs to be selected, the first electronic device may display the selection control at the operation position of the first input to make the operation convenient. Further, the selection control may be a selection slider bar, in which case the user selects candidate video frames by moving the slider; of course, the selection control may also take other forms, for example a selection cursor consisting of a start cursor and an end cursor, which is not limited in this embodiment of the present invention.
Substep (3): receiving a third input to the selection control by the first electronic device user.
In this step, the third input may be a click input, a long-press input, a sliding input, or the like. The user of the first electronic device controls the selection control by performing the third input.
Substep (4): in response to the third input, a first position and a second position indicated by the third input are obtained.
Specifically, the first electronic device may determine the display position of the selection control as the first position, and determine the position where the selection control ends up after the third input is performed as the second position. For example, if the selection control is a selection slider bar, the user may select the slider and drag it to perform the third input on the control; by performing the third input, the selection range of the control can be adjusted quickly and candidate video frames can be selected continuously, which improves selection efficiency. By way of example, taking a selection slider bar as the selection control, fig. 3-5 is a schematic diagram of still another user interface provided by an embodiment of the present invention; as shown in fig. 3-5, the slider has been slid to the 7th displayed candidate video frame.
Substep (4): determining candidate video frames between the first location and the second location as candidate video frames selected by the first input.
In this step, the candidate video frames between the first position and the second position are the candidate video frames that the user wants to select, so the first electronic device may determine the candidate video frames between the first position and the second position as the candidate video frames selected by the first input. In this embodiment of the present invention, by displaying the selection control, determining the first position and the second position based on the user's third input on the control, and taking the frames between them as the candidate video frames selected by the first input, the user can select candidate video frames more precisely through the displayed control, which further improves the convenience of the selection operation.
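The following sketch illustrates how substeps (2) to (5) might map the selection control's two positions onto a contiguous range of candidate frames; the pixel-coordinate layout of the thumbnail strip is an assumption for illustration, not something specified by this description:

```python
# Illustrative sketch: translate the slider's start and end x-coordinates over a
# horizontally laid-out strip of thumbnails into candidate-frame indices.
def frames_in_selection(first_pos, second_pos, strip_left, thumb_width, num_candidates):
    """Return the range of candidate indices covered between the two slider positions."""
    lo, hi = sorted((first_pos, second_pos))
    start = max(0, int((lo - strip_left) // thumb_width))
    end = min(num_candidates - 1, int((hi - strip_left) // thumb_width))
    return range(start, end + 1)
```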
Then, after determining the candidate video frames selected by the first input, the first electronic device may determine those candidate video frames as the target video frames; or it may determine the selected candidate video frames together with the non-key frames associated with them as the target video frames. Specifically, when the candidate video frames are all of the video frames contained in the target video, the candidate video frames selected by the first input may be determined as the target video frames directly. When the candidate video frames were determined from the video key frames contained in the target video, the candidate video frames selected by the first input and their associated non-key frames may be determined as the target video frames.
Because the candidate video frames are determined from the key frame images of the video to be shared, the selected frames are themselves key frame images, and in a video each key frame image is usually followed by several non-key images whose content is similar to that of the preceding key frame. An associated non-key frame is therefore a non-key frame lying, in the video to be shared, between a selected key frame and the next adjacent key frame. Determining the candidate video frames selected by the first input together with their associated non-key frames as the target video frames means that images with similar content are encrypted together, which improves the encryption effect. Alternatively, only the selected candidate video frames themselves may be determined as the target video frames, which meets the user's needs while reducing the number of target video frames to be encrypted, and thus improves the encryption efficiency of the first electronic device.
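The expansion from selected key frames to their associated non-key frames can be sketched as follows, assuming frames are available as (index, is-key-frame) pairs in decode order (an assumed representation; real code would read this information from the demuxer):

```python
# Illustrative sketch: a selected key frame pulls in every following non-key
# frame up to (not including) the next key frame, i.e. the rest of its group.
def expand_to_targets(frames, selected_keys):
    """Return indices of target frames: the selected key frames plus their dependent non-key frames."""
    targets, take = set(), False
    for idx, is_key in frames:
        if is_key:
            take = idx in selected_keys  # start/stop taking at each key frame
        if take:
            targets.add(idx)
    return targets
```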
Step 303, the first electronic device obtains the encrypted information input by the user.
Specifically, the encryption information may be input by the user on a display interface of the first electronic device. The first electronic device may determine that the user wants to perform encryption when it detects a selection operation on a preset area; for example, the preset area may be an area other than the display area of the target video's play icon. Accordingly, the first electronic device may receive the encryption information input by the user. The display interface may be an interface provided specifically for entering the encryption information, or it may be another interface, for example the interface displayed while the target video frames are being selected.
For example, fig. 3-6 is a schematic diagram of still another user interface provided by an embodiment of the present invention; as shown in fig. 3-6, the user inputs the encryption information on this interface, where the encryption information is a user-drawn image that embodies a shape.
And step 304, the first electronic device encrypts the N target video frames based on the encryption information.
Specifically, the first electronic device may associate the encryption information with the image data of the target video frames, and then store the associated encryption information and image data in a metadata area of the video to be shared. More specifically, the encryption information may be used as a key and the image data of the target video frames as a value, so that the associated pair is stored in key-value form. The metadata area is used to store metadata, i.e. structured data extracted from the video data that describes the features and content of the video data.
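A toy sketch of this key-value layout, with the metadata area modeled as a plain Python dict, is shown below; the field name "frames", the use of play-time points as inner keys, and the omission of the container-level plumbing (e.g. writing a custom metadata box in an MP4 file) are all illustrative assumptions:

```python
# Illustrative sketch: the encryption information acts as the key; the value
# holds the encrypted image data of the target frames, indexed by play time point.
def store_in_metadata(metadata, encryption_info, encrypted_frames):
    """Associate the encryption info with {time_point: encrypted frame bytes} in the metadata area."""
    metadata.setdefault(encryption_info, {})["frames"] = dict(encrypted_frames)
```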
Further, to make video playback more interesting, a specific play mode may be set for candidate video frames of the video to be shared, for example playing a candidate video frame repeatedly or playing it at a specific speed. The second input may be a selection operation on the candidate video frame whose play mode needs to be set; for example, the second input may be a sliding input along a preset track, such as a sliding operation on the candidate video frame. Of course, the operation form of the second input may also be set according to the actual situation, which is not limited in this embodiment of the present invention. The candidate video frame indicated by the second input is the second candidate video frame.
Then, in response to the second input, an association among the second candidate video frame, the play parameters, and the encryption information may be established. The play parameters may include a play count and/or a play speed, so that in subsequent playback the frame can be played in the specific form described by these parameters. Specifically, when establishing the association, the play count and/or play speed input by the user may be received, and the association may be built from the received play count and/or play speed, the identifier of the second candidate video frame, and the encryption information. Of course, the association may also be established in other ways, for example by counting how many times the user performs the selection operation on the second candidate video frame and using that count as the play count. Further, to let the user know the value that has been set, the set play parameter may also be displayed on the second candidate video frame. For example, taking the play parameter as a play count of 2, fig. 3-7 is a schematic diagram of still another user interface provided by an embodiment of the present invention; as shown in fig. 3-7, the play count 2 is displayed on the second candidate video frame.
The association may then be stored in the metadata area; for the specific storage manner, reference may be made to the foregoing description, which is not repeated here. By establishing the association and storing it in the metadata area, the second electronic device can be made to play in the specific form only after successful decryption, which increases the involvement of the user of the second electronic device.
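Continuing the same toy metadata layout used in the previous sketch, the association among the second candidate video frame, the play parameters, and the encryption information might be stored as follows (the field names are illustrative, not taken from this description):

```python
# Illustrative sketch: tie a second candidate frame's play parameters to the
# same metadata entry keyed by the encryption information.
def store_play_association(metadata, encryption_info, frame_id, play_count=None, play_speed=None):
    """Record how the given frame should be played once decryption succeeds."""
    entry = metadata.setdefault(encryption_info, {})
    entry.setdefault("play_params", {})[frame_id] = {
        "count": play_count,   # e.g. 2 to repeat the frame/segment twice
        "speed": play_speed,   # e.g. 0.5 for slow motion
    }
```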
In the embodiment of the present invention, the first electronic device may also play the target video when it detects a selection operation by the user on a first preset area of the interface, and play the second candidate video frame according to the play parameters during playback, so that the user of the first electronic device can see the playback effect applied to the second candidate video frame intuitively. The first preset area may be the area where the play icon is displayed.
And 305, the first electronic equipment sends the encrypted target video to second electronic equipment.
Specifically, this step may refer to step 105 above, and is not described again in this embodiment of the present invention.
And step 306, the second electronic device receives the target video sent by the first electronic device.
Specifically, this step may refer to step 201 above, and is not described again in this embodiment of the present invention.
Step 307, the second electronic device obtains the decryption information input by the user of the second electronic device.
Specifically, after receiving the first play operation, the second electronic device may receive information input by the user on the display interface and thereby obtain the decryption information. For example, fig. 3-8 is a schematic diagram of still another user interface provided by an embodiment of the present invention; as shown in fig. 3-8, the user may input the decryption information on the display interface.
And 308, the second electronic device plays the target video under the condition that the decryption information is matched with the encryption information, and plays the N target video frames in the playing process of the target video.
In this step, the encryption information may be stored in the metadata area of the target video, and accordingly the second electronic device may match the decryption information against the encryption information in the metadata area; specifically, the decryption information may be compared with the encryption information to perform the match. If the match succeeds, the image data associated with the encryption information is read from the metadata area; specifically, if the similarity between the decryption information and the encryption information reaches a preset threshold, the match may be considered successful, and the image data associated with the encryption information is read at that point. Finally, during playback of the target video, the encrypted N target video frames are played based on that image data: the play time point of each encrypted frame is determined from the play time information contained in the image data, and when that time point is reached the frame is rendered from the image data, thereby playing the target video frame. In this embodiment of the present invention, matching the decryption information against the encryption information in the metadata area and playing the N target video frames only on a successful match means that the user of the second electronic device can watch the target video frames only when qualified to do so, which improves the security of the target video frames; at the same time, the first electronic device can control the viewing of part of the target video's content through the encryption information without cutting the target video, which simplifies its processing.
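Under the same illustrative metadata layout as in the earlier sketches, the matching-and-decryption side of step 308 might look like the following; the exact dict lookup stands in for the similarity-threshold comparison described above, and the Fernet cipher and key derivation are the same assumptions as before:

```python
# Illustrative sketch only: recover the hidden frames when the viewer's input
# matches the stored encryption information; return None on a mismatch so the
# caller falls back to playing just the unencrypted frames.
import base64
import hashlib
from cryptography.fernet import Fernet  # third-party 'cryptography' package, assumed available

def _derive_key(info: str) -> bytes:
    return base64.urlsafe_b64encode(hashlib.sha256(info.encode("utf-8")).digest())

def recover_hidden_frames(metadata, decryption_info):
    """Return {time_point: decrypted frame bytes} on a successful match, else None."""
    entry = metadata.get(decryption_info)  # exact lookup stands in for the
    if entry is None:                      # similarity-threshold comparison
        return None
    cipher = Fernet(_derive_key(decryption_info))
    return {t: cipher.decrypt(blob) for t, blob in entry.get("frames", {}).items()}
```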
Further, the target video may also contain a second candidate video frame. In that case, after a successful match the second electronic device may also read from the metadata area the second candidate video frame and the play parameters associated with the encryption information, and play the second candidate video frame according to those parameters during playback of the target video. Specifically, the second electronic device may obtain the identifier of the second candidate video frame from the association, determine from that identifier which video frame in the target video is the second candidate video frame, and then play it according to the play parameters; for example, if the play count is 2, the second electronic device plays the second candidate video frame twice. Playing the second candidate video frame in a specific form based on the play parameters can make video playback more interesting to a certain extent.
Further, when a direct play request for the target video is received, or when the decryption information does not match the encryption information, only the video frames other than the N target video frames are played during playback of the target video.
Specifically, the direct play request does not include decryption information; that is, the second play operation may be an operation that triggers the direct play function of the second electronic device, for example a click operation on the target video. Accordingly, if the second play operation is received, it may be considered that the user does not want to watch the target video frames in the target video, so only the video frames other than the target video frames are played during playback. Further, if the decryption information does not match the encryption information, the user of the second electronic device may be considered to lack the right to view the N target video frames, so in that case as well only the video frames other than the N target video frames are played during playback of the target video. Playing only the unencrypted frames when a direct play request is received, or when the decryption information does not match, means that when the same target video is shared with different second electronic devices, each device can play it differently, which further improves the video sharing effect.
To sum up, in the video processing method provided by this embodiment of the present invention, the first electronic device receives a first input from its user on at least one candidate video frame of a target video, marks N target video frames in response to the first input, acquires encryption information input by the user, encrypts the N target video frames based on the encryption information, and sends the encrypted target video to the second electronic device. Correspondingly, the second electronic device receives the target video, which contains the N target video frames encrypted based on the encryption information input by the user of the first electronic device, acquires decryption information input by its own user, and, when the decryption information matches the encryption information, plays the target video and plays the N target video frames during playback. In this way, the first electronic device only needs to encrypt some of the target video frames in the target video, and the user of the second electronic device can watch either all of the target video or only part of its content based on the decryption information, which improves the convenience of the sharing operation to a certain extent and keeps its time cost low.
Fig. 4 is a block diagram of a first electronic device according to an embodiment of the present invention, and as shown in fig. 4, the first electronic device 40 may include:
a first receiving module 401 is configured to receive a first input of at least one candidate video frame of a target video from a first electronic device user.
A marking module 402, configured to mark N target video frames in response to the first input.
An obtaining module 403, configured to obtain the encryption information input by the user.
An encryption module 404, configured to encrypt the N target video frames based on the encryption information.
A sending module 405, configured to send the encrypted target video to a second electronic device.
Wherein N is a positive integer.
In summary, the first electronic device provided by this embodiment of the present invention can implement each process implemented by the first electronic device in the method embodiment of fig. 1, and the details are not repeated here to avoid repetition. The first electronic device receives a first input from its user on at least one candidate video frame of a target video, marks N target video frames in response to the first input, acquires encryption information input by the user, encrypts the N target video frames based on the encryption information, and finally sends the encrypted target video to a second electronic device. Compared with sharing the video directly with some users and sharing a cut-down version with the others, the first electronic device only needs to encrypt some of the target video frames in the target video, and the user of the second electronic device can watch either all of the target video or only part of its content based on the decryption information, which improves the convenience of the sharing operation to a certain extent and reduces the time it takes.
Optionally, the first electronic device 40 further includes:
and the second receiving module is used for receiving a second input of the first electronic equipment user.
A first display module for displaying at least one candidate video frame included in the target video in response to the second input.
Optionally, the first display module is specifically configured to:
and extracting video key frames contained in the target video.
At least one candidate video frame is determined based on the video keyframes.
And displaying each candidate video frame according to the playing sequence of each candidate video frame in the target video.
Optionally, the first display module is specifically configured to:
determining all video frames contained in the target video as the candidate video frames.
Optionally, the first display module is further specifically configured to:
selecting a target video key frame from the video key frames as the candidate video frame;
wherein a target object included in the target video key frame is changed compared with a target object in an adjacent video key frame of the target video key frame; the target object comprises at least one of a scene and a video object.
Optionally, the marking module 402 is specifically configured to:
candidate video frames selected by the first input are determined.
Determining a candidate video frame selected by the first input as the target video frame; or determining the candidate video frame selected by the first input and the non-video key frame associated with the candidate video frame selected by the first input as the target video frame.
Optionally, the first input comprises a slide input on the first candidate video frame.
The marking module 402 is further specifically configured to:
and determining the first candidate video frame as the candidate video frame selected by the first input under the condition that the sliding direction of the sliding input is a preset direction.
Optionally, the marking module 402 is further specifically configured to:
displaying a selection control on the at least one candidate video frame.
Receiving a third input to the selection control by the first electronic device user.
In response to the third input, a first position and a second position indicated by the third input are obtained.
Determining candidate video frames between the first location and the second location as candidate video frames selected by the first input.
Optionally, the encryption module 404 is specifically configured to:
associating the encryption information with image data of the N target video frames.
And storing the associated encryption information and the image data of the N target video frames to a metadata area of the target video.
Optionally, the first electronic device 40 further includes:
a third receiving module, configured to receive a second input of a second candidate video frame by the first electronic device user.
An establishing module, configured to establish, in response to the second input, the association among the second candidate video frame, the play parameters, and the encryption information.
A storage module, configured to store the association in the metadata area; wherein the play parameters include at least one of a play count and a play speed.
In summary, the first electronic device provided by this embodiment of the present invention receives a first input from its user on at least one candidate video frame of a target video, marks N target video frames in response to the first input, acquires encryption information input by the user, encrypts the N target video frames based on the encryption information, and finally sends the encrypted target video to the second electronic device. Compared with sharing the video directly with some users and sharing a cut-down version with the others, the first electronic device only needs to encrypt some of the target video frames in the target video, and the user of the second electronic device can watch either all of the target video or only part of its content based on the decryption information, which improves the convenience of the sharing operation to a certain extent and reduces the time it takes.
Fig. 5 is a block diagram of a second electronic device according to an embodiment of the present invention. As shown in fig. 5, the second electronic device 50 may include:
the receiving module 501 is configured to receive a target video sent by a first electronic device, where the target video includes N target video frames, and the N target video frames are video frames encrypted based on encryption information input by a user of the first electronic device.
The obtaining module 502 is configured to obtain decryption information input by a user of the second electronic device.
A first playing module 503, configured to play the target video when the decryption information matches the encryption information, and play the N target video frames in the playing process of the target video.
In summary, the second electronic device provided in this embodiment of the present invention can implement each process implemented by the second electronic device in the method embodiment of fig. 2; to avoid repetition, details are not described here again. The second electronic device receives a target video sent by a first electronic device, where the target video includes N target video frames encrypted based on encryption information input by the first electronic device user, obtains decryption information input by the second electronic device user, and, when the decryption information matches the encryption information, plays the target video and plays the N target video frames during that playback. Therefore, the first electronic device only needs to encrypt part of the target video frames in the target video, and the second electronic device user can watch either all of the target video or only part of it based on the decryption information, which improves the convenience of the sharing operation to a certain extent and reduces the time it consumes.
Optionally, the encryption information is stored in a metadata area of the target video.
The first playing module 503 is specifically configured to:
matching the decryption information with the encryption information in the metadata area;
if the matching is successful, reading the image data associated with the encryption information from the metadata area;
and playing the N target video frames based on the image data during playback of the target video.
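On the second-device side, the matching and reading steps could look like the sketch below, which assumes the hypothetical metadata layout and the `_keystream` helper from the sender-side sketch earlier in this description; returning None stands for a failed match.

```python
import hashlib
from typing import Dict, Optional

def decrypt_target_frames(metadata: dict, password: str) -> Optional[Dict[int, bytes]]:
    """Match the decryption information entered on the second device against
    the encryption information in the metadata area; on success, recover the
    image data of the N target video frames (None means 'no match')."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(),
                              bytes.fromhex(metadata["salt"]), 100_000)
    if hashlib.sha256(key).hexdigest() != metadata["verifier"]:
        return None  # decryption information does not match the encryption information
    recovered = {}
    for idx, hex_data in metadata["frames"].items():
        data = bytes.fromhex(hex_data)
        # reuses _keystream from the sender-side sketch
        recovered[int(idx)] = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    return recovered
```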
Optionally, the target video further includes a second candidate video frame; the second electronic device 50 further comprises:
and the reading module is used for reading the second candidate video frame and the playing parameter which are associated with the encryption information from the metadata area.
And the second playing module is used for playing the second candidate video frame according to the playing parameters in the playing process of the target video.
Optionally, the second electronic device 50 further includes:
and the third playing module is used for playing only the video frames except the N target video frames in the target video in the playing process of the target video under the condition that a direct playing request for playing the target video is received or the decryption information is not matched with the encryption information.
Wherein the direct play request does not include decryption information.
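A minimal illustration of this fallback behaviour follows; the frame indices and the boolean match flag are assumed representations, not part of the application.

```python
from typing import Iterable, List

def playable_frames(total_frames: int, encrypted_indices: Iterable[int],
                    decryption_matched: bool) -> List[int]:
    """Indices of frames to render: everything when the decryption information
    matched, otherwise only the frames outside the N encrypted target frames
    (the behaviour for a direct play request or a failed match)."""
    if decryption_matched:
        return list(range(total_frames))
    skip = set(encrypted_indices)
    return [i for i in range(total_frames) if i not in skip]
```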
In summary, by encrypting only part of the target video frames on the first electronic device side, the second electronic device provided in this embodiment allows its user to watch either all of the target video or only part of it based on the decryption information, which improves the convenience of the sharing operation to a certain extent and reduces the time it consumes.
Figure 6 is a schematic diagram of a hardware configuration of a mobile terminal implementing various embodiments of the present invention. The mobile terminal 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 6 does not limit the mobile terminal, and a mobile terminal may include more or fewer components than shown, combine some components, or arrange the components differently. In this embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 610 is configured to receive a first input of a first electronic device user on at least one candidate video frame of a target video.
The processor 610 is configured to mark N target video frames in response to the first input.
The processor 610 is configured to obtain encryption information input by the user.
The processor 610 is configured to encrypt the N target video frames based on the encryption information.
The processor 610 is configured to send the encrypted target video to a second electronic device; wherein N is a positive integer.
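Tying the sender-side steps together, the usage sketch below wires the hypothetical helpers from the earlier illustrations into the flow performed by the processor 610; the frame data, password, and container-writing step are placeholders rather than parts of the application.

```python
# Hypothetical sender-side flow on the first electronic device, reusing the
# illustrative helpers defined above (all names and the metadata layout are
# assumptions, not the patent's scheme).
frames = {12: b"...image data of frame 12...", 13: b"...image data of frame 13..."}

metadata = encrypt_target_frames(frames, password="user-entered-secret")
metadata = associate_play_parameters(metadata, frame_index=12,
                                     play_time=3.0, play_speed=0.5)

# The metadata record would then be written into the target video's metadata
# area and the encrypted target video sent to the second electronic device.
```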
In this embodiment of the present invention, the first electronic device receives a first input of a first electronic device user on at least one candidate video frame of a target video, marks N target video frames in response to the first input, obtains encryption information input by the user, encrypts the N target video frames based on the encryption information, and finally sends the encrypted target video to the second electronic device. Compared with sharing the full video with some users and sharing a cut-down video with others, in this embodiment only part of the target video frames in the target video need to be encrypted, and the second electronic device user can then watch either all of the target video or only part of it based on the decryption information, which improves the convenience of the sharing operation to a certain extent and reduces the time it consumes.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during a message sending and receiving process or a call; specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and it also transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 602, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the mobile terminal 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042. The graphics processor 6041 processes image data of a still picture or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 601.
The mobile terminal 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the mobile terminal 600 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands from the processor 610. In addition, the touch panel 6071 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a track ball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are shown in fig. 6 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 608 is an interface through which an external device is connected to the mobile terminal 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 600 or may be used to transmit data between the mobile terminal 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data (such as audio data and a phonebook) created according to the use of the mobile phone, and the like. Further, the memory 609 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 610 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 609 and calling data stored in the memory 609, thereby integrally monitoring the mobile terminal. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The mobile terminal 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 is logically connected to the processor 610 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 600 includes some functional modules that are not shown, and are not described in detail herein.
An embodiment of the present invention further provides a mobile terminal, including a processor 610, a memory 609, and a computer program stored in the memory 609 and executable on the processor 610, where the computer program, when executed by the processor 610, implements each process of the above video processing method embodiment and achieves the same technical effect; details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a mobile terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (17)

1. A video processing method applied to a first electronic device is characterized by comprising the following steps:
receiving a first input of at least one candidate video frame of a target video from a first electronic device user;
in response to the first input, marking N target video frames;
acquiring encryption information input by a user;
encrypting the N target video frames based on the encryption information;
sending the encrypted target video to second electronic equipment;
wherein N is a positive integer.
2. The method of claim 1, wherein prior to receiving the first input of the at least one candidate video frame of the target video by the first electronic device user, the method further comprises:
receiving a second input of the first electronic device user;
in response to the second input, displaying at least one candidate video frame included in the target video.
3. The method of claim 2, wherein the displaying at least one candidate video frame included in the target video comprises:
extracting video key frames contained in the target video;
determining at least one candidate video frame based on the video key frame;
and displaying each candidate video frame according to the playing sequence of each candidate video frame in the target video.
4. The method of claim 2, wherein the displaying at least one candidate video frame included in the target video comprises:
determining all video frames contained in the target video as the candidate video frames.
5. The method of claim 3, wherein determining at least one candidate video frame based on the video keyframes comprises:
selecting a target video key frame from the video key frames as the candidate video frame;
wherein a target object included in the target video key frame is changed compared with a target object in an adjacent video key frame of the target video key frame; the target object comprises at least one of a scene and a video object.
6. The method of claim 3, wherein said tagging N target video frames in response to said first input comprises:
determining candidate video frames selected by the first input;
determining a candidate video frame selected by the first input as the target video frame; or determining the candidate video frame selected by the first input and the non-video key frame associated with the candidate video frame selected by the first input as the target video frame.
7. The method of claim 6, wherein the first input comprises a slide input over a first candidate video frame;
the determining the candidate video frame selected by the first input comprises:
and determining the first candidate video frame as the candidate video frame selected by the first input under the condition that the sliding direction of the sliding input is a preset direction.
8. The method of claim 6, wherein determining the candidate video frame selected by the first input comprises:
displaying a selection control on the at least one candidate video frame;
receiving a third input to the selection control by the first electronic device user;
in response to the third input, acquiring a first position and a second position indicated by the third input;
determining candidate video frames between the first location and the second location as candidate video frames selected by the first input.
9. The method according to claim 1, wherein said encrypting the N target video frames based on the encryption information comprises:
associating the encryption information with image data of the N target video frames;
and storing the associated encryption information and the image data of the N target video frames to a metadata area of the target video.
10. The method of claim 1, further comprising:
receiving a second input of a second candidate video frame by the first electronic device user;
establishing an association relationship among the second candidate video frame, the playing parameter and the encryption information in response to the second input;
storing the association relationship to the metadata area;
wherein the playing parameter includes at least one of a playing time and a playing speed.
11. A video processing method applied to a second electronic device is characterized by comprising the following steps:
receiving a target video sent by first electronic equipment, wherein the target video comprises N target video frames, and the N target video frames are video frames encrypted based on encryption information input by a user of the first electronic equipment;
acquiring decryption information input by a user of the second electronic equipment;
and under the condition that the decryption information is matched with the encryption information, playing the target video, and playing the N target video frames in the playing process of the target video.
12. The method according to claim 11, wherein the encryption information is stored in a metadata area of the target video;
the playing the target video and the N target video frames in the playing process of the target video under the condition that the decryption information is matched with the encryption information includes:
matching the decryption information with the encryption information in the metadata area;
if the matching is successful, reading the image data associated with the encryption information from the metadata area;
and in the playing process of the target video, based on the image data, playing the N target video frames.
13. The method of claim 11, wherein the target video further comprises a second candidate video frame; the method further comprises the following steps:
reading the second candidate video frame and the playing parameter associated with the encryption information from the metadata area;
and in the playing process of the target video, playing the second candidate video frame according to the playing parameters.
14. The method of claim 11, further comprising:
when a direct playing request for playing the target video is received, or the decryption information is not matched with the encryption information, only playing video frames except the N target video frames in the target video in the playing process of the target video;
wherein the direct play request does not include decryption information.
15. A first electronic device, wherein the first electronic device comprises:
the device comprises a first receiving module, a second receiving module and a display module, wherein the first receiving module is used for receiving first input of at least one candidate video frame of a target video from a first electronic equipment user;
a tagging module for tagging N target video frames in response to the first input;
the acquisition module is used for acquiring the encrypted information input by the user;
an encryption module, configured to encrypt the N target video frames based on the encryption information;
the sending module is used for sending the encrypted target video to second electronic equipment;
wherein N is a positive integer.
16. A second electronic device, characterized in that the second electronic device comprises:
the receiving module is used for receiving a target video sent by first electronic equipment, wherein the target video comprises N target video frames, and the N target video frames are video frames encrypted based on encryption information input by a user of the first electronic equipment;
the acquisition module is used for acquiring decryption information input by a user of the second electronic equipment;
and the first playing module is used for playing the target video under the condition that the decryption information is matched with the encryption information, and playing the N target video frames in the playing process of the target video.
17. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the video processing method according to any one of claims 1 to 14.
CN201910944750.7A 2019-09-30 2019-09-30 Video processing method, electronic equipment and mobile terminal Pending CN110719527A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910944750.7A CN110719527A (en) 2019-09-30 2019-09-30 Video processing method, electronic equipment and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910944750.7A CN110719527A (en) 2019-09-30 2019-09-30 Video processing method, electronic equipment and mobile terminal

Publications (1)

Publication Number Publication Date
CN110719527A true CN110719527A (en) 2020-01-21

Family

ID=69212153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910944750.7A Pending CN110719527A (en) 2019-09-30 2019-09-30 Video processing method, electronic equipment and mobile terminal

Country Status (1)

Country Link
CN (1) CN110719527A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020196939A1 (en) * 2001-06-06 2002-12-26 Unger Robert Allan Decoding and decryption of partially encrypted information
CN1812541A (en) * 2005-12-27 2006-08-02 浪潮电子信息产业股份有限公司 Digital copyright and digital watermark protecting method for video program
CN102905133A (en) * 2012-10-15 2013-01-30 南京邮电大学 Video stream-oriented hybrid encoding and encrypting method
CN104683824A (en) * 2013-11-29 2015-06-03 航天信息股份有限公司 Encryption transmission method and system of flv format video file
CN104270676A (en) * 2014-09-28 2015-01-07 联想(北京)有限公司 Information processing method and electronic equipment
CN105898520A (en) * 2016-04-07 2016-08-24 合网络技术(北京)有限公司 Video frame interception method and device
CN108966004A (en) * 2018-06-27 2018-12-07 维沃移动通信有限公司 A kind of method for processing video frequency and terminal
CN109905780A (en) * 2019-03-30 2019-06-18 山东云缦智能科技有限公司 A kind of video clip sharing method and Intelligent set top box

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111314759A (en) * 2020-03-02 2020-06-19 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and storage medium
CN112633239A (en) * 2020-12-31 2021-04-09 中国工商银行股份有限公司 Micro-expression identification method and device
CN113965798A (en) * 2021-10-25 2022-01-21 北京百度网讯科技有限公司 Video information generating and displaying method, device, equipment and storage medium
CN114173177A (en) * 2021-12-03 2022-03-11 北京百度网讯科技有限公司 Video processing method, device, equipment and storage medium
CN114173177B (en) * 2021-12-03 2024-03-19 北京百度网讯科技有限公司 Video processing method, device, equipment and storage medium
CN115134635A (en) * 2022-06-07 2022-09-30 腾讯科技(深圳)有限公司 Method, device and equipment for processing media information and storage medium
CN115134635B (en) * 2022-06-07 2024-04-19 腾讯科技(深圳)有限公司 Media information processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110087117B (en) Video playing method and terminal
CN108737904B (en) Video data processing method and mobile terminal
CN110248251B (en) Multimedia playing method and terminal equipment
CN110784771B (en) Video sharing method and electronic equipment
CN110719527A (en) Video processing method, electronic equipment and mobile terminal
CN111010610B (en) Video screenshot method and electronic equipment
CN107977652B (en) Method for extracting screen display content and mobile terminal
CN110557683B (en) Video playing control method and electronic equipment
CN111314784B (en) Video playing method and electronic equipment
CN108616771B (en) Video playing method and mobile terminal
CN106921791B (en) Multimedia file storage and viewing method and device and mobile terminal
CN107784232B (en) Picture processing method and mobile terminal
CN111177420B (en) Multimedia file display method, electronic equipment and medium
US11250046B2 (en) Image viewing method and mobile terminal
CN109922294B (en) Video processing method and mobile terminal
CN109618218B (en) Video processing method and mobile terminal
CN110958485A (en) Video playing method, electronic equipment and computer readable storage medium
CN110650367A (en) Video processing method, electronic device, and medium
CN109947988B (en) Information processing method and device, terminal equipment and server
CN111698550A (en) Information display method and device, electronic equipment and medium
CN109729431B (en) Video privacy protection method and terminal equipment
CN109669710B (en) Note processing method and terminal
CN111050214A (en) Video playing method and electronic equipment
CN108762641B (en) Text editing method and terminal equipment
CN110471895B (en) Sharing method and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200121)