CN112866809A - Video processing method and device, electronic equipment and readable storage medium - Google Patents
- Publication number
- CN112866809A (application CN202011623251.7A)
- Authority
- CN
- China
- Prior art keywords
- preset
- video
- information
- video segment
- state information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/443—OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47217—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
Abstract
The application discloses a video processing method and apparatus, an electronic device, and a readable storage medium, relating to the technical field of video processing, and in particular to the fields of video playing and user interaction. The specific implementation scheme is as follows: a server divides a target video into a plurality of video segments according to the content of the target video and, for each video segment, presets state information and processing information matched to that content. When the target video is played, the terminal monitors the real-time state during playback of each segment against the preset state information and preset processing information of the currently playing segment, and automatically adjusts the playing of the target video accordingly. The user therefore need not operate on the playback manually; intelligent playing control of the target video is realized, and the degree of intelligence of video playing is improved.
Description
Technical Field
The present application relates to the field of video processing technologies, in particular to video playing and user-interaction technologies, and more particularly to a video processing method and apparatus, an electronic device, and a readable storage medium.
Background
During video playing, the user may stop watching and miss video content because of certain behaviors: for example, the user may become absent-minded or leave temporarily while a web-lesson video is playing; or, when following a handicraft or cooking video, the user may perform the handiwork or cooking along with the video and be unable to watch the screen.
In the prior art, the user must manually operate on the video to pause it, play it back, or the like. Such operations are cumbersome, and the user may be unable to operate on the video at all because of the activity in progress. Current video playing methods therefore have a low degree of intelligence.
Disclosure of Invention
The application provides a video processing method and device, electronic equipment and a readable storage medium.
According to an aspect of the present application, there is provided a video processing method, performed by a server, including:
acquiring a target video;
dividing the target video into N video segments according to the content of the target video, wherein N is greater than 1;
setting preset state information of the N video segments and preset processing information corresponding to the preset state information according to the contents of the N video segments;
the preset state information comprises preset user state information, and first preset processing information corresponding to the preset user state information is used for indicating: in the playing process of the video segment, when the terminal detects that the real-time user state information meets the preset user state information, the video segment of the target video is processed according to the first preset processing information.
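The server-side scheme above (acquire a target video, divide it into N segments, attach preset state information and corresponding processing information to each) can be sketched as follows. This is a minimal illustration only: the `VideoSegment` structure, the rule format (state name mapped to an action string), and the boundary-based division are assumptions for the sketch, not the patent's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VideoSegment:
    start_s: float                       # segment start time, in seconds
    end_s: float                         # segment end time, in seconds
    # preset state information -> preset processing information,
    # e.g. {"eyes_off_screen": "pause"}
    preset_rules: dict = field(default_factory=dict)

def prepare_target_video(duration_s, boundaries, rules_per_segment):
    """Divide a video of duration_s seconds at the given cut points
    (N - 1 boundaries -> N segments) and attach each segment's rules."""
    cuts = [0.0, *sorted(boundaries), duration_s]
    return [
        VideoSegment(start, end, dict(rules_per_segment[i]))
        for i, (start, end) in enumerate(zip(cuts, cuts[1:]))
    ]

segments = prepare_target_video(
    60.0, [20.0, 45.0],
    [{"user_left_area": "pause"},
     {"eyes_off_screen": "playback"},
     {"background_play": "record"}],
)
```

The terminal would then receive, for whichever segment is currently playing, that segment's `preset_rules` to monitor against.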
According to another aspect of the present application, there is provided a video processing method, performed by a terminal, including:
monitoring real-time state information in a playing process of a first video segment of a target video in the process of playing the first video segment, wherein the real-time state information comprises real-time user state information;
processing the first video segment according to first preset processing information under the condition that the real-time user state information of the first video segment is detected to meet the preset user state information of the first video segment;
the target video is divided into N video segments in advance, the N video segments are respectively preset with corresponding preset state information and preset processing information corresponding to the preset state information, the preset state information comprises preset user state information, the first preset processing information is preset processing information corresponding to the preset user state information, the N video segments comprise the first video segment, and N is greater than 1.
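The terminal-side step can be sketched as a single check performed while the first video segment plays: compare the monitored real-time user state information against the segment's preset user state information and, on a match, return the first preset processing information. The state and action names are illustrative assumptions; in practice the real-time states would come from camera-based analysis.

```python
def process_segment(preset_rules, realtime_states):
    """Return the processing action for the first monitored real-time
    state that meets a preset state of the segment, else None
    (i.e. keep playing normally)."""
    for state in realtime_states:
        if state in preset_rules:
            return preset_rules[state]  # e.g. "pause", "playback", "record"
    return None

rules = {"eyes_off_screen": "pause", "user_left_area": "playback"}
action = process_segment(rules, ["user_left_area"])  # -> "playback"
```

In a real player this check would run periodically during playback, and the returned action would drive the player's pause, playback, or recording controls.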
According to another aspect of the present application, there is provided a video processing apparatus including:
the acquisition module is used for acquiring a target video;
the dividing module is used for dividing the target video into N video segments according to the content of the target video, wherein N is greater than 1;
the setting module is used for setting preset state information of the N video segments and preset processing information corresponding to the preset state information according to the contents of the N video segments;
the preset state information comprises preset user state information, and first preset processing information corresponding to the preset user state information is used for indicating: in the playing process of the video segment, when the terminal detects that the real-time user state information meets the preset user state information, the video segment of the target video is processed according to the first preset processing information.
According to another aspect of the present application, there is provided a video processing apparatus including:
the monitoring module is used for monitoring real-time state information in the playing process of a first video segment of a target video in the process of playing the first video segment, wherein the real-time state information comprises real-time user state information;
the first processing module is used for processing the first video segment according to first preset processing information under the condition that the real-time user state information of the first video segment is detected to meet the preset user state information of the first video segment;
the target video is divided into N video segments in advance, the N video segments are respectively preset with corresponding preset state information and preset processing information corresponding to the preset state information, the preset state information comprises preset user state information, the first preset processing information is preset processing information corresponding to the preset user state information, the N video segments comprise the first video segment, and N is greater than 1.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video processing methods provided herein.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the video processing method provided herein.
According to another aspect of the present application, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the video processing method provided herein.
According to the technology of the application, the intelligent degree of the video playing method is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flow chart of a video processing method according to a first embodiment of the present application;
fig. 2 is a schematic flow chart of a video processing method according to a second embodiment of the present application;
fig. 3 is a schematic diagram of video segment partitioning of a target video according to a second embodiment of the present application;
fig. 4 is a block diagram of a video processing apparatus according to a third embodiment of the present application;
fig. 5 is a block diagram of a video processing apparatus according to a fourth embodiment of the present application;
fig. 6 is a block diagram of an electronic device for implementing a video processing method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details to aid understanding; these are to be considered exemplary only. Those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted for clarity and conciseness.
To solve the problem that prior-art video playing methods have a low degree of intelligence, an embodiment of the present application provides a video processing method. Referring to fig. 1, the method may be executed by a server and includes:
and step 101, acquiring a target video.
In this embodiment of the application, the target video may be of any of the following types: a video uploaded by a user, that is, a video generated by the user through shooting, screen recording, clipping, or the like; a video live-broadcast by a user; or a network video conference, which can be understood as the video captured in real time by each camera in the conference. The representation of the target video is not limited to these forms and may be determined according to the actual situation; the embodiment of the present application is not limited thereto.
In specific implementation, the server may obtain the target video.
Step 102, dividing the target video into N video segments according to the content of the target video, wherein N is greater than 1.
In a specific implementation, the server may divide the target video into a plurality of video segments according to the content of the target video. Illustratively, the target video is a web-lesson video, which can be divided into four stages: a beginning part, a lecture part, an exercise part, and an ending part. The server can determine the boundary time point of each stage by examining the content of the web-lesson video and thereby divide it into four video segments. In another example, the target video is a sales-type live video, in which the introduction of each commodity may be taken as one video segment, divided according to the commodity's category.
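The web-lesson example can be sketched as follows, assuming the boundary time points between stages have already been determined by some (here hypothetical) content analysis; only the mapping from boundaries to labelled segments is shown.

```python
# The four stages named above, in playback order.
STAGES = ["beginning", "lecture", "exercise", "ending"]

def divide_web_lesson(duration_s, boundary_points):
    """Map three detected boundary time points (seconds) to four
    labelled (start, end, stage) segments covering the whole video."""
    cuts = [0.0, *sorted(boundary_points), duration_s]
    return [(s, e, STAGES[i]) for i, (s, e) in enumerate(zip(cuts, cuts[1:]))]

# A 40-minute lesson with stage boundaries at 5, 25, and 35 minutes.
parts = divide_web_lesson(2400.0, [300.0, 1500.0, 2100.0])
```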
Step 103: setting preset state information of the N video segments and preset processing information corresponding to the preset state information according to the contents of the N video segments;
the preset state information comprises preset user state information, and first preset processing information corresponding to the preset user state information is used for indicating that in the video segment playing process, when the terminal detects that the real-time user state information meets the preset user state information, the video segment of the target video is processed according to the first preset processing information.
In this embodiment of the application, the server may preset corresponding preset state information and preset processing information corresponding to the preset state information for each video segment. The preset state information corresponding to each video segment may be one type or multiple types, and each type of preset state information may correspond to one type of preset processing information. When the preset state information corresponding to the video segment includes multiple types, the multiple types of preset state information may be the same type of preset state information, or different types of preset state information, which may be determined specifically according to actual conditions, and the embodiment of the present application is not limited herein.
In the embodiment of the application, the preset state information includes preset user state information, and the server may preset corresponding preset user state information and first preset processing information corresponding to the preset user state information for each video segment.
During specific implementation, according to the content of each video segment, the preset user state information corresponding to the video segment is determined. Taking the first video segment as an example, the preset user state information corresponding to the first video segment can be understood as a user state which may not meet expectations in the playing process of the first video segment. Illustratively, the user is not looking directly at the screen, or the user leaves a preset area in front of the screen, or talks with others, etc. These undesirable user states may affect the user's viewing of the first video segment, such as missing the playing content of the first video segment, so as to possibly cause the user to perform manual operations on the playing of the first video segment, such as pausing or playback, etc.
The first preset processing information corresponding to the preset user state information can be understood as what processing is performed on the first video segment in the playing process of the first video segment if the terminal detects that the real-time user state information meets the preset user state information. Illustratively, when the user does not look directly at the screen or leaves a preset area in front of the screen, the playing of the first video segment is suspended, or the first video segment is played back, or the playing of the first video segment is recorded (for live video).
In the embodiment of the application, the server may determine the preset user state information and the first preset processing information corresponding to each preset user state information based on the pre-collected historical information.
The history information may include historical operation information of the user and the user state information at the time of each historical operation. The historical operation information can be understood as the operations the user historically performed during the playing of various videos on the terminal. For example, the server may record in what state the user was when the user paused a video, played it back, or recorded it. After acquiring the target video, the server can divide it into video segments according to its content and the history information, and determine the preset user state information of each video segment and the first preset processing information corresponding to each preset user state information.
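One plausible way to use such history information, sketched here under the assumption that each historical record is a (user state, operation) pair, is to adopt for each user state the operation the user performed most often as its first preset processing information. The record format and state/operation names are illustrative only.

```python
from collections import Counter

def derive_preset_processing(history):
    """history: iterable of (user_state, operation) pairs collected from
    past playing sessions. Returns, per user state, the operation the
    user most frequently performed in that state."""
    counts = {}
    for state, op in history:
        counts.setdefault(state, Counter())[op] += 1
    return {state: ops.most_common(1)[0][0] for state, ops in counts.items()}

history = [("eyes_off_screen", "pause"), ("eyes_off_screen", "pause"),
           ("eyes_off_screen", "playback"), ("user_left_area", "pause")]
rules = derive_preset_processing(history)
# -> {"eyes_off_screen": "pause", "user_left_area": "pause"}
```

A production system would presumably combine such per-user statistics with the content-based segment division described above.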
The above-described embodiments of the present application have the following advantages or beneficial effects: the server can divide the target video into a plurality of video segments according to the content of the target video, and preset state information and preset processing information matched with the content of each video segment are preset for each video segment, wherein the preset state information comprises preset user state information. Therefore, when the terminal plays the target video, the real-time state of the user can be monitored according to different currently played video segments, and whether the real-time state of the user meets the preset user state information of the current video segment or not is correspondingly determined. Therefore, when a user has certain states, especially when the user is inconvenient or has no time to perform manual operation on the terminal, the terminal can automatically adjust the playing of the target video, and different playing processes can be correspondingly performed for different user states on the basis of ensuring that the user does not miss the playing content of the target video, so that the intelligent playing control of the target video is realized, and the intelligent degree of video playing is improved.
The preset user state information is specifically described below:
in a specific implementation, the preset user state information may be determined from one or more of the following points: the user's eye viewing state; the user's viewing posture; whether the user is inside or outside a preset visible area, where the preset visible area may be the area that can be captured by the terminal's camera. Correspondingly, the preset user state information may include at least one of the following: 1) the user's eyes do not view the screen of the terminal; 2) the user's viewing posture is a non-front-view posture; 3) the user leaves, or is not within, the preset visible area.
For 1), "the user's eyes do not view the screen of the terminal" may be embodied as: the user's pupils are not focused on the playing interface of the target video; or the user does not look directly at the playing interface of the target video. The server may set the preset user state information of the video segment to include this state, so as to instruct the terminal to monitor the user's eye viewing state during the playing of the target video. Specifically, the terminal can acquire a real-time image of the user through the camera and analyze the user's eye viewing state.
Illustratively, the preset user state information acquired by the terminal for the first video segment includes a playing interface where the pupil of the user is not focused on the target video; in the playing process of the first video segment, if the terminal detects that the pupil of the user is not focused on the playing interface of the target video, it indicates that the current real-time user state information meets the preset user state information, and the first video segment may be processed according to first preset processing information corresponding to the "the pupil of the user is not focused on the playing interface of the target video". For example, a prompt box pops up on the playing picture of the first video segment, and may be accompanied by a prompt tone, and the playing of the first video segment may also be paused at the same time.
For 2), the non-front-view viewing posture may be specifically expressed as at least one of: a head-down posture, a head-up posture, a left-turn posture, or a right-turn posture. The server can set the preset user state information of the video segment to include that the user's viewing posture is a non-front-view posture, so as to instruct the terminal to monitor the user's viewing posture during the playing of the target video. Specifically, the terminal can acquire a real-time image of the user through the camera and analyze the user's viewing posture.
Illustratively, the preset user state information acquired by the terminal for the first video segment includes that the watching gesture of the user is a non-front-view gesture; in the playing process of the first video segment, if the terminal detects that the user is in the non-front-view posture, it indicates that the current real-time user state information meets the preset user state information, and the first video segment can be processed according to the first preset processing information corresponding to the fact that the watching posture of the user is in the non-front-view posture. For example, a pause process or a recording process (for live video) is performed on the first video segment.
For 3), "the user is outside the preset visible area" may be embodied as: the user is not within the preset visible area, or the user is leaving the preset visible area. The server can set the preset user state information of the video segment to include this state, so as to instruct the terminal to monitor the user's position during the playing of the target video. Specifically, the terminal may acquire a real-time image of the user through the camera to determine whether the user is not in, or is leaving, the preset visible area.
Illustratively, the terminal acquires the preset user state information of the first video segment, wherein the preset user state information includes that a user is not in a preset visible area; in the playing process of the first video segment, if the terminal detects that the user is not in the preset visible area, it indicates that the current real-time user state information meets the preset user state information, and the first video segment can be processed according to the first preset processing information corresponding to the condition that the user is not in the preset visible area. For example, a pause process or a recording process (for live video) is performed on the first video segment.
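The check in case 3) could be sketched as follows, assuming a face detector (not shown) has already produced a bounding box from the camera image; the box format and the centre-point test are illustrative assumptions, not the patent's method.

```python
def user_in_visible_area(face_box, visible_area):
    """Boxes are (left, top, right, bottom) in pixels. Returns True if
    the centre of the detected face lies inside the preset visible
    area; a missing detection (None) counts as 'not in the area'."""
    if face_box is None:           # no face detected at all
        return False
    l, t, r, b = face_box
    cx, cy = (l + r) / 2, (t + b) / 2
    vl, vt, vr, vb = visible_area
    return vl <= cx <= vr and vt <= cy <= vb

area = (0, 0, 640, 480)            # e.g. the full camera frame
inside = user_in_visible_area((100, 100, 200, 200), area)   # True
outside = user_in_visible_area(None, area)                  # False
```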
It should be understood that the preset user status information is not limited thereto, and may be determined according to actual situations, and the embodiment of the present application is not limited thereto.
In this embodiment of the application, optionally, the preset state information further includes preset play state information, and second preset processing information corresponding to the preset play state information is used to indicate: in the playing process of the video segment, when the terminal detects that the real-time playing state information of the video segment at the terminal meets the preset playing state information, the video segment of the target video is processed according to the second preset processing information.
The preset playing state information is specifically described below:
in a specific implementation, the preset playing state information may be understood as an undesirable playing state that may occur during the playing of the first video segment. For example, the video segment is blocked by other display interfaces on the playing interface of the terminal, which may cause the user to miss the content of the video segment; or the video segment is played in a playing interface of the terminal and other display interfaces in a split screen mode, which may indicate that the user does not carefully watch the playing content of the video segment. These undesirable play states may affect the user's viewing of the first video segment.
The preset playing state information may be determined from one or more of the following points: whether the video segment is played in a full screen or a small screen at the terminal; whether the playing interface of the video segment is played with other display interfaces in a split screen manner or not; whether the video segment is played in the foreground or the background at the terminal; whether the playing interface of the video segment on the terminal is covered by other display interfaces or not. Correspondingly, the preset playing state information may include at least one of the following: 1) the video segment is played on a small terminal screen; 2) the playing interface of the video segment is played in a split screen mode with other display interfaces; 3) the video segment is played in the background of the terminal; 4) and the video segment is covered by other display interfaces on the playing interface of the terminal. The server can set preset playing state information of the video segment to indicate the terminal to monitor the real-time playing state of the video segment in the terminal in the playing process of the target video.
For 1), exemplarily, the preset playing state information of the first video segment acquired by the terminal includes that the first video segment is played on a small terminal screen; in the playing process of the first video segment, if the terminal detects that the playing interface of the first video segment is switched to small-screen playing by a user, which indicates that the current real-time playing state information of the first video segment meets preset playing state information, the first video segment can be processed according to second preset processing information corresponding to the fact that the first video segment is played on the terminal in a small-screen mode. For example, a pause process or a recording process (for live video) is performed on the first video segment.
For 2), exemplarily, the preset playing state information of the first video segment acquired by the terminal comprises that the playing interface of the first video segment and other display interfaces are played in a split screen mode; in the playing process of the first video segment, if the terminal detects that the playing interface of the first video segment is played in a split screen manner with other display interfaces, it indicates that the current real-time playing state information of the first video segment meets preset playing state information, and the first video segment can be processed according to second preset processing information corresponding to the split screen playing of the playing interface of the first video segment and other display interfaces. For example, a prompt box pops up on the playing picture of the first video segment, and may be accompanied by a prompt tone.
For 3), for example, the preset playing state information of the first video segment acquired by the terminal includes that the first video segment is played in the background of the terminal. During the playing of the first video segment, if the terminal detects that the user has switched the playing interface of the first video segment to background playing, the current real-time playing state information of the first video segment meets the preset playing state information, and the first video segment can be processed according to the second preset processing information corresponding to background playing, for example by pausing playback or recording the segment (for live video).
For 4), for example, the preset playing state information of the first video segment acquired by the terminal includes that the playing interface of the first video segment at the terminal is covered by other display interfaces. During the playing of the first video segment, if the terminal detects that its playing interface is covered by other display interfaces, the current real-time playing state information of the first video segment meets the preset playing state information, and the first video segment can be processed according to the corresponding second preset processing information, for example by pausing playback or recording the segment (for live video).
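The four cases above can be sketched as a simple matching step. The following is an illustrative sketch only; the names (`PlayState`, `select_processing`) and the mapping from states to actions are assumptions, not part of the embodiments:

```python
from enum import Enum, auto

class PlayState(Enum):
    """The four preset playing states listed above (illustrative names)."""
    SMALL_SCREEN = auto()   # 1) played on a small screen at the terminal
    SPLIT_SCREEN = auto()   # 2) split-screen with other display interfaces
    BACKGROUND = auto()     # 3) played in the background of the terminal
    COVERED = auto()        # 4) covered by other display interfaces

def select_processing(real_time_states, preset_processing):
    """Return the second preset processing actions whose preset playing
    state is met by the current real-time playing states."""
    return [action for state, action in preset_processing.items()
            if state in real_time_states]

# Example configuration for one video segment (assumed action labels).
preset = {
    PlayState.BACKGROUND: "pause",
    PlayState.COVERED: "pause",
    PlayState.SMALL_SCREEN: "record",
}
actions = select_processing({PlayState.BACKGROUND}, preset)
```

In this sketch the terminal would call `select_processing` on each detected state change and apply the returned actions.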
In this optional embodiment, the server may determine the preset playing state information and the second preset processing information corresponding to each preset playing state information based on the pre-collected history information and the content of the target video.
The historical information may include, but is not limited to, the user's historical operations on video playing interfaces. The server can record this historical operation information and, through statistical analysis, learn the user's requirements for the playing state of the target video and the user's operation habits. After acquiring the target video, the server can divide it according to its content and the historical information, and determine the preset playing state information and the corresponding second preset processing information for each video segment.
It should be understood that the preset playing state information is not limited thereto, and may be determined according to actual situations, and the embodiment of the present application is not limited thereto.
In this optional embodiment, playing state information is preset for each video segment, so that the real-time playing state of the video segment at the terminal can be detected while the segment is playing. The terminal can thereby determine whether the current video segment is playing normally and whether the user's viewing of the video content is affected, and then decide whether the target video needs to be processed. Without requiring user operation, the playing of the target video can be controlled automatically according to the playing state, meeting the user's playing requirements for the target video, ensuring that the user does not miss any of its content, and further improving the degree of intelligence of video playing.
In this embodiment of the application, optionally, the preset state information further includes preset playing content information, and third preset processing information corresponding to the preset playing content information is used to instruct that, when the terminal detects that the content of the video segment meets the preset playing content information, the video segment of the target video is processed according to the third preset processing information.
This optional embodiment, in which the preset state information further includes preset playing content information, is described in detail below:
in a specific implementation, the preset playing content information may be determined from one or more of the following points: the importance of the playing content of the video segment; whether the video segment contains a key picture. Correspondingly, the preset playing content information may include at least one of the following: 1) the importance of the playing content of the video segment is high; 2) the importance of the playing content of the video segment is ordinary; 3) the importance of the playing content of the video segment is low; 4) the video segment contains a key picture.
For video segments of different importance, the server can set different preset processing information, instructing the terminal to determine, during the playing of the target video, the preset processing information corresponding to the importance of the currently played video segment and then process the video segment accordingly.
Illustratively, the preset playing content information of the first video segment acquired by the terminal indicates that the importance of its playing content is high; during the playing of the target video, if the terminal detects that playback has reached the first video segment, the first video segment can be processed according to the third preset processing information corresponding to high-importance playing content. For example, the first video segment is played at 0.75x speed; or, after the first video segment finishes, playback returns to its starting time point and the first video segment is played again.
Or, illustratively, the preset playing content information of the first video segment acquired by the terminal indicates that the importance of its playing content is ordinary; during the playing of the target video, if the terminal detects that playback has reached the first video segment, the first video segment can be processed according to the third preset processing information corresponding to ordinary-importance playing content. For example, the first video segment is played normally, i.e., its playback is not processed.
Or, illustratively, the preset playing content information of the first video segment acquired by the terminal indicates that the importance of its playing content is low; during the playing of the target video, if the terminal detects that playback has reached the first video segment, the first video segment can be processed according to the third preset processing information corresponding to low-importance playing content. For example, the first video segment is played at 1.25x or 1.5x speed; or playback jumps to the end of the first video segment and continues with the subsequent video segments.
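The three importance cases above can be summarized as a small mapping. This is a minimal sketch under assumed names; the importance labels and the concrete speeds mirror the examples above but are not a normative part of the embodiments:

```python
def action_for_importance(level):
    """Map a segment's preset importance level to an illustrative
    playback action, following the three cases described above."""
    if level == "high":
        return {"speed": 0.75}                     # slow down (or replay afterwards)
    if level == "low":
        return {"speed": 1.5, "skippable": True}   # speed up, or jump to segment end
    return {"speed": 1.0}                          # ordinary importance: play as-is
```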
For 4), the server can determine, from the content of each video segment, whether it contains a key picture. If a video segment contains a key picture, the server adds a mark at one or more time points at which the key picture is displayed and determines the preset processing information corresponding to that key picture, instructing the terminal to process the video segment according to that preset processing information when it detects the mark during the playing of the target video.
Illustratively, the preset playing content information of the first video segment acquired by the terminal includes a key picture contained in the video segment; assume the key picture is a frame showing a target presentation and its corresponding identifier is identifier 1. During the playing of the target video, if the terminal detects identifier 1, it can determine that the current key picture is a frame of the target presentation and process the first video segment according to the third preset processing information corresponding to that frame. For example, playback of the first video segment is paused on the frame of the target presentation; or, further, that frame is enlarged.
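The mark-detection step described above can be sketched as a lookup against the current playback time. All names here (`mark_hit`, the `tolerance` parameter, the time value) are illustrative assumptions:

```python
def mark_hit(marks, current_time, tolerance=0.5):
    """Return the processing info of the first key-picture mark whose
    time point is within `tolerance` seconds of current_time, else None."""
    for t, processing in marks:
        if abs(current_time - t) <= tolerance:
            return processing
    return None

# Assumed: identifier 1 marks a presentation frame at t = 120 s.
marks = [(120.0, "pause_and_zoom")]
result = mark_hit(marks, 120.2)
```

A real terminal would evaluate this check on each playback-position update.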
In this optional embodiment, the server may determine the preset playing content information and the third preset processing information corresponding to each preset playing content information based on the pre-collected history information and the content of the target video.
The history information may include, but is not limited to, the user's operations during video playing, such as use of the double-speed function, bullet-screen settings, the locking function, mirrored playing, and the like. For example, the server may learn for which types of video, or which video content, the user uses the double-speed or mirror function. The server can record this operation information and, through statistical analysis, derive the criteria for dividing videos and the user's operation habits. On this basis, after acquiring the target video, the server can divide it according to its content and the history information, and determine the preset playing content information and the corresponding third preset processing information for each video segment.
It should be understood that the preset playing content information is not limited thereto, and may be determined according to actual situations, and the embodiment of the present application is not limited thereto.
In this optional embodiment, playing content information is preset for each video segment. The terminal can therefore process each video segment according to the content it is playing, and the playing of the target video can be controlled automatically without manual user operation, meeting the user's playing requirements for the target video and further improving the degree of intelligence of video playing.
In this embodiment of the application, optionally, the preset processing information is used to indicate at least one of the following:
1) performing pause processing on the video segment;
2) carrying out speed doubling processing on the video segment;
3) amplifying the video segment;
4) performing repeated playing processing on the video segment;
5) recording the video segment;
6) outputting prompt information for the video segment.
For 1), the preset processing information includes the pause processing on the video segment.
In a specific implementation, during the playing of the video segment, when the terminal detects that the real-time state information meets the preset state information, it can immediately pause the playing of the video segment.
In this way, when the terminal detects that the real-time state information meets the preset state information, it can pause the video segment automatically without manual user operation, ensuring that the user does not miss the segment's content. Once the condition that triggered the pause no longer holds, the video segment can resume from the paused time point, or from a point a preset duration before it, ensuring that specific content in the video segment is played in its entirety.
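The resume-with-rollback behavior described above amounts to a small calculation. The following is a sketch under assumed names (`resume_position`, `rollback`); clamping to the segment start is an added assumption to keep the result well-defined:

```python
def resume_position(paused_at, rollback, segment_start=0.0):
    """Resume point after a pause condition clears: the paused time
    minus a preset rollback duration, never before the segment start."""
    return max(segment_start, paused_at - rollback)
```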
For 2), the preset processing information includes speed doubling processing on the video segment. The speed doubling processing includes fast speed doubling processing, such as 1.25 speed, 1.5 speed, or 2 speed; slow speed multiple speed processing, such as 0.75 speed or 0.5 speed, may also be included.
For 3), the preset processing information includes enlargement processing of the video segment. The enlargement may magnify the picture of the currently playing video segment and continue playback with the enlarged picture, or it may magnify the currently paused picture. Further, optionally, a screenshot of the currently paused picture may also be taken.
For 4), the preset processing information of the video segment includes a repeat playing process of the video segment. The repeat playing process may be repeated once or multiple times.
For 5), the preset processing information of the video segment includes recording processing of the video segment. The recording process may be a recording process for the entire device interface including the playing interface of the video segment, or may be a recording process for only the playing interface of the video segment.
For 6), the preset processing information of the video segment comprises outputting prompt information to the video segment. The prompt information can be a text prompt, an animation prompt and the like displayed on a playing interface of the video segment, and can also be a voice prompt.
It is to be understood that the representation form of the preset processing information is not limited thereto, and may be determined according to actual situations, and this is not specifically limited in the embodiments of the present application.
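The six processing types above suggest a simple dispatch. This is an illustrative sketch only: the handler names and the `player` dictionary stand in for whatever real player API a terminal would use, and are not defined by the embodiments:

```python
def apply_processing(player, info):
    """Apply one of the six preset processing types to a toy `player`
    state dictionary. `info` carries the type and its parameters."""
    handlers = {
        "pause":  lambda p: p.update(state="paused"),
        "speed":  lambda p: p.update(rate=info.get("rate", 1.0)),
        "zoom":   lambda p: p.update(zoom=info.get("factor", 2.0)),
        "repeat": lambda p: p.update(position=info.get("start", 0.0)),
        "record": lambda p: p.update(recording=True),
        "prompt": lambda p: p.update(prompt=info.get("text", "")),
    }
    handlers[info["type"]](player)
    return player

player = apply_processing({}, {"type": "speed", "rate": 1.25})
```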
In this embodiment of the application, the server may provide the preset state information of the N video segments and the preset processing information corresponding to each preset state information to the terminal.
Specifically, optionally, after the preset state information of the N video segments and the preset processing information corresponding to the preset state information are set according to the content of the N video segments, the method further includes:
and sending the target video, the preset state information of the N video segments and the preset processing information corresponding to the preset state information to a terminal.
In this optional embodiment, the server may send the preset state information of the N video segments of the target video and the preset processing information corresponding to each preset state information to the terminal while issuing the target video to the terminal. Therefore, when the terminal plays the target video, the preset state information of each video segment and the preset processing information corresponding to each preset state information can be directly obtained locally, the terminal does not need to request the server for obtaining in real time, and the response time of the terminal for playing the target video is shortened.
Optionally, after the preset state information of the N video segments and the preset processing information corresponding to the preset state information are set according to the content of the N video segments, the method further includes:
sending the target video to a terminal;
and under the condition of receiving a request message sent by a terminal, responding to the request message, and sending preset state information of the N video segments and preset processing information corresponding to the preset state information to the terminal.
In this optional implementation, the server issues the target video to the terminal. When the target video needs to be played, the terminal sends a request message to the server to acquire the preset state information of the N video segments and the preset processing information corresponding to each. The terminal therefore does not need to store locally the preset state information and preset processing information for all videos; it requests them from the server only when a video needs to be played, reducing the occupation of the terminal's local storage.
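The on-demand variant above is essentially a fetch-on-first-use cache. A minimal sketch, assuming hypothetical names (`PresetCache`, the fetch callable simulating the request message):

```python
class PresetCache:
    """Request preset info from the server only when a video is about
    to be played; serve later lookups from the local cache."""
    def __init__(self, fetch):
        self._fetch = fetch   # callable standing in for the request message
        self._cache = {}

    def get(self, video_id):
        if video_id not in self._cache:
            self._cache[video_id] = self._fetch(video_id)
        return self._cache[video_id]

calls = []
def fake_server(video_id):
    """Toy stand-in for the server's response to a request message."""
    calls.append(video_id)
    return {"segments": 5}

cache = PresetCache(fake_server)
cache.get("v1")
cache.get("v1")   # second lookup is served locally, no new request
```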
According to an embodiment of the present application, another video processing method is provided.
Referring to fig. 2, fig. 2 is a flowchart of another video processing method provided by an embodiment of the present application. The method may be executed by a terminal and includes:
step 201, in the process of playing a first video segment of a target video, monitoring real-time state information in the playing process of the first video segment, wherein the real-time state information comprises real-time user state information;
step 202, processing the first video segment according to first preset processing information under the condition that it is detected that the real-time user state information of the first video segment meets the preset user state information of the first video segment;
the target video is divided into N video segments in advance, the N video segments are respectively preset with corresponding preset state information and preset processing information corresponding to the preset state information, the preset state information comprises preset user state information, the first preset processing information is preset processing information corresponding to the preset user state information, the N video segments comprise the first video segment, and N is greater than 1.
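Steps 201-202 reduce to a monitoring check per tick. The following sketch uses assumed names (`monitor_step` and the state strings) and simplifies "meets" to equality, which is only one possible matching rule:

```python
def monitor_step(real_time_user_state, preset_user_state, first_processing):
    """One monitoring tick of steps 201-202: if the real-time user state
    meets the segment's preset user state, return the first preset
    processing information to apply; otherwise return None."""
    if real_time_user_state == preset_user_state:
        return first_processing
    return None
```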
Optionally, before playing the target video, the method further includes:
and receiving the target video sent by a server, and preset state information of the N video segments and preset processing information corresponding to the preset state information.
Optionally, the real-time status information further includes real-time playing status information, the preset status information further includes preset playing status information, and the preset processing information further includes second preset processing information corresponding to the preset playing status information;
after monitoring the real-time status information in the playing process of the first video segment, the method further comprises:
and processing the first video segment according to the second preset processing information under the condition that the real-time playing state information of the first video segment is detected to meet the preset playing state information of the first video segment.
Optionally, the real-time status information further includes real-time playing status information, the preset status information further includes preset playing content information, and the preset processing information further includes third preset processing information corresponding to the preset playing content information;
after monitoring the real-time status information in the playing process of the first video segment, the method further comprises:
and processing the first video segment according to the third preset processing information under the condition that the real-time playing content information of the first video segment is detected to meet the preset playing content information of the first video segment.
Optionally, the preset processing information is used to indicate at least one of the following:
performing pause processing on the first video segment;
carrying out speed doubling processing on the first video segment;
amplifying the first video segment;
performing repeated playing processing on the first video segment;
recording the first video segment;
and displaying prompt information on the first video segment.
It should be noted that this embodiment is the terminal-side counterpart of the method embodiments above; reference may therefore be made to the relevant descriptions in those embodiments, and the same beneficial effects can be achieved. To avoid repetition, the details are not repeated here.
For the sake of understanding, a specific implementation of the embodiments of the present application is described below:
illustratively, the target video is an online primary-school language lesson. After receiving the lesson uploaded by the user, the server divides it according to its content into 5 video segments, as shown in fig. 3: viewing stage 1, dictation stage 1, a rest stage, dictation stage 2, and viewing stage 2. The server sets corresponding preset state information and preset processing information for each video segment. Wherein:
1) viewing stage 1 and viewing stage 2: the preset state information includes:
a. The user's pupils are not focused on the playing interface of the video segment: during the playing of viewing stage 1 or viewing stage 2, when it is detected that the user's pupils have not been focused on the playing interface of the video segment for longer than a preset duration, the corresponding preset processing information is to output a text prompt (e.g., "You seem distracted!") on the playing interface of the video segment;
b. The playing interface of the video segment is played in a split-screen manner alongside other display interfaces: during the playing of viewing stage 1 or viewing stage 2, when such split-screen playing is detected, the corresponding preset processing information is to output a text prompt (e.g., "Please concentrate!") on the playing interface of the video segment;
c. The user is not in the preset visible area: during the playing of viewing stage 1 or viewing stage 2, when it is detected that the user is not in the preset visible area, the corresponding preset processing information is to pause playing the video segment.
2) Dictation phase 1 and dictation phase 2: the preset state information includes:
a. The user's head is turned to the left or right: during the playing of dictation stage 1 or dictation stage 2, when it is detected that the user's head has been turned left or right beyond a preset threshold, the corresponding preset processing information is to output a text prompt (e.g., "Please concentrate!") on the playing interface of the video segment;
b. The user is not in the preset visible area: during the playing of dictation stage 1 or dictation stage 2, when it is detected that the user is not in the preset visible area, the corresponding preset processing information is to pause playing the video segment;
c. The video segment is played in the background of the terminal: during the playing of dictation stage 1 or dictation stage 2, when it is detected that the user has switched the video segment to background playing, the corresponding preset processing information is to pause playing the video segment.
3) Rest stage: no preset state information need be set, or only preset processing information may be set, including: displaying the remaining duration of the rest stage on the playing interface of the video segment.
The server sends the online-lesson video, the preset state information of the 5 video segments, and the preset processing information for each preset state information to the terminal. While playing the lesson video, the terminal can detect whether the real-time state information matches the preset state information of each video segment and process the video segments according to the corresponding preset processing information.
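The network-lesson example above can be encoded as per-segment configuration the server sends alongside the video. The field names, rule keys, and prompt strings below are illustrative assumptions, not the patent's actual data format:

```python
# Toy encoding of the 5-segment lesson configuration described above.
lesson_segments = [
    {"name": "viewing 1", "rules": {
        "eyes_off_screen": "prompt:You seem distracted!",
        "split_screen": "prompt:Please concentrate!",
        "user_absent": "pause"}},
    {"name": "dictation 1", "rules": {
        "head_turned": "prompt:Please concentrate!",
        "user_absent": "pause",
        "background_play": "pause"}},
    # Rest stage: no state rules, only a display of the remaining duration.
    {"name": "rest", "rules": {}, "show_remaining_time": True},
    {"name": "dictation 2", "rules": {
        "head_turned": "prompt:Please concentrate!",
        "user_absent": "pause",
        "background_play": "pause"}},
    {"name": "viewing 2", "rules": {
        "eyes_off_screen": "prompt:You seem distracted!",
        "split_screen": "prompt:Please concentrate!",
        "user_absent": "pause"}},
]
```

The terminal would consult the entry for the currently playing segment on each detected state change.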
The above-described embodiments of the present application have the following advantages or beneficial effects: the server can divide the target video into a plurality of video segments according to the content of the target video, and preset state information and processing information matched with the content of each video segment. Therefore, when the target video is played, the terminal can monitor the real-time state of each video segment in the playing process according to the preset state information and the preset processing information of the currently played video segment so as to automatically adjust the playing of the target video, the user does not need to perform manual operation on the playing of the target video, the intelligent playing control of the target video can be realized, and the intelligent degree of video playing is improved.
The application also provides a video processing device.
As shown in fig. 4, the video processing apparatus 400 includes:
an obtaining module 401, configured to obtain a target video;
a dividing module 402, configured to divide the target video into N video segments according to the content of the target video, where N is greater than 1;
a setting module 403, configured to set preset state information of the N video segments and preset processing information corresponding to the preset state information according to the content of the N video segments;
the preset state information comprises preset user state information, and first preset processing information corresponding to the preset user state information is used for indicating that in the video segment playing process, when the terminal detects that the real-time user state information meets the preset user state information, the video segment of the target video is processed according to the first preset processing information.
Optionally, the video processing apparatus 400 further includes:
a sending module, configured to send the target video, the preset state information of the N video segments, and the preset processing information corresponding to the preset state information to a terminal.
Optionally, the preset state information further includes preset playing state information, and second preset processing information corresponding to the preset playing state information is used for indicating that, in the playing process of the video segment, when the terminal detects that the real-time playing state information of the video segment at the terminal meets the preset playing state information, the video segment of the target video is processed according to the second preset processing information.
Optionally, the preset state information further includes preset playing content information, and third preset processing information corresponding to the preset playing content information is used for indicating that, when the terminal detects that the content of the video segment meets the preset playing content information, the video segment of the target video is processed according to the third preset processing information.
Optionally, the preset processing information is used to indicate at least one of the following:
performing pause processing on the video segment;
carrying out speed doubling processing on the video segment;
amplifying the video segment;
performing repeated playing processing on the video segment;
recording the video segment;
and outputting prompt information to the video segment.
In the foregoing embodiment of the present application, the video processing apparatus 400 may implement each process implemented in the method embodiment shown in fig. 1, and may achieve the same beneficial effects, and for avoiding repetition, the details are not repeated here.
The application also provides another video processing device.
As shown in fig. 5, the video processing apparatus 500 includes:
the monitoring module 501 is configured to monitor real-time state information in a playing process of a first video segment of a target video, where the real-time state information includes real-time user state information;
a first processing module 502, configured to process the first video segment according to first preset processing information when it is detected that the real-time user state information of the first video segment meets preset user state information of the first video segment;
the target video is divided into N video segments in advance, the N video segments are respectively preset with corresponding preset state information and preset processing information corresponding to the preset state information, the preset state information comprises preset user state information, the first preset processing information is preset processing information corresponding to the preset user state information, the N video segments comprise the first video segment, and N is greater than 1.
Optionally, the video processing apparatus 500 further includes:
a receiving module, configured to receive, from the server, the target video, the preset state information of the N video segments, and the preset processing information corresponding to the preset state information.
Optionally, the real-time status information further includes real-time playing status information, the preset status information further includes preset playing status information, and the preset processing information further includes second preset processing information corresponding to the preset playing status information;
the video processing apparatus 500 further includes:
a second processing module, configured to process the first video segment according to the second preset processing information when it is detected that the real-time playing state information of the first video segment meets the preset playing state information of the first video segment.
Optionally, the real-time status information further includes real-time playing status information, the preset status information further includes preset playing content information, and the preset processing information further includes third preset processing information corresponding to the preset playing content information;
the video processing apparatus 500 further includes:
a third processing module, configured to process the first video segment according to the third preset processing information when it is detected that the real-time playing content information of the first video segment meets the preset playing content information of the first video segment.
Optionally, the preset processing information is used to indicate at least one of the following:
performing pause processing on the first video segment;
carrying out speed doubling processing on the first video segment;
amplifying the first video segment;
performing repeated playing processing on the first video segment;
recording the first video segment;
and displaying prompt information on the first video segment.
In the foregoing embodiment of the present application, the video processing apparatus 500 may implement each process implemented in the method embodiment shown in fig. 2, and may achieve the same beneficial effects, and for avoiding repetition, the details are not repeated here.
There is also provided, in accordance with an embodiment of the present application, an electronic device, a readable storage medium, and a computer program product.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in FIG. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be any of a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 601 performs the respective methods and processes described above, such as a video processing method. For example, in some embodiments, the video processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the video processing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the video processing method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a terminal and a server. The terminal and the server are generally remote from each other and typically interact through a communication network. The relationship between terminal and server arises by virtue of computer programs that run on the respective computers and have a terminal-server (client-server) relationship with each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (23)
1. A video processing method, performed by a server, comprising:
acquiring a target video;
dividing the target video into N video segments according to the content of the target video, wherein N is greater than 1;
setting preset state information of the N video segments and preset processing information corresponding to the preset state information according to the contents of the N video segments;
the preset state information comprises preset user state information, and first preset processing information corresponding to the preset user state information is used for indicating: in the playing process of the video segment, when the terminal detects that the real-time user state information meets the preset user state information, the video segment of the target video is processed according to the first preset processing information.
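The server-side method of claim 1 — acquire a target video, divide it into N > 1 segments by content, and attach per-segment preset state information and corresponding preset processing information — can be roughly sketched as below. The segment boundaries, state labels, and processing actions are hypothetical examples, and the content-based division is assumed to have already produced the boundaries.

```python
# Illustrative sketch of the server-side setup of claim 1. All concrete
# values (boundaries, state labels, actions) are hypothetical examples.
def divide_and_configure(video_id, boundaries, presets):
    """boundaries: list of (start_s, end_s) pairs; presets: per-segment dicts
    holding preset state information and its corresponding processing info."""
    if len(boundaries) <= 1:
        raise ValueError("the target video must be divided into N > 1 segments")
    segments = []
    for i, ((start, end), preset) in enumerate(zip(boundaries, presets)):
        segments.append({
            "video_id": video_id,
            "index": i,
            "start": start,
            "end": end,
            # preset user state (e.g. gaze away) -> first preset processing
            "preset_state": preset["state"],
            "preset_processing": preset["processing"],
        })
    return segments

segments = divide_and_configure(
    "target_video_001",
    boundaries=[(0, 30), (30, 90)],
    presets=[
        {"state": {"user": "looking_away"}, "processing": ["pause"]},
        {"state": {"user": "confused"}, "processing": ["replay", "prompt"]},
    ],
)
print(len(segments))  # → 2
```

Per claim 2, the resulting `segments` structure (together with the target video) would then be sent to the terminal.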
2. The method according to claim 1, wherein after said setting preset state information of said N video segments and preset processing information corresponding to said preset state information according to contents of said N video segments, said method further comprises:
sending the target video, the preset state information of the N video segments, and the preset processing information corresponding to the preset state information to a terminal.
3. The method according to claim 1, wherein the preset state information further includes preset playing state information, and second preset processing information corresponding to the preset playing state information is used for indicating: in the playing process of the video segment, when the terminal detects that the real-time playing state information of the video segment at the terminal meets the preset playing state information, the video segment of the target video is processed according to the second preset processing information.
4. The method according to claim 1, wherein the preset state information further includes preset playback content information, and third preset processing information corresponding to the preset playback content information is used to indicate: and when the terminal detects that the content of the video segment meets the preset playing content information, processing the video segment of the target video according to the third preset processing information.
5. The method according to any one of claims 1 to 4, wherein the preset processing information is used for indicating at least one of:
performing pause processing on the video segment;
performing multiple-speed playback processing on the video segment;
amplifying the video segment;
performing repeated playing processing on the video segment;
recording the video segment;
and outputting prompt information on the video segment.
6. A video processing method, performed by a terminal, comprising:
monitoring real-time state information of a first video segment of a target video while the first video segment is being played, wherein the real-time state information comprises real-time user state information;
processing the first video segment according to first preset processing information when it is detected that the real-time user state information of the first video segment meets the preset user state information of the first video segment;
the target video is divided into N video segments in advance, the N video segments are respectively preset with corresponding preset state information and preset processing information corresponding to the preset state information, the preset state information comprises preset user state information, the first preset processing information is preset processing information corresponding to the preset user state information, the N video segments comprise the first video segment, and N is greater than 1.
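The terminal-side behaviour of claim 6 — monitor real-time user state while the first video segment plays and, when it meets that segment's preset user state, apply the first preset processing — can be sketched as follows. The polling model, the state labels, and the exact-match rule are illustrative assumptions; a real terminal might derive user state from camera or sensor input rather than from a precomputed list.

```python
# Hypothetical sketch of the terminal-side monitoring of claim 6. The
# state labels and matching rule are illustrative assumptions.
def monitor_segment(segment, state_samples):
    """Return the preset processing to apply for the first state sample
    that meets the segment's preset user state, or [] if none does."""
    preset = segment["preset_state"]["user"]
    for sample in state_samples:          # real-time user state stream
        if sample == preset:              # real-time state meets preset state
            return segment["preset_processing"]
    return []                             # no preset condition met

segment = {"preset_state": {"user": "looking_away"},
           "preset_processing": ["pause"]}
actions = monitor_segment(segment, ["watching", "watching", "looking_away"])
print(actions)  # → ['pause']
```

Claims 8 and 9 extend the same pattern to real-time playing state and real-time playing content, each matched against its own preset information and mapped to second and third preset processing information respectively.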
7. The method of claim 6, wherein prior to playing the target video, the method further comprises:
receiving, from a server, the target video, the preset state information of the N video segments, and the preset processing information corresponding to the preset state information.
8. The method according to claim 6, wherein the real-time status information further includes real-time playing status information, the preset status information further includes preset playing status information, and the preset processing information further includes second preset processing information corresponding to the preset playing status information;
after monitoring the real-time status information in the playing process of the first video segment, the method further comprises:
processing the first video segment according to the second preset processing information when it is detected that the real-time playing state information of the first video segment meets the preset playing state information of the first video segment.
9. The method according to claim 6, wherein the real-time status information further includes real-time playing content information, the preset status information further includes preset playing content information, and the preset processing information further includes third preset processing information corresponding to the preset playing content information;
after monitoring the real-time status information in the playing process of the first video segment, the method further comprises:
processing the first video segment according to the third preset processing information when it is detected that the real-time playing content information of the first video segment meets the preset playing content information of the first video segment.
10. The method according to any one of claims 6 to 9, wherein the preset processing information is used for indicating at least one of:
performing pause processing on the first video segment;
performing multiple-speed playback processing on the first video segment;
amplifying the first video segment;
performing repeated playing processing on the first video segment;
recording the first video segment;
and outputting prompt information on the first video segment.
11. A video processing apparatus comprising:
an acquisition module, configured to acquire a target video;
a dividing module, configured to divide the target video into N video segments according to the content of the target video, wherein N is greater than 1; and
a setting module, configured to set preset state information of the N video segments and preset processing information corresponding to the preset state information according to the contents of the N video segments;
the preset state information comprises preset user state information, and first preset processing information corresponding to the preset user state information is used for indicating: in the playing process of the video segment, when the terminal detects that the real-time user state information meets the preset user state information, the video segment of the target video is processed according to the first preset processing information.
12. The apparatus of claim 11, wherein the apparatus further comprises:
a sending module, configured to send the target video, the preset state information of the N video segments, and the preset processing information corresponding to the preset state information to a terminal.
13. The apparatus according to claim 11, wherein the preset state information further includes preset playing state information, and second preset processing information corresponding to the preset playing state information is used to indicate: in the playing process of the video segment, when the terminal detects that the real-time playing state information of the video segment at the terminal meets the preset playing state information, the video segment of the target video is processed according to the second preset processing information.
14. The apparatus according to claim 11, wherein the preset state information further includes preset playback content information, and third preset processing information corresponding to the preset playback content information is used to indicate: and when the terminal detects that the content of the video segment meets the preset playing content information, processing the video segment of the target video according to the third preset processing information.
15. The apparatus of claim 12, wherein the preset processing information is used to indicate at least one of:
performing pause processing on the video segment;
performing multiple-speed playback processing on the video segment;
amplifying the video segment;
performing repeated playing processing on the video segment;
recording the video segment;
and outputting prompt information on the video segment.
16. A video playback apparatus comprising:
a monitoring module, configured to monitor real-time state information of a first video segment of a target video while the first video segment is being played, wherein the real-time state information comprises real-time user state information;
a first processing module, configured to process the first video segment according to first preset processing information when it is detected that the real-time user state information of the first video segment meets the preset user state information of the first video segment;
the target video is divided into N video segments in advance, the N video segments are respectively preset with corresponding preset state information and preset processing information corresponding to the preset state information, the preset state information comprises preset user state information, the first preset processing information is preset processing information corresponding to the preset user state information, the N video segments comprise the first video segment, and N is greater than 1.
17. The apparatus of claim 16, wherein the apparatus further comprises:
a receiving module, configured to receive, from the server, the target video, the preset state information of the N video segments, and the preset processing information corresponding to the preset state information.
18. The apparatus according to claim 16, wherein the real-time status information further includes real-time playing status information, the preset status information further includes preset playing status information, and the preset processing information further includes second preset processing information corresponding to the preset playing status information;
the device further comprises:
a second processing module, configured to process the first video segment according to the second preset processing information when it is detected that the real-time playing state information of the first video segment meets the preset playing state information of the first video segment.
19. The apparatus according to claim 16, wherein the real-time status information further includes real-time playing content information, the preset status information further includes preset playing content information, and the preset processing information further includes third preset processing information corresponding to the preset playing content information;
the device further comprises:
a third processing module, configured to process the first video segment according to the third preset processing information when it is detected that the real-time playing content information of the first video segment meets the preset playing content information of the first video segment.
20. The apparatus of claim 16, wherein the preset processing information is used to indicate at least one of:
performing pause processing on the first video segment;
performing multiple-speed playback processing on the first video segment;
amplifying the first video segment;
performing repeated playing processing on the first video segment;
recording the first video segment;
and displaying prompt information on the first video segment.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011623251.7A CN112866809B (en) | 2020-12-31 | 2020-12-31 | Video processing method, device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112866809A true CN112866809A (en) | 2021-05-28 |
CN112866809B CN112866809B (en) | 2023-06-23 |
Family
ID=75999267
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011623251.7A Active CN112866809B (en) | 2020-12-31 | 2020-12-31 | Video processing method, device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112866809B (en) |
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070067795A1 (en) * | 2005-09-20 | 2007-03-22 | Fuji Xerox Co., Ltd. | Video playing system, video playing apparatus, control method for playing video, storage medium storing program for playing video |
CN1937749A (en) * | 2005-09-20 | 2007-03-28 | 富士施乐株式会社 | Video playing system, video playing apparatus, control method and storage medium for playing video |
US20150208125A1 (en) * | 2014-01-22 | 2015-07-23 | Lenovo (Singapore) Pte. Ltd. | Automated video content display control using eye detection |
CN104750358A (en) * | 2015-03-09 | 2015-07-01 | 深圳市艾优尼科技有限公司 | Terminal |
CN104750359A (en) * | 2015-03-09 | 2015-07-01 | 深圳市艾优尼科技有限公司 | Control method for dynamic information play window |
US20190268669A1 (en) * | 2016-11-11 | 2019-08-29 | Alibaba Group Holding Limited | Playing Control Method and Apparatus |
CN107613368A (en) * | 2017-09-26 | 2018-01-19 | 珠海市魅族科技有限公司 | Video pause method and apparatus, computer installation and computer-readable recording medium |
CN107484021A (en) * | 2017-09-27 | 2017-12-15 | 广东小天才科技有限公司 | Video playing method, system and terminal equipment |
CN108259988A (en) * | 2017-12-26 | 2018-07-06 | 努比亚技术有限公司 | A kind of video playing control method, terminal and computer readable storage medium |
CN109195015A (en) * | 2018-08-21 | 2019-01-11 | 北京奇艺世纪科技有限公司 | A kind of video playing control method and device |
CN110113639A (en) * | 2019-05-14 | 2019-08-09 | 北京儒博科技有限公司 | Video playing control method, device, terminal, server and storage medium |
CN110730387A (en) * | 2019-11-13 | 2020-01-24 | 腾讯科技(深圳)有限公司 | Video playing control method and device, storage medium and electronic device |
CN111327958A (en) * | 2020-02-28 | 2020-06-23 | 北京百度网讯科技有限公司 | Video playing method and device, electronic equipment and storage medium |
CN111615002A (en) * | 2020-04-30 | 2020-09-01 | 腾讯科技(深圳)有限公司 | Video background playing control method, device and system and electronic equipment |
CN111615003A (en) * | 2020-05-29 | 2020-09-01 | 腾讯科技(深圳)有限公司 | Video playing control method, device, equipment and storage medium |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114007122A (en) * | 2021-10-13 | 2022-02-01 | 深圳Tcl新技术有限公司 | Video playing method and device, electronic equipment and storage medium |
CN114007122B (en) * | 2021-10-13 | 2024-03-15 | 深圳Tcl新技术有限公司 | Video playing method and device, electronic equipment and storage medium |
CN114374878A (en) * | 2021-12-28 | 2022-04-19 | 苏州金螳螂文化发展股份有限公司 | Interactive display system based on action recognition |
CN114845153A (en) * | 2022-03-31 | 2022-08-02 | 广州方硅信息技术有限公司 | Display processing method of live interface, electronic terminal and storage medium |
CN115802089A (en) * | 2022-10-08 | 2023-03-14 | 北京达佳互联信息技术有限公司 | Page interaction method and device, electronic equipment and storage medium |
CN117560538A (en) * | 2024-01-12 | 2024-02-13 | 江西微博科技有限公司 | Service method and device of interactive voice video based on cloud platform |
CN117560538B (en) * | 2024-01-12 | 2024-03-22 | 江西微博科技有限公司 | Service method of interactive voice video based on cloud platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112866809B (en) | Video processing method, device, electronic equipment and readable storage medium | |
US9563983B2 (en) | Filtering information within augmented reality overlays | |
US9557951B2 (en) | Filtering information within augmented reality overlays | |
CN112738418B (en) | Video acquisition method and device and electronic equipment | |
CN113325954B (en) | Method, apparatus, device and medium for processing virtual object | |
US20200213678A1 (en) | Wager information based prioritized live event display system | |
CN113596488B (en) | Live broadcast room display method and device, electronic equipment and storage medium | |
CN114881708A (en) | Method, device, system, equipment and storage medium for monitoring delivered content | |
US11889127B2 (en) | Live video interaction method and apparatus, and computer device | |
CN112929728A (en) | Video rendering method, device and system, electronic equipment and storage medium | |
CN113691864A (en) | Video clipping method, video clipping device, electronic equipment and readable storage medium | |
CN113473086A (en) | Video playing method and device, electronic equipment and intelligent high-speed large screen | |
CN111970560B (en) | Video acquisition method and device, electronic equipment and storage medium | |
CN114168793A (en) | Anchor display method, device, equipment and storage medium | |
CN114125498A (en) | Video data processing method, device, equipment and storage medium | |
CN113873318A (en) | Video playing method, device, equipment and storage medium | |
CN106878773B (en) | Electronic device, video processing method and apparatus, and storage medium | |
US11209902B2 (en) | Controlling input focus based on eye gaze | |
US10386933B2 (en) | Controlling navigation of a visual aid during a presentation | |
CN113784217A (en) | Video playing method, device, equipment and storage medium | |
CN114363704B (en) | Video playing method, device, equipment and storage medium | |
US11201683B2 (en) | Monitoring video broadcasts | |
CN113873323B (en) | Video playing method, device, electronic equipment and medium | |
CN114268847A (en) | Video playing method and device, electronic equipment and storage medium | |
CN109729410B (en) | Live broadcast room interactive event processing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||