CN112084370A - Video processing method and device and electronic equipment - Google Patents

Video processing method and device and electronic equipment

Info

Publication number
CN112084370A
CN112084370A
Authority
CN
China
Prior art keywords
target
input
key information
video
identifier
Prior art date
Legal status
Pending
Application number
CN202010949290.XA
Other languages
Chinese (zh)
Inventor
彭述功
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010949290.XA priority Critical patent/CN112084370A/en
Publication of CN112084370A publication Critical patent/CN112084370A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval of video data
    • G06F16/71: Indexing; Data structures therefor; Storage structures
    • G06F16/73: Querying
    • G06F16/738: Presentation of query results
    • G06F16/75: Clustering; Classification
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783: Retrieval using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a video processing method and an electronic device, belonging to the field of electronic technology, and aims to solve the problem that when a user wants to find certain content, the user can only browse the many short videos in a favorites list one by one, making the operation cumbersome. The method comprises the following steps: acquiring feature information of a target video; determining a target type of the target video according to the feature information of the target video; acquiring target key information associated with the target type from the target video; and adding the target key information to a target page. The video processing method is applied to the electronic device.

Description

Video processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of electronics, and particularly relates to a video processing method and device and electronic equipment.
Background
With the rapid development of internet short video, vast numbers of short videos emerge continuously. Their content is rich and varied: some short videos express music content, some express movie content, and so on.
Typically, a user double-taps a short video of interest so that the short video is moved to the user's favorites list. Because the volume a user browses each day is huge, the short videos in the favorites list keep accumulating. When the user wants to find certain content, the only option is to browse the many short videos in the favorites list one by one, which makes the operation cumbersome.
Therefore, in the process of implementing the present application, the inventors found that the prior art has at least the following problem: when a user wants to find certain content, the user can only browse the many short videos in the favorites list one by one, which makes the operation cumbersome.
Disclosure of Invention
The embodiments of the present application aim to provide a video processing method that can solve the problem that, when a user wants to find certain content, the user can only browse the many short videos in a favorites list one by one, making the operation cumbersome.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video processing method, where the method includes: acquiring characteristic information of a target video; determining a target type of the target video according to the characteristic information of the target video; acquiring target key information associated with the target type in the target video; and adding the target key information to a target page.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including: the characteristic acquisition module is used for acquiring characteristic information of the target video; the type determining module is used for determining the target type of the target video according to the characteristic information of the target video; the information acquisition module is used for acquiring target key information associated with the target type in the target video; and the information adding module is used for adding the target key information to a target page.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In this way, in the embodiments of the application, the video the user is currently browsing can serve as the target video: its feature information is acquired, and the target video is classified based on that feature information to determine the target type it belongs to. After the target type is determined, key information associated with the target type is extracted from the content of the target video and added to the target page as the target key information of the embodiment. Compared with the prior art, on one hand, the key information is collected automatically while the user browses the video, so the user does not need to stop browsing in order to collect it manually, which simplifies operation; on the other hand, the target page presents the key information, so the user can query the key information quickly instead of browsing a large number of videos, which likewise simplifies operation.
Drawings
Fig. 1 is a first flowchart of a video processing method according to an embodiment of the present application;
Fig. 2 is a second flowchart of a video processing method according to an embodiment of the present application;
Fig. 3 is a third flowchart of a video processing method according to an embodiment of the present application;
Fig. 4 is a fourth flowchart of a video processing method according to an embodiment of the present application;
Fig. 5 is a first operation diagram of a video processing method according to an embodiment of the present application;
Fig. 6 is a second operation diagram of a video processing method according to an embodiment of the present application;
Fig. 7 is a fifth flowchart of a video processing method according to an embodiment of the present application;
Fig. 8 is a third operation diagram of a video processing method according to an embodiment of the present application;
Fig. 9 is a sixth flowchart of a video processing method according to an embodiment of the present application;
Fig. 10 is a seventh flowchart of a video processing method according to an embodiment of the present application;
Fig. 11 is a fourth operation diagram of a video processing method according to an embodiment of the present application;
Fig. 12 is an eighth flowchart of a video processing method according to an embodiment of the present application;
Fig. 13 is a fifth operation diagram of a video processing method according to an embodiment of the present application;
Fig. 14 is a block diagram of a video processing apparatus according to an embodiment of the present application;
Fig. 15 is a first hardware configuration diagram of an electronic device according to an embodiment of the present application;
Fig. 16 is a second hardware configuration diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The video processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 shows a flowchart of a video processing method according to an embodiment of the present application, including:
step S1: and acquiring the characteristic information of the target video.
Optionally, the target video is a short video.
[Table 1 is an image in the original filing. Per the description of the embodiment, it lists example feature information by video type: column 1, music-type features (e.g., music name); column 2, movie-type features; column 3, food-type features; column 4, travel-type features.]
TABLE 1
The feature information includes, but is not limited to, the items listed in Table 1.
Step S2: and determining the target type of the target video according to the characteristic information of the target video.
Different feature information corresponds to different types.
Referring to Table 1, a video containing a music name, for example, belongs to the music type.
Step S3: and acquiring target key information associated with the target type in the target video.
For example, in the case where the target video belongs to a music genre, a music name in the target video is acquired as the target key information.
Step S4: and adding the target key information to the target page.
Optionally, the target page is a browsing list generated for the new function of the embodiment. When a user browses videos, information related to the types can be automatically extracted from the videos according to the types of the videos and added to a browsing list.
In this step, the user can quickly locate specific information in the target page without having to page through the video.
In this way, in the embodiments of the application, the video the user is currently browsing can serve as the target video: its feature information is acquired, and the target video is classified based on that feature information to determine the target type it belongs to. After the target type is determined, key information associated with the target type is extracted from the content of the target video and added to the target page as the target key information of the embodiment. Compared with the prior art, on one hand, the key information is collected automatically while the user browses the video, so the user does not need to stop browsing in order to collect it manually, which simplifies operation; on the other hand, the target page presents the key information, so the user can query the key information quickly instead of browsing a large number of videos, which likewise simplifies operation.
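As a rough illustration of steps S1 to S4, the following Python sketch strings the four steps together. The feature names, the rule-based stand-in for the classifier (the disclosure itself uses a GBDT, described later), and the page structure are all illustrative assumptions, not part of the filing.

```python
# Sketch of the S1-S4 flow: feature acquisition, type determination,
# key-information extraction, and addition to the target page.
from dataclasses import dataclass

@dataclass
class Video:
    features: dict  # S1: feature information, e.g. {"music_name": "..."}

def classify(video: Video) -> str:
    # S2: rule-based stand-in for the GBDT classifier of the disclosure
    if "music_name" in video.features:
        return "music"
    if "movie_name" in video.features:
        return "movie"
    if "store_address" in video.features:
        return "food"
    return "travel"

# Hypothetical mapping from type to the key fields extracted for it
KEY_FIELDS = {"music": ["music_name"], "movie": ["movie_name"],
              "food": ["store_address"], "travel": ["place_name"]}

def process(video: Video, target_page: dict) -> None:
    target_type = classify(video)                          # S2
    keys = {f: video.features[f] for f in KEY_FIELDS[target_type]
            if f in video.features}                        # S3
    target_page.setdefault(target_type, []).append(keys)   # S4

page = {}
process(Video(features={"music_name": "Song A"}), page)
print(page)  # {'music': [{'music_name': 'Song A'}]}
```

The target page here is just a dict keyed by type; a real implementation would persist it and render it as the browsing list described above.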
On the basis of the embodiment shown in fig. 1, fig. 2 shows a flowchart of a video processing method according to another embodiment of the present application, and step S1 includes at least one of the following steps:
step S101: and acquiring the characteristic information of the target video under the condition of receiving the preset input of the user to the target video. Wherein the preset input comprises at least one of praise, comment, collection and attention.
Optionally, the preset input includes a touch input performed by a user on a screen and an idle input, and is not limited to a click input, a slide input, and the like; the preset input also includes an input of a physical key on the device by a user, and is not limited to an input such as pressing. Further, the preset input includes one or more inputs, wherein the plurality of inputs may be continuous or intermittent.
The preset inputs are: and the input mode is set in advance so that a viewer can input the target video conveniently.
Step S102: and acquiring the characteristic information of the target video under the condition that the playing time of the target video is detected to be greater than a preset threshold value.
In this embodiment, in order to keep the information collected in the target page closely tied to user behavior, so that it matches the user's preferences, key information in a video can be actively acquired and collected in two cases: on one hand, when the user, while browsing the video, applies at least one input such as a like, comment, favorite, or follow to it; on the other hand, when the time the user stays on the video exceeds a certain threshold.
The preset threshold is a threshold set in advance; optionally, it is 4/5 of the video duration.
In this embodiment, the data about the user's browsing of the video, including the user's inputs, browsing duration, and the like, may be reported to the back end through big-data event tracking (instrumentation points) for collection.
In this embodiment, on the basis of automatically acquiring key information in videos, key information is acquired only from videos the user is interested in, so the key information in the target page is more relevant to the user and better meets the user's needs.
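The two collection triggers of steps S101 and S102 can be sketched as follows. The 4/5 ratio comes from the embodiment; the input names and the function shape are assumptions.

```python
# Collection is triggered either by a preset input (like/comment/
# favorite/follow) or by play time exceeding 4/5 of the duration.
PRESET_INPUTS = {"like", "comment", "favorite", "follow"}

def should_collect(inputs: set, play_time: float, duration: float,
                   threshold_ratio: float = 4 / 5) -> bool:
    if inputs & PRESET_INPUTS:       # any preset input received
        return True
    return play_time > threshold_ratio * duration

print(should_collect({"like"}, 2.0, 30.0))   # True: preset input received
print(should_collect(set(), 25.0, 30.0))     # True: 25 > 24 (4/5 of 30)
print(should_collect(set(), 10.0, 30.0))     # False: neither trigger fires
```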
On the basis of the embodiment shown in fig. 1, fig. 3 shows a flowchart of a video processing method according to another embodiment of the present application, and step S2 includes:
step S201: and respectively calculating probability values of the target videos belonging to the types according to the characteristic information of the target videos.
Step S202: and determining that the target video belongs to at least one of the types according to the probability value.
Wherein, each type at least comprises a music type, a family type, a travel type, a food type and the like.
In this embodiment, features may be extracted through a machine learning algorithm; specifically, they are selected through the Gradient Boosting Decision Tree (GBDT) algorithm, and the features adopted by the GBDT include, but are not limited to, those listed in Table 1. In Table 1, the first column corresponds to music-type features, the second column to movie-type features, the third column to food-type features, and the fourth column to travel-type features.
Specifically, the GBDT model is trained offline on manually labeled samples. After the modeling features are selected, training samples must be collected; this embodiment includes, but is not limited to, the following sources: commercially purchased short-video content, short-video content uploaded by users themselves, and open-source video collections, all annotated manually under supervision. Offline training is then performed on the sample feature sets together with the manual labels.
The machine learning algorithm used for offline training of the model is GBDT, but is not limited thereto, and the GBDT parameters used in this embodiment are as follows, without limitation: the log-likelihood is used as the loss function, minimum mean squared error (MMSE) is used as the decision-tree feature-splitting criterion, and the tree depth, number of trees, and learning factor of the GBDT are all configurable.
This embodiment uses four video types: music, movie, food, and travel. A multi-class GBDT model is therefore trained offline. The core idea is to train four learners with GBDT: each weak learner is fitted over multiple decision trees, and the result is four strong classifiers, one per type. The probabilities of the four types can be obtained from the four trained learners, and each decision outputs an identity (ID) value by taking the type whose probability is the largest of the four.
Wherein the probability of belonging to class c is calculated as:
P_c(x) = exp(G_c(x)) / ( exp(G_1(x)) + exp(G_2(x)) + exp(G_3(x)) + exp(G_4(x)) )
where G_c(x), the class-c learner, is the GBDT to be trained and is composed of M weak learners:
G_c(x) = sum_{m=1}^{M} g_m(x), where g_m(x) is the value of the leaf node of the m-th decision tree to which x belongs.
It can be seen from the formulas above that each final learner is obtained by superposing, onto the output of the previous trees, a new tree fitted to the residual R of the current GBDT. After the final learners G_c(x) are obtained, the final probability distribution over the type IDs follows from the first formula, where the music type is ID 1, the movie type is ID 2, the food type is ID 3, and the travel type is ID 4.
Wherein c = 1, 2, 3, or 4.
When a video is classified, the feature information reported by the devices selects the final leaf nodes in the M decision trees in sequence, so the learner value G_c(x) of the sample to be predicted can be computed; substituting these values into the probability formula then yields the four type probability values.
Optionally, the type corresponding to the largest probability value obtained is taken as the finally determined video type.
Further, for music videos, the song name, song link, and the like in the video are extracted as key information; for movie videos, the movie name, movie link, and the like; for food videos, the address of the food store and the like.
In this embodiment, a method for classifying videos is provided. First, four video types are derived from a large amount of data; more types are possible, and the video types can be customized by the user. Then, using the feature information in a video, the probability that the video belongs to each type is computed, and the type corresponding to the largest probability is determined to be the type the video belongs to. Determining a video's type from its feature information in this way makes the classification more accurate.
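A minimal sketch of the probability step: each class score G_c(x) is the sum of the leaf values of its M trees, and a softmax normalizes the four scores into probabilities. The leaf values below are invented for illustration; a real system would use values learned by the GBDT training described above.

```python
# Multi-class probability: G_c(x) = sum of tree leaf values,
# P(c|x) = exp(G_c(x)) / sum_k exp(G_k(x)).
import math

def learner_score(tree_leaf_values):
    # G_c(x): sum the leaf values x falls into across the M trees
    return sum(tree_leaf_values)

def class_probabilities(scores):
    # Softmax over the per-class learner scores
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# IDs: 1 = music, 2 = movie, 3 = food, 4 = travel (made-up leaf values)
scores = [learner_score(vals) for vals in
          ([0.9, 0.4], [0.1, 0.0], [-0.2, 0.1], [0.0, -0.3])]
probs = class_probabilities(scores)
predicted_id = probs.index(max(probs)) + 1
print(predicted_id)  # 1 (the music learner has the largest score)
```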
On the basis of the embodiment shown in fig. 1, fig. 4 shows a flowchart of a video processing method according to another embodiment of the present application, and after step S4, the method further includes:
step S5: a first input to a first identification is received. The first identification is used for indicating a target page.
Optionally, the first input includes a touch input performed by a user on a screen and a blank input, and is not limited to a click, a slide and the like; the first input also includes user input to physical keys on the device, not limited to press or the like. Furthermore, the first input includes one or more inputs, wherein the plurality of inputs may be continuous or intermittent.
Optionally, the first identifier includes an icon or the like.
Referring to fig. 5, in an application scenario, for example, in an interface of video playing, a list icon (indicated by a finger in the figure) may be added on the right side, and a user clicks the list icon.
Step S6: at least one second identifier is displayed in response to the first input. Wherein the second identifier is used for indicating the type.
In this application scenario, the user clicks the list icon, and at least one type is displayed in the target page, each type corresponding to a second identifier.
The target page may display only the type identifiers the user follows. For example, the target page displays four sections of content: music, movie, food, and travel.
Step S7: a second input is received for a second identification corresponding to the target type.
Optionally, the second input includes a touch input performed by the user on the screen and a blank input, and is not limited to a click, a slide and the like; the second input also includes user input to physical keys on the device, not limited to press or the like. Also, the second input includes one or more inputs, wherein the plurality of inputs may be continuous or intermittent.
In this step, taking the target type as an example, the user clicks the second identifier corresponding to the target type.
Step S8: and responding to the second input, and displaying key information corresponding to the target type. Wherein the key information includes target key information.
Referring to fig. 6, the user may select the second identifier corresponding to the different type, thereby displaying key information corresponding to the type.
Taking the target type as an example, the user clicks the second identifier corresponding to the target type, and displays the key information corresponding to the target type. Since the target key information is collected in the target type, the displayed key information includes the target key information of the embodiment of the present application.
Optionally, when the key information is displayed, its source address is annotated, for example, a link to the source video. In this way, the user can jump to the original video for viewing, which helps the user learn more content related to the key information.
In this embodiment, the content in the target page is displayed in a classified manner according to each type, so that information classification management is realized, and a user can quickly view information based on classification, thereby further simplifying user operation.
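The source-address annotation described above can be sketched as follows; the entry fields and the rendering format are illustrative assumptions, not part of the disclosure.

```python
# A target-page entry keeps the source video link so the user can
# jump back to the original video from the displayed key information.
from dataclasses import dataclass

@dataclass
class KeyInfoEntry:
    info: str          # e.g. a song title or store address
    source_url: str    # link back to the video the info came from
    marked: bool = False

def render_type_page(entries):
    # One line per entry: the key information plus its source link
    return [f"{e.info} (from {e.source_url})" for e in entries]

music_entries = [KeyInfoEntry("Song A", "https://example.com/v/1")]
print(render_type_page(music_entries))
# ['Song A (from https://example.com/v/1)']
```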
On the basis of the embodiment shown in fig. 4, fig. 7 shows a flowchart of a video processing method according to another embodiment of the present application, and after step S7, the method further includes:
step S9: and responding to the second input, displaying target key information corresponding to the target type under the condition that preset input of the user to the target video is received or preset input of the user to the target key information is received, and displaying an input mark corresponding to the target key information.
Wherein the preset input comprises at least one of praise, comment, collection and attention.
And in the case of displaying the key information corresponding to the target type, the key information corresponding to the input mark is ranked in priority to other key information.
In this embodiment, for the case that a preset input of the target video by the user is received, the target video is first moved to a list corresponding to the preset input, such as a favorite list, and the like. Meanwhile, the target page is shared with each list data, and the target key information extracted from the target video can be marked in the target page, so that corresponding input marks are displayed simultaneously when the target key information is displayed.
For example, the user double clicks on the target video to like, lights up the "heart" and the target video is moved to the like list, and correspondingly, when the target key information in the target page is displayed, notes the lighted "heart" to indicate that the target video to which the user belongs is like.
In addition, for the condition that preset input of the target key information by the user on the target page is received, the target key information can be marked in the target page, so that corresponding input marks are displayed simultaneously when the target key information is displayed.
Different types of input may correspond to different input marks. For example, for a favorite input, the corresponding input mark is a yellow five-pointed star.
It should be noted that the "preset input" mentioned in the two places in the present embodiment may be a same type of preset input, or may be different types of preset inputs, and is not limited.
Optionally, based on this input-mark display method, marked information and unmarked information can be distinguished in the target page, with marked information ranked at the top of the target page.
In this embodiment, two methods for marking information in a target page are provided, so that information that a user is interested in is highlighted in the target page, the user can quickly view the information, and user operation is further simplified.
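The marked-first ranking can be sketched with a stable sort; the entry shape and field names are assumed.

```python
# Rank marked entries above unmarked ones within a type. sorted() is
# stable, so ties keep their original relative order.
def rank_entries(entries):
    # key: marked entries map to False (sorts first), unmarked to True
    return sorted(entries, key=lambda e: not e["marked"])

entries = [{"info": "Song A", "marked": False},
           {"info": "Song B", "marked": True},
           {"info": "Song C", "marked": False}]
print([e["info"] for e in rank_entries(entries)])
# ['Song B', 'Song A', 'Song C']
```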
In further embodiments, on the basis of automatically acquiring the key information in the video, the user may manually select the key information to be added to the target page.
Referring to fig. 8, the application scenario is as follows: the user double-taps the screen to like the video, whereupon the key information in the video is automatically acquired and displayed, and the user can select some or all of the displayed key information to be added to the target page. The user can also cancel adding any of the key information to the target page.
Thus, a manual selection function is added on top of automatic acquisition, which keeps the target page free of information the user does not need and meets the user's personalized requirements.
On the basis of the embodiment shown in fig. 4, fig. 9 shows a flowchart of a video processing method according to another embodiment of the present application, and after step S8, the method further includes:
step S10: a third input to a third identifier is received.
Optionally, the third input includes touch input performed by the user on the screen as well as touchless (air) input, and is not limited to a click, a slide, and the like; the third input also includes input via a physical key on the device, and is not limited to a press or the like. Also, the third input includes one or more inputs, where the multiple inputs may be continuous or intermittent.
Optionally, the third identifier includes an icon or the like.
Optionally, the third identifier is used to indicate an item of key information.
Step S11: in response to a third input, the program associated with the key information indicated by the third identifier is searched for the key information indicated by the third identifier.
Step S12: the search results are displayed in the program.
For example, if the user double-clicks a third identifier corresponding to key information containing a music name, the user jumps to the music application program with the music name, automatically searches the search page with the music name, and displays the searched related list.
For another example, if the user double clicks a third identifier corresponding to key information containing the food address link and the food store name, the map application program is skipped by carrying the food address link and the food store name, and the search page carries the food address link and the food store name for searching, so that a navigation route reaching the food store is displayed.
The program associated with the key information is not limited, and may include a plurality of programs, and the user may select any one of the plurality of programs to open.
In the embodiment, the target page is communicated with the plurality of application programs, and when the user views information in the target page, the user can search more related information in other application programs without manually switching the application programs, so that the user operation is further simplified.
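The dispatch from a key-information item to its associated program can be sketched as a simple mapping; the app names and return strings are purely illustrative assumptions (a real device would launch the application with the query prefilled).

```python
# Route a key-information item to an associated program by info type.
APP_BY_TYPE = {"music": "music_app", "movie": "video_app",
               "food": "map_app", "travel": "map_app"}

def open_in_program(info_type: str, query: str) -> str:
    app = APP_BY_TYPE.get(info_type)
    if app is None:
        return "no associated program"
    # A real implementation would launch the app carrying the query
    return f"{app}: search '{query}'"

print(open_in_program("music", "Song A"))      # music_app: search 'Song A'
print(open_in_program("food", "Noodle Shop"))  # map_app: search 'Noodle Shop'
```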
On the basis of the embodiment shown in fig. 4, fig. 10 shows a flowchart of a video processing method according to another embodiment of the present application, and after step S8, the method further includes:
Step S13: a fourth input to the second identifier or the fourth identifier is received.
Optionally, the fourth input includes a touch input performed by the user on the screen or an air-gesture input, and is not limited to a tap, a slide, or the like; the fourth input may also include the user's input to a physical key on the device, and is not limited to a press or the like. Moreover, the fourth input may include one or more sub-inputs, where the plurality of sub-inputs may be continuous or intermittent.
Optionally, the fourth identifier includes an icon, a button, or the like.
Optionally, the fourth identifier is used to indicate an item of key information.
Step S14: in response to the fourth input, search for and display videos associated with the type indicated by the second identifier; alternatively, search for and display videos associated with the key information indicated by the fourth identifier.
Referring to fig. 11, for example, the user taps a button provided on one item of key information, and the device searches for and displays a recommendation list of videos similar to the original video in which that key information appears.
For another example, the user taps a preset button in the sub-page of one type, and the device searches for and displays a recommendation list of similar videos of the same type.
In this embodiment, a quick search method is provided. When viewing the information collected in the target page, the user can quickly retrieve related videos without manually switching to a search interface, which further simplifies user operation. Meanwhile, recommendations are pushed according to the information in the target page, which ensures the accuracy of the pushed videos.
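As a hedged sketch of step S14, the two search branches — by the type indicated by the second identifier, or by the key information indicated by the fourth identifier — can be modeled over an in-memory video list; the record layout is an assumption:

```python
def recommend_similar(videos, video_type=None, key_info=None):
    """Search for videos associated either with the type indicated by
    the second identifier or with the key information indicated by the
    fourth identifier (step S14); exactly one criterion is supplied."""
    if video_type is not None:
        # Branch 1: same-type similar videos (preset button on a sub-page).
        return [v for v in videos if v["type"] == video_type]
    if key_info is not None:
        # Branch 2: videos sharing an item of key information.
        return [v for v in videos if key_info in v["keywords"]]
    return []
```

A production system would query a recommendation backend rather than filter a local list; the sketch only shows how the two identifiers select the search criterion.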
On the basis of the embodiment shown in fig. 1, fig. 12 shows a flowchart of a video processing method according to another embodiment of the present application, and after step S4, the method further includes:
Step S15: a fifth input by the user is received.
Optionally, the fifth input includes a touch input performed by the user on the screen or an air-gesture input, and is not limited to a tap, a slide, or the like; the fifth input may also include the user's input to a physical key on the device, and is not limited to a press or the like. Moreover, the fifth input may include one or more sub-inputs, where the plurality of sub-inputs may be continuous or intermittent.
Referring to fig. 13, in one application scenario, for example, the user taps a sharing icon in the "My Browsing List" (i.e., the target page in this embodiment), selects at least one item of key information as the target content, and then selects a target object from a plurality of pop-up objects.
In another application scenario, for example, within the application program in which the video is located, the user taps a friend's avatar to activate the sharing function, and shares the target page with that friend directly inside the application.
Step S16: in response to the fifth input, send the target content in the target page to the target object.
In this step, the key information selected by the user through the fifth input is sent to the target object selected through the fifth input.
Wherein the target content is associated with the fifth input, and the target object is associated with the fifth input.
Optionally, the target object may be from a social platform.
The target content may be one or more items of information, all information of the same type, or all information in the target page.
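A minimal sketch of steps S15 and S16, assuming the fifth input resolves to a content selection and a target object; the dictionary layout and field names are illustrative, not part of the disclosure:

```python
def share_target_content(target_page, fifth_input):
    """Send target content from the target page to the target object
    (step S16). Both the content and the object are associated with the
    fifth input; when no explicit selection is made, the whole page is
    shared, matching the 'all information in the target page' case."""
    content = fifth_input.get("selected") or target_page["items"]
    # A real device would hand `content` to the target application or
    # social platform; here we return the resolved payload.
    return {"to": fifth_input["object"], "content": content}
```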
In this embodiment, the user can share the target page within the same application or across multiple applications, so that rapid content sharing is realized without switching applications, which further simplifies user operation.
In summary, an object of the present application is to provide topic-based classification management of browsing history based on short-video browsing content, so that the user can intelligently manage associated topic information while following short videos, which facilitates quick query of various topic contents within the short-video application. The related topic information in the video content is analyzed intelligently, without requiring operations such as manual collection by the user, which greatly improves the user's short-video interaction experience and makes it convenient to quickly retrieve historical video content information even when the user has forgotten to collect a video. Meanwhile, browsing-list operation management is provided, so that the user can conveniently and quickly share topic contents of interest and search for similar short videos. In addition, the present application combines machine learning to improve the user's management of short-video content information, greatly improving the user experience of searching topic information in short videos.
It should be noted that, in the video processing method provided in the embodiments of the present application, the execution subject may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiments of the present application, a video processing apparatus executing the video processing method is taken as an example to describe the video processing apparatus provided herein.
Fig. 14 shows a block diagram of a video processing apparatus according to another embodiment of the present application, including:
the feature obtaining module 10 is configured to obtain feature information of a target video;
the type determining module 20 is configured to determine a target type to which the target video belongs according to the feature information of the target video;
the information acquisition module 30 is configured to acquire target key information associated with a target type in a target video;
and the information adding module 40 is used for adding the target key information to the target page.
In this way, in the embodiment of the present application, the video being browsed by the user can be used as the target video, so that the feature information of the target video is obtained, and the target video is classified based on the feature information to determine the target type to which it belongs. Further, after the target type is determined, key information associated with the target type is extracted from the content of the target video and added to the target page as the target key information of this embodiment. Therefore, compared with the prior art, on one hand, the key information is collected automatically while the user browses the video, without requiring the user to stop browsing and collect the video manually, which simplifies user operation; on the other hand, the target page presents the key information, so that the user can quickly query within it instead of browsing a large number of videos, which also simplifies user operation.
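The four modules of fig. 14 can be wired together in a minimal sketch; the classifier and extractor below are stand-ins supplied by the caller, not the models of the disclosure, and the record layout is assumed:

```python
class VideoProcessor:
    """Sketch of the apparatus of fig. 14: feature obtaining (10),
    type determining (20), information acquisition (30), and
    information adding (40) chained over one target video."""

    def __init__(self, classify, extract):
        self.classify = classify   # stand-in for the type determining module
        self.extract = extract     # stand-in for the information acquisition module
        self.target_page = []      # store written by the information adding module

    def process(self, video):
        features = video["features"]                 # feature obtaining module
        target_type = self.classify(features)        # type determining module
        key_info = self.extract(video, target_type)  # information acquisition module
        # Information adding module: attach the key information to the page.
        self.target_page.append({"type": target_type, "info": key_info})
        return target_type, key_info
```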
Optionally, the feature obtaining module 10 includes at least one of:
the first acquisition unit is used for acquiring the feature information of the target video in the case that a preset input of the user to the target video is received; the preset input comprises at least one of a like, a comment, a collection, and a follow;
and the second acquisition unit is used for acquiring the feature information of the target video in the case that the play time of the target video is detected to be greater than a preset threshold.
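The two acquisition triggers above — a preset input, or a play time exceeding a preset threshold — can be sketched as a single predicate; the threshold value and input names are assumptions:

```python
# Preset inputs that trigger feature acquisition (names assumed).
PRESET_INPUTS = {"like", "comment", "collect", "follow"}

def should_collect_features(user_inputs, play_time, threshold=10.0):
    """Return True when feature information should be acquired: either
    a preset input (like, comment, collection, follow) was received, or
    the detected play time exceeds the preset threshold (seconds)."""
    return bool(PRESET_INPUTS & set(user_inputs)) or play_time > threshold
```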
Optionally, the type determining module 20 includes:
the calculating unit is used for respectively calculating, according to the feature information of the target video, probability values of the target video belonging to each type;
and the determining unit is used for determining, according to the probability values, that the target video belongs to at least one of the types.
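A hedged sketch of the calculating and determining units: per-type scores derived from the feature information are normalized into probability values (a softmax is an assumed choice, not stated in the disclosure), and every type whose probability clears a threshold is kept, so the target video may belong to more than one type:

```python
import math

def determine_types(type_scores, threshold=0.3):
    """Calculate a probability value for the target video belonging to
    each type, then determine the type(s) it belongs to: every type
    whose probability is at least `threshold` is retained."""
    # Softmax normalization of the per-type scores into probabilities.
    exps = {t: math.exp(s) for t, s in type_scores.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}
    # Keep all types clearing the threshold (sorted for determinism).
    return sorted(t for t, p in probs.items() if p >= threshold)
```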
Optionally, the apparatus further comprises:
the first input receiving module is used for receiving first input of the first identifier; the first identification is used for indicating a target page;
a first input response module for displaying at least one second identifier in response to a first input; wherein the second identifier is used for indicating the type;
the second input receiving module is used for receiving second input of a second identifier corresponding to the target type;
the second input response module is used for responding to second input and displaying key information corresponding to the target type; wherein the key information includes target key information.
Optionally, the apparatus further comprises:
the marking unit is used for, in response to the second input, in the case that a preset input of the user to the target video is received or a preset input of the user to the target key information is received, displaying the target key information corresponding to the target type and displaying an input mark corresponding to the target key information;
the preset input comprises at least one of a like, a comment, a collection, and a follow;
and, in the case that the key information corresponding to the target type is displayed, the key information corresponding to the input mark is ranked ahead of the other key information.
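The priority ranking described for the marking unit can be sketched with a stable sort, so marked entries come first while each group keeps its original order; the record layout is an assumption:

```python
def rank_key_information(items):
    """Order key information for display: entries carrying an input
    mark (set when a preset input such as a like or a collection was
    received) are ranked ahead of unmarked entries. Python's sort is
    stable, so the original order within each group is preserved."""
    # `not marked` maps marked entries to False, which sorts first.
    return sorted(items, key=lambda item: not item.get("marked", False))
```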
Optionally, the apparatus further comprises:
a third input receiving module for receiving a third input to the third identifier;
a third input response module, configured to search, in response to a third input, for the key information indicated by the third identifier in the program associated with the key information indicated by the third identifier;
and the display module is used for displaying the search result in the program.
Optionally, the apparatus further comprises:
the fourth input receiving module is used for receiving fourth input of the second identifier or the fourth identifier;
a fourth input response module for searching and displaying the video associated with the type indicated by the second identifier in response to a fourth input; alternatively, videos associated with the key information indicated by the fourth identification are searched and displayed.
Optionally, the apparatus further comprises:
the fifth input receiving module is used for receiving fifth input of a user;
the fifth input response module is used for responding to the fifth input and sending the target content in the target page to the target object;
wherein the target content is associated with a fifth input and the target object is associated with the fifth input.
The video processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), an automated teller machine, or a self-service machine; the embodiments of the present application are not specifically limited thereto.
The video processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited thereto.
The video processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to fig. 13, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 15, an electronic device 100 is further provided in this embodiment of the present application, and includes a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and executable on the processor 101, where the program or the instruction is executed by the processor 101 to implement each process of the above-mentioned video processing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 16 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 16 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not described in detail herein.
The processor 1010 is configured to obtain feature information of a target video; determining a target type of the target video according to the characteristic information of the target video; acquiring target key information associated with the target type in the target video; and adding the target key information to a target page.
In this way, in the embodiment of the present application, the video being browsed by the user can be used as the target video, so that the feature information of the target video is obtained, and the target video is classified based on the feature information to determine the target type to which it belongs. Further, after the target type is determined, key information associated with the target type is extracted from the content of the target video and added to the target page as the target key information of this embodiment. Therefore, compared with the prior art, on one hand, the key information is collected automatically while the user browses the video, without requiring the user to stop browsing and collect the video manually, which simplifies user operation; on the other hand, the target page presents the key information, so that the user can quickly query within it instead of browsing a large number of videos, which also simplifies user operation.
Optionally, the processor 1010 is further configured to acquire the feature information of the target video in the case that a preset input of the user to the target video is received, the preset input comprising at least one of a like, a comment, a collection, and a follow; and to acquire the feature information of the target video in the case that the play time of the target video is detected to be greater than a preset threshold.
Optionally, the processor 1010 is further configured to calculate, according to the feature information of the target video, probability values of the target video belonging to each type; and to determine, according to the probability values, that the target video belongs to at least one of the types.
Optionally, the user input unit 1007 is configured to receive a first input to the first identifier, wherein the first identifier is used for indicating the target page, and to receive a second input to a second identifier corresponding to the target type; the processor 1010 is further configured to display at least one second identifier in response to the first input, wherein the second identifier is used for indicating a type, and to display, in response to the second input, key information corresponding to the target type, wherein the key information includes the target key information.
Optionally, the processor 1010 is further configured to, in response to the second input, in the case that a preset input of the user to the target video is received or a preset input of the user to the target key information is received, display the target key information corresponding to the target type and display an input mark corresponding to the target key information; the preset input comprises at least one of a like, a comment, a collection, and a follow; and, in the case that the key information corresponding to the target type is displayed, rank the key information corresponding to the input mark ahead of the other key information.
Optionally, the user input unit 1007 is further configured to receive a third input to the third identifier; the processor 1010 is further configured to search, in response to the third input, for the key information indicated by the third identifier in a program associated with that key information; and the display unit 1006 is configured to display the search results in the program.
Optionally, the user input unit 1007 is further configured to receive a fourth input to the second identifier or the fourth identifier; the processor 1010 is further configured to search for and display, in response to the fourth input, videos associated with the type indicated by the second identifier, or to search for and display videos associated with the key information indicated by the fourth identifier.
Optionally, the user input unit 1007 is further configured to receive a fifth input from the user; the processor 1010 is further configured to send, in response to the fifth input, the target content in the target page to a target object; wherein the target content is associated with the fifth input, and the target object is associated with the fifth input.
The application aims to provide topic-based classification management of browsing history based on short-video browsing content, so that the user can intelligently manage associated topic information while following short videos, which facilitates quick query of various topic contents within the short-video application. The related topic information in the video content is analyzed intelligently, without requiring operations such as manual collection by the user, which greatly improves the user's short-video interaction experience and makes it convenient to quickly retrieve historical video content information even when the user has forgotten to collect a video. Meanwhile, browsing-list operation management is provided, so that the user can conveniently and quickly share topic contents of interest and search for similar short videos. In addition, the present application combines machine learning to improve the user's management of short-video content information, greatly improving the user experience of searching topic information in short videos.
It should be understood that, in the embodiment of the present application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processing unit 10041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen, and may include two parts: a touch detection device and a touch controller. The other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may alternatively not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or instructions are stored on the readable storage medium; when the program or instructions are executed by a processor, each process of the above video processing method embodiments is implemented, and the same technical effect can be achieved, which is not repeated here to avoid repetition.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above video processing method embodiments, and the same technical effect can be achieved, which is not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (18)

1. A method of video processing, the method comprising:
acquiring characteristic information of a target video;
determining a target type of the target video according to the characteristic information of the target video;
acquiring target key information associated with the target type in the target video;
and adding the target key information to a target page.
2. The method according to claim 1, wherein the obtaining of the feature information of the target video comprises at least one of:
acquiring characteristic information of the target video under the condition of receiving a preset input of a user on the target video; the preset input comprises at least one of a like, a comment, a collection, and a follow;
and acquiring the characteristic information of the target video under the condition that the playing time of the target video is detected to be greater than a preset threshold value.
3. The method according to claim 1, wherein the determining the target type to which the target video belongs according to the feature information of the target video comprises:
respectively calculating probability values of the target videos belonging to various types according to the characteristic information of the target videos;
and determining that the target video belongs to at least one of the types according to the probability value.
4. The method of claim 1, wherein after adding the target key information to the target page, further comprising:
receiving a first input to a first identifier; wherein the first identifier is used for indicating the target page;
displaying at least one second identifier in response to the first input; wherein the second identifier is used for indicating a type;
receiving a second input to a second identifier corresponding to the target type;
responding to the second input, and displaying key information corresponding to the target type; wherein the key information includes the target key information.
5. The method of claim 4, wherein after receiving the second input for the second identifier corresponding to the target type, further comprising:
responding to the second input, and under the condition that preset input of a user to the target video is received or preset input of the user to the target key information is received, displaying the target key information corresponding to the target type and displaying an input mark corresponding to the target key information;
the preset input comprises at least one of a like, a comment, a collection, and a follow;
and under the condition that the key information corresponding to the target type is displayed, the key information corresponding to the input mark is ranked ahead of the other key information.
6. The method of claim 4, wherein after displaying key information corresponding to the target type in response to the second input, further comprising:
receiving a third input to the third identifier;
in response to the third input, searching for key information indicated by the third identifier in a program associated with the key information indicated by the third identifier;
the search results are displayed in the program.
7. The method of claim 4, wherein after displaying key information corresponding to the target type in response to the second input, further comprising:
receiving a fourth input to the second identifier or the fourth identifier;
searching for and displaying a video associated with the type indicated by the second identifier in response to the fourth input; or searching and displaying videos associated with the key information indicated by the fourth identification.
8. The method of claim 1, wherein after adding the target key information to the target page, further comprising:
receiving a fifth input of the user;
responding to the fifth input, and sending target content in the target page to a target object;
wherein the target content is associated with the fifth input and the target object is associated with the fifth input.
9. A video processing apparatus, characterized in that the apparatus comprises:
the characteristic acquisition module is used for acquiring characteristic information of the target video;
the type determining module is used for determining the target type of the target video according to the characteristic information of the target video;
the information acquisition module is used for acquiring target key information associated with the target type in the target video;
and the information adding module is used for adding the target key information to a target page.
10. The apparatus of claim 9, wherein the feature obtaining module comprises at least one of:
the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring the characteristic information of a target video under the condition of receiving preset input of a user on the target video; the preset input comprises at least one of praise, comment, collection and attention;
and the second acquisition unit is used for acquiring the characteristic information of the target video under the condition that the playing time of the target video is detected to be greater than a preset threshold value.
11. The apparatus of claim 9, wherein the type determination module comprises:
the calculating unit is used for respectively calculating probability values of the target videos belonging to various types according to the characteristic information of the target videos;
and the determining unit is used for determining that the target video belongs to at least one of the types according to the probability value.
12. The apparatus of claim 9, further comprising:
the first input receiving module is used for receiving first input of the first identifier; wherein the first identifier is used for indicating the target page;
a first input response module for displaying at least one second identifier in response to the first input; wherein the second identifier is used for indicating a type;
a second input receiving module, configured to receive a second input to a second identifier corresponding to the target type;
the second input response module is used for responding to the second input and displaying the key information corresponding to the target type; wherein the key information includes the target key information.
13. The apparatus of claim 12, further comprising:
the marking unit is used for responding to the second input, displaying the target key information corresponding to the target type and displaying an input mark corresponding to the target key information under the condition that preset input of a user to the target video is received or preset input of the user to the target key information is received;
the preset input comprises at least one of a like, a comment, a collection, and a follow;
and under the condition that the key information corresponding to the target type is displayed, the key information corresponding to the input mark is ranked ahead of the other key information.
14. The apparatus of claim 12, further comprising:
a third input receiving module, configured to receive a third input on a third identifier;
a third input response module, configured to, in response to the third input, search for the key information indicated by the third identifier in a program associated with that key information;
and a display module, configured to display a search result in the program.
15. The apparatus of claim 12, further comprising:
a fourth input receiving module, configured to receive a fourth input on the second identifier or on a fourth identifier;
and a fourth input response module, configured to, in response to the fourth input, search for and display videos associated with the type indicated by the second identifier, or search for and display videos associated with the key information indicated by the fourth identifier.
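Claim 15's fourth input dispatches to one of two searches depending on which identifier it lands on: a type search (second identifier) or a key-information search (fourth identifier). A sketch with an assumed in-memory catalog; the video records and field names are illustrative:

```python
# Hypothetical video catalog; each entry carries the types and key
# information the earlier claims attach to a video.
VIDEOS = [
    {"title": "Street food tour", "types": ["food", "travel"], "keys": ["noodles"]},
    {"title": "Home workout", "types": ["sports"], "keys": ["pushups"]},
]

def search_by_type(type_name):
    return [v["title"] for v in VIDEOS if type_name in v["types"]]

def search_by_key_info(key):
    return [v["title"] for v in VIDEOS if key in v["keys"]]

def handle_fourth_input(identifier_kind, value):
    if identifier_kind == "second":    # second identifier → search by type
        return search_by_type(value)
    elif identifier_kind == "fourth":  # fourth identifier → search by key info
        return search_by_key_info(value)
    raise ValueError(f"unknown identifier kind: {identifier_kind}")

print(handle_fourth_input("second", "food"))   # → ['Street food tour']
print(handle_fourth_input("fourth", "pushups"))  # → ['Home workout']
```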
16. The apparatus of claim 9, further comprising:
a fifth input receiving module, configured to receive a fifth input from a user;
and a fifth input response module, configured to send, in response to the fifth input, target content in the target page to a target object;
wherein both the target content and the target object are associated with the fifth input.
17. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method according to any one of claims 1 to 8.
18. A readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the video processing method according to any one of claims 1 to 8.
CN202010949290.XA 2020-09-10 2020-09-10 Video processing method and device and electronic equipment Pending CN112084370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010949290.XA CN112084370A (en) 2020-09-10 2020-09-10 Video processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112084370A true CN112084370A (en) 2020-12-15

Family

ID=73737059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010949290.XA Pending CN112084370A (en) 2020-09-10 2020-09-10 Video processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112084370A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105872813A (en) * 2015-12-10 2016-08-17 乐视网信息技术(北京)股份有限公司 Hotspot video displaying method and device
CN107451148A (en) * 2016-05-31 2017-12-08 北京金山安全软件有限公司 Video classification method and device and electronic equipment
CN107566917A (en) * 2017-09-15 2018-01-09 维沃移动通信有限公司 A kind of video marker method and video playback apparatus
CN108540848A (en) * 2018-03-01 2018-09-14 北京达佳互联信息技术有限公司 Video collection method and apparatus
CN109376268A (en) * 2018-11-27 2019-02-22 北京微播视界科技有限公司 Video classification methods, device, electronic equipment and computer readable storage medium
CN111125435A (en) * 2019-12-17 2020-05-08 北京百度网讯科技有限公司 Video tag determination method and device and computer equipment
US10671852B1 (en) * 2017-03-01 2020-06-02 Matroid, Inc. Machine learning in video classification
CN111400551A (en) * 2020-03-13 2020-07-10 咪咕文化科技有限公司 Video classification method, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113315691A (en) * 2021-05-20 2021-08-27 维沃移动通信有限公司 Video processing method and device and electronic equipment
CN113315691B (en) * 2021-05-20 2023-02-24 维沃移动通信有限公司 Video processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination