CN109815365A - Method and apparatus for processing video - Google Patents
- Publication number: CN109815365A (application CN201910084731.1A)
- Authority: CN (China)
- Prior art keywords: video, classification, category information, annotation
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
Embodiments of the present disclosure disclose a method and apparatus for processing video. One specific embodiment of the method includes: obtaining a video to be annotated; inputting the video to be annotated into a pre-trained video classification model to obtain a set of category information and a probability value corresponding to each piece of category information, where the category information characterizes the category to which the video to be annotated belongs; selecting category information from the set of category information, where the probability value corresponding to the selected category information is greater than or equal to a preset probability threshold; and, for each piece of selected category information, storing the video to be annotated into an annotation queue that has a pre-established correspondence with that category information, so that the video to be annotated is sent to an annotation terminal that has a pre-established correspondence with the annotation queue. This embodiment combines machine recognition with manual annotation, which helps improve the accuracy and efficiency of annotating videos.
Description
Technical field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method and apparatus for processing video.
Background
With the development of Internet technology, more and more videos appear on the Internet. To facilitate the management of these videos, a video classification model can be used to identify the type of each video. Training a video classification model usually requires annotating a large number of sample videos in order to distinguish their types. At present, videos are mainly annotated manually.
Summary of the invention
Embodiments of the present disclosure propose a method and apparatus for processing video.
In a first aspect, embodiments of the present disclosure provide a method for processing video, the method including: obtaining a video to be annotated; inputting the video to be annotated into a pre-trained video classification model to obtain a set of category information and a probability value corresponding to each piece of category information, where the category information characterizes the category to which the video to be annotated belongs; selecting category information from the set of category information, where the probability value corresponding to the selected category information is greater than or equal to a preset probability threshold; and, for each piece of selected category information, storing the video to be annotated into an annotation queue that has a pre-established correspondence with that category information, so that the video to be annotated is sent to an annotation terminal that has a pre-established correspondence with the annotation queue.
In some embodiments, selecting category information from the set of category information includes: in response to determining that the number of pieces of category information in the set whose corresponding probability values are greater than or equal to the preset probability threshold is greater than a preset quantity, selecting the preset quantity of pieces of category information from among those whose corresponding probability values are greater than or equal to the preset probability threshold.
In some embodiments, selecting category information from the set of category information includes: in response to determining that the number of pieces of category information in the set whose corresponding probability values are greater than or equal to the preset probability threshold is less than or equal to the preset quantity, selecting all pieces of category information whose corresponding probability values are greater than or equal to the preset probability threshold.
In some embodiments, the video classification model is pre-trained as follows: obtaining a training sample set, where each training sample includes a sample video and a set of sample category information annotated in advance for the sample video; and, using a machine learning method, training with the sample videos included in the training samples of the training sample set as input and the sets of sample category information corresponding to the input sample videos as desired output, to obtain the video classification model.
In some embodiments, the video classification model is a multi-label classification model.
In some embodiments, after storing the video to be annotated, for each piece of selected category information, into the annotation queue that has a pre-established correspondence with that category information so that the video to be annotated is sent to the annotation terminal that has a pre-established correspondence with the annotation queue, the method further includes: obtaining a set of annotated videos, where an annotated video is a video obtained after an annotation terminal annotates a received video to be annotated with category information; and, using a machine learning method, training with the annotated videos in the set as input to the video classification model and the sets of category information corresponding to the input annotated videos as desired output of the video classification model, to obtain an updated video classification model.
In a second aspect, embodiments of the present disclosure provide an apparatus for processing video, the apparatus including: a first obtaining unit configured to obtain a video to be annotated; a classification unit configured to input the video to be annotated into a pre-trained video classification model to obtain a set of category information and a probability value corresponding to each piece of category information, where the category information characterizes the category to which the video to be annotated belongs; a selection unit configured to select category information from the set of category information, where the probability value corresponding to the selected category information is greater than or equal to a preset probability threshold; and a storage unit configured to, for each piece of selected category information, store the video to be annotated into an annotation queue that has a pre-established correspondence with that category information, so that the video to be annotated is sent to an annotation terminal that has a pre-established correspondence with the annotation queue.
In some embodiments, the selection unit is further configured to: in response to determining that the number of pieces of category information in the set whose corresponding probability values are greater than or equal to the preset probability threshold is greater than a preset quantity, select the preset quantity of pieces of category information from among those whose corresponding probability values are greater than or equal to the preset probability threshold.
In some embodiments, the selection unit is further configured to: in response to determining that the number of pieces of category information in the set whose corresponding probability values are greater than or equal to the preset probability threshold is less than or equal to the preset quantity, select all pieces of category information whose corresponding probability values are greater than or equal to the preset probability threshold.
In some embodiments, the video classification model is pre-trained as follows: obtaining a training sample set, where each training sample includes a sample video and a set of sample category information annotated in advance for the sample video; and, using a machine learning method, training with the sample videos included in the training samples of the training sample set as input and the sets of sample category information corresponding to the input sample videos as desired output, to obtain the video classification model.
In some embodiments, the video classification model is a multi-label classification model.
In some embodiments, the apparatus further includes: a second obtaining unit configured to obtain a set of annotated videos, where an annotated video is a video obtained after an annotation terminal annotates a received video to be annotated with category information; and a training unit configured to, using a machine learning method, train with the annotated videos in the set as input to the video classification model and the sets of category information corresponding to the input annotated videos as desired output of the video classification model, to obtain an updated video classification model.
In a third aspect, embodiments of the present disclosure provide an electronic device, including: one or more processors; and a storage device storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium storing a computer program, where the computer program, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus for processing video provided by embodiments of the present disclosure input an obtained video to be annotated into a pre-trained video classification model to obtain a set of category information and corresponding probability values, select category information from the set according to the probability values, and finally store the video to be annotated into the annotation queues that have pre-established correspondences with the selected category information, so that the video to be annotated is sent to the annotation terminals that have pre-established correspondences with those queues for annotation. By using the video classification model together with the annotation queues, machine recognition is combined with manual annotation, which helps improve the accuracy and efficiency of annotating videos.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the method for processing video according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of an application scenario of the method for processing video according to an embodiment of the present disclosure;
Fig. 4 is a flowchart of another embodiment of the method for processing video according to an embodiment of the present disclosure;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for processing video according to an embodiment of the present disclosure;
Fig. 6 is a structural schematic diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are used only to explain the relevant disclosure, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the disclosure are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other. The present disclosure is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for processing video or the apparatus for processing video of embodiments of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, a server 105, and an annotation terminal 106. The network 104 serves as a medium for providing communication links among the terminal devices 101, 102, 103, the server 105, and the annotation terminal 106. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications, such as video playback applications, video processing applications, web browser applications, and social platform software, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices. When they are software, they may be installed in the electronic devices described above. They may be implemented as multiple pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
The annotation terminal 106 may be hardware or software. When it is hardware, it may be various electronic devices. When it is software, it may be installed in the electronic devices described above. It may be implemented as multiple pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here. In general, the annotation terminal 106 may be used by annotation personnel to annotate received videos to be annotated.
The server 105 may be a server providing various services, for example, a background video processing server that processes videos to be annotated uploaded by the terminal devices 101, 102, 103. The background video processing server may process an obtained video to be annotated and, according to the processing result and pre-established correspondences (for example, between selected category information and the annotation queues corresponding to that category information), send the video to be annotated to the corresponding annotation terminal.
It should be noted that the method for processing video provided by embodiments of the present disclosure may be executed by the server 105 or by the terminal devices 101, 102, 103; accordingly, the apparatus for processing video may be provided in the server 105 or in the terminal devices 101, 102, 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, software or software modules for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers as needed.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for processing video according to the present disclosure is shown. The method for processing video includes the following steps:
Step 201: obtain a video to be annotated.
In this embodiment, the execution body of the method for processing video (for example, the server or a terminal device shown in Fig. 1) may obtain the video to be annotated remotely, through a wired or wireless connection, or locally. Here, the video to be annotated is a video on which category annotation is to be performed.
Step 202: input the video to be annotated into a pre-trained video classification model to obtain a set of category information and a probability value corresponding to each piece of category information.
In this embodiment, the execution body may input the video to be annotated into a pre-trained video classification model to obtain a set of category information and a probability value corresponding to each piece of category information. The video classification model characterizes the correspondence between a video to be annotated and a set of category information. Category information characterizes the category to which the video to be annotated belongs, and may include, but is not limited to, information in at least one of the following forms: numbers, text, symbols, etc. As an example, a set of category information may include category information in text form: seashore, hotel, automobile, forest, bedroom. As another example, a set of category information may include category information in numeric form: 001, 002, 003, 004, 005, where each piece of category information characterizes a video category.
In this embodiment, the probability value corresponding to a piece of category information characterizes the probability that the video to be annotated belongs to the category characterized by that information. For example, if the probability value corresponding to the category information "seashore" is 0.6, that value characterizes a probability of 0.6 that the category of the video to be annotated is "seashore".
In general, the video classification model may include a feature extraction part and a classification part. The feature extraction part extracts feature data characterizing various features of the video to be annotated (for example, color features, shape features, etc.). The classification part classifies the feature data to obtain the set of category information for the video to be annotated. As an example, the video classification model may be a convolutional neural network model: the feature extraction part includes convolutional layers, pooling layers, etc., and generates feature data from the video frames included in the video to be annotated (all of the frames or some of them); the classification part includes a fully connected layer that concatenates the generated feature data into a feature vector and classifies that vector, finally yielding the set of category information for the video to be annotated.
The video classification model may be any of various models for classifying videos. As an example, the video classification model may be a single-label classification model. The final output of a single-label classification model characterizes one category selected from multiple categories: the category information with the largest corresponding probability value is selected as the final result. In general, a single-label classification model is a convolutional neural network model including a fully connected layer for classifying the video; the fully connected layer can output the probability value corresponding to each piece of category information in the set, and all the probability values sum to one.
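The property that the single-label head's probability values sum to one is typically realized by applying a softmax to the fully connected layer's raw scores; the disclosure does not name the specific function, so the sketch below is an assumption:

```python
import math

def softmax(logits):
    """Convert raw fully-connected-layer scores into probabilities
    that sum to one, as in a single-label classification head."""
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def single_label_predict(logits, categories):
    """Select the category with the largest probability as the final result."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return categories[best], probs
```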
In some optional implementations of this embodiment, the video classification model is a multi-label classification model. The final output of a multi-label classification model includes a set of category information composed of at least one piece of category information (i.e., at least one label), characterizing that the input video may belong to multiple categories simultaneously. In general, the probability values corresponding to the pieces of category information in the set output by a multi-label classification model may sum to more than one. As an example, suppose the set of category information includes seashore, hotel, and automobile; the corresponding probability values may be 0.6, 0.7, and 0.5. Using a multi-label classification model allows the video to be annotated to be classified more comprehensively, which helps improve the accuracy of annotating it.
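In a multi-label head, each label typically gets its own independent sigmoid probability, which is why the values can sum to more than one, as in the 0.6 + 0.7 + 0.5 example above. A minimal sketch under that assumption (the function names are illustrative):

```python
import math

def sigmoid(x):
    """Independent per-label probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def multi_label_predict(logits, categories):
    """Each category receives its own probability; the values need not
    sum to one, so the video may belong to several categories at once."""
    return {c: sigmoid(z) for c, z in zip(categories, logits)}
```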
In some optional implementations of this embodiment, the video classification model may be pre-trained as follows:
First, a training sample set is obtained. Each training sample includes a sample video and a set of sample category information annotated in advance for the sample video. As an example, the set of sample category information may be characterized in the form of a vector, each element of which corresponds to a piece of category information. For example, suppose the vector includes N elements (N is a preset positive integer), where the element with index 1 characterizes that the sample video belongs to the "seashore" class and has value 1, and the element with index 2 characterizes that the sample video belongs to the "hotel" class and has value 1. The elements at the other indices are 0, characterizing that the sample video does not belong to the corresponding categories.
Then, using a machine learning method, the sample videos included in the training samples of the training sample set are used as input, the sets of sample category information corresponding to the input sample videos are used as desired output, and the video classification model is obtained by training.
Specifically, the execution body for training the video classification model may use a machine learning algorithm to train an initial model (for example, a convolutional neural network model), taking the sample videos included in the training samples of the obtained training sample set as input and the sets of sample category information corresponding to the input sample videos as desired output. For each sample video input during training, an actual output can be obtained, where the actual output is the data actually output by the initial model and characterizes a set of category information. The execution body may then use gradient descent to adjust the parameters of the initial model based on the actual output and the desired output, take the model obtained after each parameter adjustment as the initial model for the next round of training, and end training when a preset termination condition is met, thereby obtaining the video classification model. It should be noted that the preset training termination condition may include, but is not limited to, at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset number; the computed loss value is less than a preset loss threshold. The loss value is computed using a preset loss function (for example, a cross-entropy loss function) and is a value characterizing the difference between the actual output and the desired output.
Here, the execution body may train the initial model using a batch training algorithm or a stochastic training algorithm; the embodiments of the present application do not limit this.
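The training loop just described (compare actual output with desired output, adjust parameters by gradient descent, stop on a preset termination condition) can be sketched on a toy model. A single logistic unit stands in for the convolutional network here; everything below is a simplification for illustration, not the disclosure's actual training procedure:

```python
import math
import time

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, max_seconds=5.0, max_iters=2000, loss_threshold=0.05):
    """Toy gradient-descent loop illustrating the three termination
    conditions named above: loss below a preset threshold, iteration
    count exceeded, and training time exceeded. `samples` is a list of
    (feature, desired_output) pairs with desired_output in {0, 1}."""
    w, b = 0.0, 0.0
    loss = float("inf")
    start = time.monotonic()
    for _ in range(max_iters):                      # iteration-count condition
        loss, gw, gb = 0.0, 0.0, 0.0
        for x, y in samples:
            p = _sigmoid(w * x + b)                 # actual output
            # cross-entropy between actual and desired output
            loss += -(y * math.log(p + 1e-12) + (1 - y) * math.log(1 - p + 1e-12))
            gw += (p - y) * x
            gb += (p - y)
        loss /= len(samples)
        if loss < loss_threshold:                   # loss-threshold condition
            break
        if time.monotonic() - start > max_seconds:  # training-time condition
            break
        w -= lr * gw / len(samples)                 # gradient descent update
        b -= lr * gb / len(samples)
    return w, b, loss
```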
Step 203: select category information from the set of category information.
In this embodiment, the execution body may select category information from the set of category information, where the probability value corresponding to the selected category information is greater than or equal to a preset probability threshold.
In some optional implementations of this embodiment, the execution body may select category information from the set of category information as follows:
In response to determining that the number of pieces of category information in the set whose corresponding probability values are greater than or equal to the preset probability threshold is greater than a preset quantity, the preset quantity of pieces of category information are selected from among those whose corresponding probability values are greater than or equal to the preset probability threshold. As an example, suppose the probability threshold is 0.6. The execution body may first determine the pieces of category information whose corresponding probability values are greater than or equal to 0.6, and then select a preset quantity (for example, 3) of them in various ways (for example, randomly, or in descending order of the corresponding probability values).
In some optional implementations of this embodiment, the execution body may select category information from the set of category information as follows:
In response to determining that the number of pieces of category information in the set whose corresponding probability values are greater than or equal to the preset probability threshold is less than or equal to the preset quantity, all pieces of category information whose corresponding probability values are greater than or equal to the preset probability threshold are selected. As an example, suppose the probability threshold is 0.6. The execution body may first determine the number of pieces of category information whose corresponding probability values are greater than or equal to 0.6; if that number is less than or equal to the preset quantity (for example, 3), all pieces of category information whose probability values are greater than or equal to 0.6 are determined as the selected category information.
Step 204: for each piece of selected category information, store the video to be annotated into the annotation queue that has a pre-established correspondence with that category information, so that the video to be annotated is sent to the annotation terminal that has a pre-established correspondence with the annotation queue.
In this embodiment, for each piece of selected category information, the execution body may store the video to be annotated into the annotation queue that has a pre-established correspondence with that category information, so that the video to be annotated is sent to the annotation terminal (such as the annotation terminal shown in Fig. 1) that has a pre-established correspondence with the queue into which it was stored.
The correspondence between category information and annotation queues may be characterized by a two-dimensional table, a linked list, or the like. For example, each annotation queue may correspond to a preset number, and the numbers and category information may be stored correspondingly in a two-dimensional table; for each piece of category information, the execution body may look up the corresponding number in the table, thereby determining the annotation queue. Similarly, the correspondence between annotation queues and annotation terminals may also be characterized by a two-dimensional table, a linked list, or the like. For example, a two-dimensional table may store the numbers of the annotation queues and the identifiers of the annotation terminals (such as network address information); the execution body may find the identifier of the corresponding annotation terminal according to the number of the annotation queue, and send the video to be annotated to the corresponding annotation terminal according to the identifier.
Here, an annotation queue may be a pre-set storage area for storing videos to be annotated. An annotation queue may be located in the execution body, or in another electronic device in communication connection with the execution body. When an annotation queue stores at least two videos to be annotated, they may be sent to the annotation terminal corresponding to the queue in the order in which they were stored.
As an example, suppose the category information selected in step 203 includes seashore, hotel, and automobile, corresponding respectively to pre-established annotation queues A, B, and C. The execution body may send the video to be annotated in queue A to the corresponding annotation terminal a; annotation personnel annotate the video using annotation terminal a, generating category information characterizing whether the video to be annotated belongs to the "seashore" category. Meanwhile, the execution body may also send the video to be annotated to annotation terminals b and c, corresponding to annotation queues B and C, so that annotation personnel annotate it and generate category information characterizing whether the video belongs to the "hotel" category and category information characterizing whether it belongs to the "automobile" category.
It should be noted that the annotation terminal corresponding to each annotation queue may be hardware, or it may be software; for example, each annotation terminal may correspond to an annotation interface through which annotation personnel can annotate the video to be annotated with the corresponding category information.
By using annotation queues, videos to be annotated can be sent to each annotation terminal in a targeted manner, and annotation personnel only need to judge whether a received video to be annotated belongs to the video category corresponding to the queue. This simplifies the video annotation process and helps improve annotation efficiency.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for processing video according to the present embodiment. In the application scenario of Fig. 3, the electronic device 301 first obtains a video to be marked 302 from a local source. Then, the electronic device 301 inputs the video to be marked 302 into a pre-trained video classification model 303 to obtain a classification information set 304 (for example, including the classification information "seashore", "hotel", "automobile", "forest", "bedroom") and the probability value corresponding to each classification information (for example, 0.8, 0.7, 0.6, 0.4, and 0.1 respectively). Next, the electronic device 301 selects from the classification information set 304 the classification information whose corresponding probability value is greater than or equal to a preset probability threshold (for example, 0.6), namely "seashore", "hotel", and "automobile". Finally, the electronic device 301 stores the video to be marked 302 into the mark queues A, B, and C corresponding to the classification information "seashore", "hotel", and "automobile", and the video to be marked 302 in mark queues A, B, and C is sent to the corresponding mark terminals 3051, 3052, and 3053 respectively, so that annotation personnel can use the mark terminals to annotate the received video to be marked 302.
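The routing step in the Fig. 3 scenario can be sketched in Python as follows. This is an illustrative sketch, not the patented implementation: the function name `dispatch_video`, the video identifier, and the probability dictionary standing in for the model's output are all assumptions.

```python
from collections import defaultdict, deque

def dispatch_video(video_id, class_probs, threshold=0.6, queues=None):
    """Route a video into one mark queue per class whose predicted
    probability is greater than or equal to the threshold."""
    if queues is None:
        queues = defaultdict(deque)
    selected = [c for c, p in class_probs.items() if p >= threshold]
    for c in selected:
        queues[c].append(video_id)  # each queue feeds one mark terminal
    return selected, queues

# The Fig. 3 example: threshold 0.6 selects seashore, hotel, automobile.
probs = {"seashore": 0.8, "hotel": 0.7, "automobile": 0.6,
         "forest": 0.4, "bedroom": 0.1}
selected, queues = dispatch_video("video_302", probs, threshold=0.6)
```

Each queue then drains in storage order to its corresponding mark terminal.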
In the method provided by the above embodiment of the disclosure, the obtained video to be marked is input into a pre-trained video classification model to obtain a classification information set and the probability value corresponding to each classification information; classification information is then selected from the set according to the probability values; and finally the video to be marked is stored into the mark queues that have a pre-established correspondence with the selected classification information, so that the video is sent to the mark terminals corresponding to those queues for annotation. The video classification model first determines the categories to which the video may belong, and the mark queues then allow the video to be annotated more precisely. The video classification model and the mark queues thus combine machine recognition with manual annotation, improving both the accuracy and the efficiency of video annotation.
With further reference to Fig. 4, a process 400 of another embodiment of the method for processing video is illustrated. The process 400 of the method for processing video includes the following steps:
Step 401: obtain a video to be marked.
In the present embodiment, step 401 is substantially the same as step 201 in the embodiment corresponding to Fig. 2, and details are not repeated here.
Step 402: input the video to be marked into a pre-trained video classification model to obtain a classification information set and the probability value corresponding to each classification information.
In the present embodiment, step 402 is substantially the same as step 202 in the embodiment corresponding to Fig. 2, and details are not repeated here.
Step 403: select classification information from the classification information set.
In the present embodiment, step 403 is substantially the same as step 203 in the embodiment corresponding to Fig. 2, and details are not repeated here.
Step 404: for each piece of selected classification information, store the video to be marked into the mark queue that has a pre-established correspondence with that classification information, so that the video to be marked is sent to the mark terminal that has a pre-established correspondence with the mark queue.
In the present embodiment, step 404 is substantially the same as step 204 in the embodiment corresponding to Fig. 2, and details are not repeated here.
Step 405: obtain a set of annotated videos.
In the present embodiment, the executing subject of the method for processing video (for example, the server or terminal device shown in Fig. 1) may obtain the set of annotated videos remotely or locally. Here, an annotated video is a video obtained after a mark terminal annotates a received video to be marked with classification information. In general, each mark terminal sends its annotated videos to the above executing subject, which combines the received annotated videos into the set of annotated videos.
Each annotated video corresponds to a classification information set recording its annotation. In general, the classification information set may take the form of a vector in which each element represents one classification information. Since each mark terminal corresponds to one mark queue, and each mark queue corresponds to one classification information, for a given annotated video the element of the vector that characterizes the annotated classification information may be set to a default value (for example, 1), and the other elements may be set to another value (for example, 0).
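Under the vector form described above, the label produced by a single mark terminal is effectively a one-hot vector. The following is a minimal illustration assuming a fixed class ordering; the helper name `one_hot_label` is not from the patent.

```python
def one_hot_label(class_index, num_classes, hit=1, miss=0):
    """Label vector from a single mark terminal: the element for the
    terminal's class is set to the default value (1), all others to 0."""
    vec = [miss] * num_classes
    vec[class_index] = hit
    return vec

classes = ["seashore", "hotel", "automobile", "forest", "bedroom"]
# The terminal serving the "hotel" queue confirms its class:
label = one_hot_label(classes.index("hotel"), len(classes))
# label -> [0, 1, 0, 0, 0]
```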
It should be noted that, in the following step 406, the classification information set corresponding to an annotated video used for training the video classification model may be the classification information set annotated by a single mark terminal, in which case the video classification model can be trained using the training method of a single-label classification model. Alternatively, the classification information set corresponding to an annotated video may be a new annotation information set generated by the above executing subject by aggregating the classification information sets of multiple copies of the same annotated video. For example, when the annotation information set takes the form of a vector, the vector may contain multiple elements whose value is 1, characterizing that the annotated video belongs to each of the categories those elements represent, while the remaining elements have the value 0, characterizing that the video does not belong to the categories they represent. In this case, the video classification model can be trained using the training method of a multi-label classification model.
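Aggregating the classification information sets of multiple copies of the same annotated video into one multi-label vector amounts to an element-wise OR. A minimal sketch, assuming all vectors share the same class ordering; `aggregate_labels` is a hypothetical helper, not a name from the patent.

```python
def aggregate_labels(label_vectors):
    """Merge per-terminal one-hot vectors for the same video into a
    single multi-label target vector via element-wise OR."""
    merged = [0] * len(label_vectors[0])
    for vec in label_vectors:
        for i, v in enumerate(vec):
            if v:
                merged[i] = 1
    return merged

# The same video confirmed by the "seashore" and "hotel" terminals:
merged = aggregate_labels([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0]])
# merged -> [1, 1, 0, 0, 0]
```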
Step 406: using a machine learning method, train an updated video classification model by taking the annotated videos in the set of annotated videos as the input of the video classification model and the classification information set corresponding to each input annotated video as the desired output of the video classification model.
In the present embodiment, the above executing subject may use a machine learning method to train the video classification model, taking each annotated video in the set as input and the classification information set corresponding to that video as desired output, thereby obtaining an updated video classification model.
Specifically, for each annotated video input during training, an actual output is obtained, where the actual output is the data actually produced by the video classification model and characterizes a classification information set. The above executing subject may then use gradient descent, based on the actual output and the desired output, to adjust the parameters of the video classification model, using the model obtained after each parameter adjustment as the video classification model for the next round of training, and ending training when a preset training termination condition is met, so that the updated video classification model is obtained. It should be noted that the preset training termination condition may include, but is not limited to, at least one of the following: the training time exceeds a preset duration; the number of training iterations exceeds a preset count; the computed loss value is less than a preset loss threshold. Here, the loss value is computed using a preset loss function (for example, a softmax loss function) and is a numerical value characterizing the difference between the actual output and the desired output.
Here, the above executing subject may train the initial model using a batch training algorithm or a stochastic training algorithm; the embodiments of the present application do not limit this.
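The termination logic described above (preset duration, preset iteration count, preset loss threshold) can be sketched as a driver loop. The gradient-descent step itself is stubbed out here: `train_until_done` and the stand-in step function are illustrative assumptions, not the disclosed training procedure.

```python
import time

def train_until_done(step_fn, max_seconds=60.0, max_steps=1000,
                     loss_threshold=0.01):
    """Run training steps until any preset termination condition is met:
    elapsed time, step count, or loss below its threshold. `step_fn`
    performs one gradient-descent update and returns the loss value."""
    start = time.monotonic()
    loss = float("inf")
    steps = 0
    while (time.monotonic() - start < max_seconds
           and steps < max_steps
           and loss >= loss_threshold):
        loss = step_fn()
        steps += 1
    return steps, loss

# Stand-in step whose loss shrinks each update (a real step would run a
# forward pass, compute a softmax loss against the desired output, and
# backpropagate to adjust the model parameters).
losses = iter([0.8, 0.4, 0.2, 0.08, 0.008])
steps, final = train_until_done(lambda: next(losses), loss_threshold=0.01)
# steps -> 5, final -> 0.008 (the fifth loss drops below the threshold)
```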
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the process 400 of the method for processing video in the present embodiment highlights the step of updating the video classification model. Updating the video classification model with the annotated videos obtained from the mark terminals enlarges the pool of training samples; moreover, because the classification information set of each annotated video is determined manually and is therefore highly accurate, the update can further improve the accuracy with which the video classification model classifies videos.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for processing video. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for processing video of the present embodiment includes: a first acquisition unit 501, configured to obtain a video to be marked; a classification unit 502, configured to input the video to be marked into a pre-trained video classification model to obtain a classification information set and the probability value corresponding to each classification information, where classification information characterizes the category to which the video to be marked belongs; a selection unit 503, configured to select classification information from the classification information set, where the probability value corresponding to the selected classification information is greater than or equal to a preset probability threshold; and a storage unit 504, configured to, for each piece of selected classification information, store the video to be marked into the mark queue that has a pre-established correspondence with that classification information, so that the video to be marked is sent to the mark terminal that has a pre-established correspondence with the mark queue.
In the present embodiment, the first acquisition unit 501 may obtain the video to be marked remotely through a wired or wireless connection, or obtain it locally. Here, the video to be marked is a video on which classification annotation is to be performed.
In the present embodiment, the classification unit 502 may input the video to be marked into a pre-trained video classification model to obtain a classification information set and the probability value corresponding to each classification information. The video classification model characterizes the correspondence between a video to be marked and a classification information set, and each classification information characterizes a category to which the video to be marked may belong. Classification information may include, but is not limited to, information in at least one of the following forms: numbers, text, symbols, and the like. As an example, the classification information set may include classification information in the following written form: seashore, hotel, automobile, forest, bedroom. As another example, the classification information set may include classification information in the following numerical form: 001, 002, 003, 004, 005, where each classification information characterizes one video category.
In the present embodiment, the probability value corresponding to a classification information characterizes the probability that the video to be marked belongs to the video category that the classification information represents. For example, if the probability value corresponding to the classification information "seashore" is 0.6, that probability value characterizes that the probability that the category of the video to be marked is "seashore" is 0.6.
In general, the video classification model may include a feature extraction part and a classification part. The feature extraction part extracts feature data characterizing various features of the video to be marked (for example, color features, shape features, and so on). The classification part classifies the feature data to obtain the classification information set of the video to be marked. As an example, the video classification model may be a convolutional neural network model: the feature extraction part includes convolutional layers, pooling layers, and so on, and generates feature data from the video frames included in the video to be marked (which may be all of the video frames or only some of them); the classification part includes a fully connected layer, which joins the generated feature data into a single feature vector and classifies that vector, finally obtaining the classification information set of the video to be marked.
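The two-part structure described above (per-frame feature extraction, pooling into one feature vector, then classification) can be illustrated with a deliberately tiny stand-in. Real convolution and pooling layers are replaced here by simple per-frame statistics; every name and number below is an assumption for illustration only.

```python
import math

def extract_features(frame):
    """Stand-in for the convolution/pooling stage: reduce a frame (a 2-D
    list of pixel intensities) to a tiny feature vector (mean, max)."""
    flat = [p for row in frame for p in row]
    return [sum(flat) / len(flat), max(flat)]

def classify(feature_vec, weights, biases):
    """Stand-in for the fully connected layer: one sigmoid score per class."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, feature_vec)) + b)))
            for ws, b in zip(weights, biases)]

frames = [[[0.1, 0.2], [0.3, 0.4]], [[0.5, 0.6], [0.7, 0.8]]]
per_frame = [extract_features(f) for f in frames]              # feature data per frame
video_vec = [sum(col) / len(col) for col in zip(*per_frame)]   # joined into one vector
scores = classify(video_vec, weights=[[1.0, 1.0]], biases=[0.0])
# scores holds one probability per class, each strictly between 0 and 1
```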
In the present embodiment, the selection unit 503 may select classification information from the classification information set, where the probability value corresponding to the selected classification information is greater than or equal to a preset probability threshold.
In the present embodiment, for each piece of selected classification information, the storage unit 504 may store the video to be marked into the mark queue that has a pre-established correspondence with that classification information, so that the video to be marked is sent to the mark terminal that has a pre-established correspondence with the mark queue (for example, the mark terminal shown in Fig. 1).
Here, a mark queue may be a pre-set storage area for storing videos to be marked in order. The videos to be marked in a mark queue can be sent, in the order in which they were stored, to the mark terminal corresponding to that queue.
It should be noted that the mark terminal corresponding to each of the above mark queues may be hardware, or it may be software. For example, each mark terminal may correspond to an annotation interface through which annotation personnel can label the video to be marked with the corresponding classification information.
By using mark queues, each video to be marked can be sent to a targeted mark terminal, and the annotation personnel at that terminal only need to judge whether the received video belongs to the single video category corresponding to the queue. This simplifies the annotation process and helps improve annotation efficiency.
In some optional implementations of the present embodiment, the selection unit 503 may be further configured to: in response to determining that the number of classification information items in the classification information set whose corresponding probability value is greater than or equal to the preset probability threshold is greater than a preset quantity, select the preset quantity of classification information items from among the classification information items whose corresponding probability value is greater than or equal to the preset probability threshold.
In some optional implementations of the present embodiment, the selection unit 503 may be further configured to: in response to determining that the number of classification information items in the classification information set whose corresponding probability value is greater than or equal to the preset probability threshold is less than or equal to the preset quantity, select the classification information items whose corresponding probability value is greater than or equal to the preset probability threshold.
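The two optional selection branches can be folded into one sketch: keep every class at or above the threshold, and cap the result at the preset quantity only when more than that many qualify. The function name `select_classes` and the concrete threshold and cap values are illustrative assumptions.

```python
def select_classes(class_probs, threshold=0.6, max_count=2):
    """Select classification information per the two optional branches:
    keep classes whose probability meets the threshold; if more than
    `max_count` qualify, keep only the `max_count` most probable."""
    qualified = [(c, p) for c, p in class_probs.items() if p >= threshold]
    if len(qualified) > max_count:
        qualified = sorted(qualified, key=lambda cp: cp[1], reverse=True)[:max_count]
    return [c for c, _ in qualified]

probs = {"seashore": 0.8, "hotel": 0.7, "automobile": 0.6, "forest": 0.4}
# Three classes qualify but max_count is 2, so only the top two are kept:
print(select_classes(probs))  # -> ['seashore', 'hotel']
```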
In some optional implementations of the present embodiment, the video classification model may be pre-trained according to the following steps: obtain a training sample set, where each training sample includes a sample video and a sample classification information set annotated on the sample video in advance; then, using a machine learning method, take the sample video included in each training sample as input and the sample classification information set corresponding to that sample video as desired output, and train to obtain the video classification model.
In some optional implementations of the present embodiment, the video classification model is a multi-label classification model.
In some optional implementations of the present embodiment, the apparatus may further include: a second acquisition unit (not shown), configured to obtain a set of annotated videos, where an annotated video is a video obtained after a mark terminal annotates a received video to be marked with classification information; and a training unit (not shown), configured to, using a machine learning method, take the annotated videos in the set of annotated videos as the input of the video classification model and the classification information set corresponding to each input annotated video as the desired output of the video classification model, and train to obtain an updated video classification model.
In the apparatus provided by the above embodiment of the disclosure, the obtained video to be marked is input into a pre-trained video classification model to obtain a classification information set and the probability value corresponding to each classification information; classification information is then selected from the set according to the probability values; and finally the video to be marked is stored into the mark queues that have a pre-established correspondence with the selected classification information, so that the video is sent to the mark terminals corresponding to those queues for annotation. The video classification model and the mark queues thus combine machine recognition with manual annotation, improving both the accuracy and the efficiency of video annotation.
Referring now to Fig. 6, a structural schematic diagram of an electronic device 600 suitable for implementing the embodiments of the present disclosure (for example, the server or terminal device shown in Fig. 1) is illustrated. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing apparatus (for example, a central processing unit, a graphics processor, etc.) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following apparatuses may be connected to the I/O interface 605: an input apparatus 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output apparatus 607 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; a storage apparatus 608 including, for example, a magnetic tape, hard disk, etc.; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate with other devices, wired or wirelessly, to exchange data. Although Fig. 6 shows an electronic device 600 with various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or present; more or fewer apparatuses may alternatively be implemented or present. Each box shown in Fig. 6 may represent one apparatus or, as needed, multiple apparatuses.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above-described functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain a video to be marked; input the video to be marked into a pre-trained video classification model to obtain a classification information set and the probability value corresponding to each classification information, where classification information characterizes the category to which the video to be marked belongs; select classification information from the classification information set, where the probability value corresponding to the selected classification information is greater than or equal to a preset probability threshold; and, for each piece of selected classification information, store the video to be marked into the mark queue that has a pre-established correspondence with that classification information, so that the video to be marked is sent to the mark terminal that has a pre-established correspondence with the mark queue.
The computer program code for executing the operations of the embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to the various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including a first acquisition unit, a classification unit, a selection unit, and a storage unit. The names of these units do not, in certain cases, constitute a limitation on the units themselves; for example, the first acquisition unit may also be described as "a unit for obtaining a video to be marked".
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the invention scope involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.
Claims (14)
1. A method for processing video, comprising:
obtaining a video to be marked;
inputting the video to be marked into a pre-trained video classification model to obtain a classification information set and a probability value corresponding to each classification information, wherein classification information characterizes a category to which the video to be marked belongs;
selecting classification information from the classification information set, wherein the probability value corresponding to the selected classification information is greater than or equal to a preset probability threshold; and
for each piece of selected classification information, storing the video to be marked into a mark queue that has a pre-established correspondence with that classification information, so that the video to be marked is sent to a mark terminal that has a pre-established correspondence with the mark queue.
2. The method according to claim 1, wherein the selecting classification information from the classification information set comprises:
in response to determining that the number of classification information items in the classification information set whose corresponding probability value is greater than or equal to the preset probability threshold is greater than a preset quantity, selecting the preset quantity of classification information items from among the classification information items whose corresponding probability value is greater than or equal to the preset probability threshold.
3. The method according to claim 1, wherein the selecting classification information from the classification information set comprises:
in response to determining that the number of classification information items in the classification information set whose corresponding probability value is greater than or equal to the preset probability threshold is less than or equal to a preset quantity, selecting the classification information items whose corresponding probability value is greater than or equal to the preset probability threshold.
4. The method according to claim 1, wherein the video classification model is pre-trained according to the following steps:
obtaining a training sample set, wherein each training sample includes a sample video and a sample classification information set annotated on the sample video in advance; and
using a machine learning method, taking the sample video included in each training sample in the training sample set as input and the sample classification information set corresponding to the input sample video as desired output, and training to obtain the video classification model.
5. The method according to one of claims 1-4, wherein the video classification model is a multi-label classification model.
6. The method according to one of claims 1-4, wherein, after the storing, for each piece of selected classification information, the video to be marked into the mark queue that has a pre-established correspondence with that classification information so that the video to be marked is sent to the mark terminal that has a pre-established correspondence with the mark queue, the method further comprises:
obtaining a set of annotated videos, wherein an annotated video is a video obtained after a mark terminal annotates a received video to be marked with classification information; and
using a machine learning method, taking the annotated videos in the set of annotated videos as the input of the video classification model and the classification information set corresponding to each input annotated video as the desired output of the video classification model, and training to obtain an updated video classification model.
7. An apparatus for processing video, comprising:
a first obtaining unit, configured to obtain a video to be annotated;
a classification unit, configured to input the video to be annotated into a pre-trained video classification model to obtain a classification information set and a probability value corresponding to each piece of classification information, wherein the classification information characterizes the category to which the video to be annotated belongs;
a selection unit, configured to select classification information from the classification information set, wherein the probability value corresponding to the selected classification information is greater than or equal to a preset probability threshold; and
a storage unit, configured to, for each piece of classification information in the selected classification information, store the video to be annotated in the annotation queue whose correspondence with that classification information has been established in advance, so that the video to be annotated is sent to the annotation terminal whose correspondence with the annotation queue has been established in advance.
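The storage unit of claim 7 can be sketched as a per-label queue structure. This is an illustrative stand-in (class and method names are assumptions, not from the patent): one annotation queue per classification label, with a video enqueued once for every selected label so that the terminal bound to each queue receives it.

```python
# Sketch of claim 7's storage unit (illustrative names): a video to be
# annotated is stored in one annotation queue per selected classification
# label; each annotation terminal pulls from the queue it is bound to.

from collections import defaultdict

class AnnotationDispatcher:
    def __init__(self):
        # label -> queue of pending video ids
        self.queues = defaultdict(list)

    def store(self, video_id, selected_labels):
        for label in selected_labels:
            self.queues[label].append(video_id)

    def next_for_terminal(self, label):
        # the annotation terminal bound to `label` pulls from its queue
        return self.queues[label].pop(0) if self.queues[label] else None
```

Enqueuing one copy per label means a multi-label video reaches every relevant terminal, matching the "for each piece of classification information" wording of the claim.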
8. The apparatus according to claim 7, wherein the selection unit is further configured to:
in response to determining that the number of pieces of classification information in the classification information set whose corresponding probability values are greater than or equal to the preset probability threshold is greater than a preset number, select the preset number of pieces of classification information from among the classification information whose corresponding probability values are greater than or equal to the preset probability threshold.
9. The apparatus according to claim 7, wherein the selection unit is further configured to:
in response to determining that the number of pieces of classification information in the classification information set whose corresponding probability values are greater than or equal to the preset probability threshold is less than or equal to the preset number, select the classification information whose corresponding probability values are greater than or equal to the preset probability threshold.
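The two-branch selection logic of claims 8 and 9 can be sketched as follows. This is illustrative only; the claims do not say which labels to keep when more than the preset number qualify, so keeping the highest-probability ones is an assumption made here.

```python
# Sketch of the selection unit's logic in claims 8-9 (illustrative):
# keep labels whose probability meets the preset threshold; if more than
# a preset number qualify (claim 8), keep only that many -- here, the
# highest-probability ones, which is an assumption, not the patent's text.

def select_classification_info(probabilities, threshold, preset_number):
    """probabilities: dict mapping label -> probability in [0, 1]."""
    qualified = [(label, p) for label, p in probabilities.items()
                 if p >= threshold]
    if len(qualified) > preset_number:  # claim 8: too many labels qualify
        qualified.sort(key=lambda item: item[1], reverse=True)
        qualified = qualified[:preset_number]
    # claim 9: at most preset_number qualify -> keep all of them
    return [label for label, _ in qualified]
```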
10. The apparatus according to claim 7, wherein the video classification model is trained in advance as follows:
obtaining a training sample set, wherein a training sample comprises a sample video and a sample classification information set with which the sample video has been annotated in advance; and
using a machine learning method, taking the sample videos included in the training samples of the training sample set as input and the sample classification information sets corresponding to the input sample videos as the desired output, and training to obtain the video classification model.
11. The apparatus according to any one of claims 7-10, wherein the video classification model is a multi-label classification model.
12. The apparatus according to any one of claims 7-10, wherein the apparatus further comprises:
a second obtaining unit, configured to obtain a set of annotated videos, wherein an annotated video is a video obtained after an annotation terminal performs classification information annotation on a received video to be annotated; and
a training unit, configured to, using a machine learning method, take the annotated videos in the set of annotated videos as the input of the video classification model and the classification information sets corresponding to the input annotated videos as the desired output of the video classification model, and train to obtain an updated video classification model.
13. An electronic device, comprising:
one or more processors; and
a storage device on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910084731.1A CN109815365A (en) | 2019-01-29 | 2019-01-29 | Method and apparatus for handling video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910084731.1A CN109815365A (en) | 2019-01-29 | 2019-01-29 | Method and apparatus for handling video |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109815365A true CN109815365A (en) | 2019-05-28 |
Family
ID=66605552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910084731.1A Pending CN109815365A (en) | 2019-01-29 | 2019-01-29 | Method and apparatus for handling video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109815365A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111291688A (en) * | 2020-02-12 | 2020-06-16 | 咪咕文化科技有限公司 | Video tag obtaining method and device |
CN111582360A (en) * | 2020-05-06 | 2020-08-25 | 北京字节跳动网络技术有限公司 | Method, apparatus, device and medium for labeling data |
CN112434548A (en) * | 2019-08-26 | 2021-03-02 | 杭州海康威视数字技术股份有限公司 | Video labeling method and device |
WO2021082499A1 (en) * | 2019-10-31 | 2021-05-06 | 百果园技术(新加坡)有限公司 | Resource annotation management system |
CN112905291A (en) * | 2021-03-19 | 2021-06-04 | 北京字节跳动网络技术有限公司 | Data display method and device and electronic equipment |
CN112989114A (en) * | 2021-02-04 | 2021-06-18 | 有米科技股份有限公司 | Video information generation method and device applied to video screening |
CN113160984A (en) * | 2021-04-20 | 2021-07-23 | 郑州大学第一附属医院 | Nasal nursing sensitivity quality evaluation index system |
CN116910164A (en) * | 2023-07-21 | 2023-10-20 | 北京火山引擎科技有限公司 | Label generation method and device for content push, electronic equipment and medium |
CN112434548B (en) * | 2019-08-26 | 2024-06-04 | 杭州海康威视数字技术股份有限公司 | Video labeling method and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170347159A1 (en) * | 2016-05-30 | 2017-11-30 | Samsung Sds Co., Ltd. | Qoe analysis-based video frame management method and apparatus |
CN107679560A (en) * | 2017-09-15 | 2018-02-09 | 广东欧珀移动通信有限公司 | Data transmission method, device, mobile terminal and computer-readable recording medium |
CN108694217A (en) * | 2017-04-12 | 2018-10-23 | 合信息技术(北京)有限公司 | The label of video determines method and device |
CN108777815A (en) * | 2018-06-08 | 2018-11-09 | Oppo广东移动通信有限公司 | Method for processing video frequency and device, electronic equipment, computer readable storage medium |
CN108846375A (en) * | 2018-06-29 | 2018-11-20 | 山东大学 | A kind of multi-modal Cooperative Study method and device neural network based |
CN109190482A (en) * | 2018-08-06 | 2019-01-11 | 北京奇艺世纪科技有限公司 | Multi-tag video classification methods and system, systematic training method and device |
2019
- 2019-01-29 CN CN201910084731.1A patent/CN109815365A/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170347159A1 (en) * | 2016-05-30 | 2017-11-30 | Samsung Sds Co., Ltd. | Qoe analysis-based video frame management method and apparatus |
KR20170135069A (en) * | 2016-05-30 | 2017-12-08 | 삼성에스디에스 주식회사 | Method and apparatus for managing video frame based on QoE analysis |
CN108694217A (en) * | 2017-04-12 | 2018-10-23 | 合信息技术(北京)有限公司 | The label of video determines method and device |
CN107679560A (en) * | 2017-09-15 | 2018-02-09 | 广东欧珀移动通信有限公司 | Data transmission method, device, mobile terminal and computer-readable recording medium |
CN108777815A (en) * | 2018-06-08 | 2018-11-09 | Oppo广东移动通信有限公司 | Method for processing video frequency and device, electronic equipment, computer readable storage medium |
CN108846375A (en) * | 2018-06-29 | 2018-11-20 | 山东大学 | A kind of multi-modal Cooperative Study method and device neural network based |
CN109190482A (en) * | 2018-08-06 | 2019-01-11 | 北京奇艺世纪科技有限公司 | Multi-tag video classification methods and system, systematic training method and device |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112434548A (en) * | 2019-08-26 | 2021-03-02 | 杭州海康威视数字技术股份有限公司 | Video labeling method and device |
CN112434548B (en) * | 2019-08-26 | 2024-06-04 | 杭州海康威视数字技术股份有限公司 | Video labeling method and device |
WO2021082499A1 (en) * | 2019-10-31 | 2021-05-06 | 百果园技术(新加坡)有限公司 | Resource annotation management system |
CN111291688A (en) * | 2020-02-12 | 2020-06-16 | 咪咕文化科技有限公司 | Video tag obtaining method and device |
CN111291688B (en) * | 2020-02-12 | 2023-07-14 | 咪咕文化科技有限公司 | Video tag acquisition method and device |
CN111582360A (en) * | 2020-05-06 | 2020-08-25 | 北京字节跳动网络技术有限公司 | Method, apparatus, device and medium for labeling data |
CN111582360B (en) * | 2020-05-06 | 2023-08-15 | 北京字节跳动网络技术有限公司 | Method, apparatus, device and medium for labeling data |
CN112989114A (en) * | 2021-02-04 | 2021-06-18 | 有米科技股份有限公司 | Video information generation method and device applied to video screening |
CN112989114B (en) * | 2021-02-04 | 2023-08-29 | 有米科技股份有限公司 | Video information generation method and device applied to video screening |
CN112905291A (en) * | 2021-03-19 | 2021-06-04 | 北京字节跳动网络技术有限公司 | Data display method and device and electronic equipment |
CN113160984A (en) * | 2021-04-20 | 2021-07-23 | 郑州大学第一附属医院 | Nasal nursing sensitivity quality evaluation index system |
CN116910164A (en) * | 2023-07-21 | 2023-10-20 | 北京火山引擎科技有限公司 | Label generation method and device for content push, electronic equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109815365A (en) | Method and apparatus for handling video | |
CN109902186A (en) | Method and apparatus for generating neural network | |
CN109858445A (en) | Method and apparatus for generating model | |
CN109460513A (en) | Method and apparatus for generating clicking rate prediction model | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN109800732A (en) | The method and apparatus for generating model for generating caricature head portrait | |
CN109740018A (en) | Method and apparatus for generating video tab model | |
CN110288049A (en) | Method and apparatus for generating image recognition model | |
CN108446387A (en) | Method and apparatus for updating face registration library | |
CN109002842A (en) | Image-recognizing method and device | |
CN110110811A (en) | Method and apparatus for training pattern, the method and apparatus for predictive information | |
CN108595628A (en) | Method and apparatus for pushed information | |
CN110378474A (en) | Fight sample generating method, device, electronic equipment and computer-readable medium | |
CN110162670A (en) | Method and apparatus for generating expression packet | |
CN109993150A (en) | The method and apparatus at age for identification | |
CN109947989A (en) | Method and apparatus for handling video | |
CN109086719A (en) | Method and apparatus for output data | |
CN109308490A (en) | Method and apparatus for generating information | |
CN109359170A (en) | Method and apparatus for generating information | |
CN108960316A (en) | Method and apparatus for generating model | |
CN109829432A (en) | Method and apparatus for generating information | |
CN108345387A (en) | Method and apparatus for output information | |
CN109919244A (en) | Method and apparatus for generating scene Recognition model | |
CN109360028A (en) | Method and apparatus for pushed information | |
CN109299477A (en) | Method and apparatus for generating text header |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||