CN109089128A - Video processing method, apparatus, device and medium - Google Patents
Video processing method, apparatus, device and medium
- Publication number
- CN109089128A (application number CN201810752194.9A)
- Authority
- CN
- China
- Prior art keywords
- video
- target
- featured videos
- target video
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234309—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4781—Games
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Databases & Information Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention discloses a video processing method, apparatus, device and medium. The method comprises: setting feature information according to the video category of a target video; performing feature matching on the target video to determine a target frame in the target video that matches the feature information; determining a highlight video segment in the target video according to the target frame and a preset highlight interception rule, the highlight video segment containing the target frame and the interception rule corresponding to the feature information; and extracting the highlight video segment from the target video. The method, apparatus, device and medium provided by the present application address the prior-art problems that watching historical game live videos wastes viewers' time and lowers the probability of viewers obtaining highlight video clips, thereby achieving the technical effect of saving viewing time.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a video processing method, apparatus, device and medium.
Background technique
Currently, with advances in network communication technology and the speed-up of broadband networks, network live streaming has seen ever wider development and application. So that users do not miss a streamer's best live videos, video websites often record the streamer's historical live videos and make them available for later viewing.

In game live streams there are often some exciting game scenarios, for example a successful-kill clip in a kill-type game, a successful-acquisition clip in a collection-type game, or a marriage clip in a dating-type game. These highlight segments are usually the most exciting and most worthwhile parts of the stream, yet to watch them a viewer generally has to play the entire historical live video from the beginning to be sure of not missing them. As a result, viewers waste considerable time watching video they are less interested in, and can still easily miss the highlight moments.

It can be seen that, in the prior art, watching historical game live videos wastes viewers' time and lowers the probability that viewers obtain the highlight video clips.
Summary of the invention
The present invention provides a video processing method, apparatus, device and medium to solve the prior-art problems that watching historical game live videos wastes viewers' time and lowers the probability of viewers obtaining highlight video clips.
In a first aspect, the present invention provides a video processing method, comprising:

setting feature information according to the video category of a target video;

performing feature matching on the target video to determine a target frame in the target video that matches the feature information;

determining a highlight video segment in the target video according to the target frame and a preset highlight interception rule, wherein the highlight video segment contains the target frame and the interception rule corresponds to the feature information;

extracting the highlight video segment from the target video.
Optionally, setting feature information according to the video category of the target video comprises: determining, according to the video category of the target video, the feature information corresponding to that category from a preset feature information library, the feature information being information extracted from highlight images, where a highlight image is an image from a video of that category.

Optionally, when the target video is a game video that includes reaching a target plot point, the feature information is set to information extracted from a target-reached success picture; when the target video is a game video that includes an acquisition plot point, the feature information is set to information extracted from an acquisition-success picture.
Optionally, determining the highlight video segment in the target video according to the target frame and the preset highlight interception rule comprises: determining, according to the preset interception rule, the playing duration from the start frame of the highlight segment to the target frame and the playing duration from the target frame to the end frame of the highlight segment, wherein, in the target video, the play position of the start frame is at or before the target frame, and the play position of the end frame is at or after the target frame.
Optionally, extracting the highlight video segment from the target video comprises: obtaining attribute information of the target video; judging, according to the attribute information, whether the target video requires the timestamp-precise extraction mode; if so, decoding the target video and, according to the highlight interception rule and the timestamp information of the decoded target video, extracting the highlight segment from the decoded video; if not, searching the undecoded target video for the video unit whose timestamp information is closest to that of the target frame, wherein the target video comprises N video units and N is a positive integer greater than 1, and determining and extracting the highlight segment according to that closest video unit.
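The cheap, non-decoding path hinges on a nearest-timestamp search over the still-encoded video units. A sketch of that search (the timestamp list as a plain Python list is an illustrative stand-in for whatever container the stream actually uses):

```python
def nearest_unit(unit_timestamps, target_ts):
    """Timestamp-precise mode decodes the whole video and cuts exactly;
    the cheap path instead picks, in the still-encoded stream, the index
    of the video unit whose timestamp is closest to the target frame's."""
    return min(range(len(unit_timestamps)),
               key=lambda i: abs(unit_timestamps[i] - target_ts))

# Units arrive with timestamps (seconds); the target frame sits at t = 7.2:
print(nearest_unit([0.0, 2.0, 4.0, 6.0, 8.0], 7.2))  # 4 (the unit at 8.0)
```

The trade-off the patent describes falls out directly: this path never touches a decoder, at the cost of snapping the cut to unit granularity rather than to the exact frame.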
Optionally, if multiple highlight video segments are extracted from the target video, the method further comprises, after the extraction: splicing the multiple highlight segments to form a spliced video; and, upon receiving a request from a client for the spliced video, sending the requested spliced video to the client for playback.
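Splicing amounts to concatenating the segments and re-stamping their frames onto a single timeline. A toy sketch, under the simplifying (and purely illustrative) assumptions that a segment is a list of frame payloads and each frame occupies one time unit:

```python
def splice(segments):
    """Concatenate highlight segments into one spliced video,
    re-stamping each frame with its offset in the spliced timeline."""
    spliced, t = [], 0
    for seg in segments:
        for frame in seg:
            spliced.append((t, frame))  # (new timestamp, frame payload)
            t += 1
    return spliced

print(splice([["a1", "a2"], ["b1"]]))  # [(0, 'a1'), (1, 'a2'), (2, 'b1')]
```

A real implementation would of course also handle codec parameters and audio, which the patent leaves to the underlying container format.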
Optionally, after the highlight video segment is determined in the target video, the method further comprises: obtaining play-time information of the highlight segment within the target video; and, according to the play-time information, marking the highlight segment on the playing progress bar of the target video at the target position corresponding to that play-time information.
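Placing the mark is a proportional mapping from play time to progress-bar position. A sketch (pixel units and rounding are assumptions; the patent only requires that the position correspond to the play time):

```python
def marker_px(play_t, duration, bar_width_px):
    """Map a highlight segment's play time (seconds) to a pixel offset
    on the progress bar so the client can draw a mark there."""
    return round(play_t / duration * bar_width_px)

# A highlight starting at 90 s of a 600 s video, on a 400 px bar:
print(marker_px(90.0, 600.0, 400))  # 60
```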
In a second aspect, a video processing apparatus is provided, comprising:

a setting unit, configured to set feature information according to the video category of a target video;

a matching unit, configured to perform feature matching on the target video to determine a target frame in the target video that matches the feature information;

a determination unit, configured to determine a highlight video segment in the target video according to the target frame and a preset highlight interception rule, the highlight segment containing the target frame and the interception rule corresponding to the feature information;

an extraction unit, configured to extract the highlight video segment from the target video.
In a third aspect, an electronic device is provided, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, the processor implementing the method of the first aspect when executing the program.

In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, the program implementing the method of the first aspect when executed by a processor.
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:

The method, apparatus, device and medium provided by the embodiments of the present application set feature information according to the video category of the target video, determine by feature matching the target frame in the target video that matches the feature information, and then, according to the target frame and the preset highlight interception rule, determine and extract the highlight segment that viewers care most about. Viewers therefore need not watch the entire target video; by watching only the highlight segment they see the most exciting clips they least want to miss, which effectively saves viewing time and lets viewers obtain the highlight clips within a short period.

Further, whether the target video requires the timestamp-precise extraction mode is judged from its attribute information. When it does, the target video is decoded and the highlight segment is extracted according to the timestamp information of the decoded video; when it does not, the video unit whose timestamp is closest to that of the target frame is located directly in the undecoded video. This effectively reduces the extraction time for videos that do not need precise extraction, while preserving extraction accuracy for those that do.

Further, after extraction, the multiple highlight segments may be spliced into a single spliced video for playback, and/or the play-time information of each highlight segment may be marked on the playing progress bar of the target video, so that users can obtain the highlights more conveniently, further improving viewing convenience and saving operating time.
The above is merely an overview of the technical solution of the present invention. To make the technical means of the present invention clearer so that it can be implemented in accordance with the contents of the specification, and to make the above and other objects, features and advantages of the present invention more comprehensible, specific embodiments of the present invention are set forth below.
Description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the video processing method in an embodiment of the present invention;

Fig. 2 is a schematic diagram of extraction when the timestamp-precise extraction mode is not used, in an embodiment of the present invention;

Fig. 3 is a schematic structural diagram of the video processing apparatus in an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of the electronic device in an embodiment of the present invention;

Fig. 5 is a schematic structural diagram of the storage medium in an embodiment of the present invention.
Specific embodiment
The embodiments of the present application provide a video processing method, apparatus, device and medium to solve the prior-art problems that watching historical game live videos wastes viewers' time and lowers the probability of viewers obtaining highlight clips, achieving the technical effect of saving viewers' time so that they can obtain the highlight clips within a short period.

The general idea of the technical solutions in the embodiments of the present application is as follows:

Feature information is set according to the video category of the target video, and the target frame matching the feature information is determined in the target video by feature matching; then, according to the target frame and the preset highlight interception rule, the highlight segment that viewers care most about is determined in the target video and extracted, so that viewers need not watch the entire target video. Watching the highlight segment directly shows them the most exciting clips they least want to miss, effectively saving viewing time and letting viewers obtain the highlight clips within a short period.

To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only a part, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment one
This embodiment provides a video processing method which, as shown in Fig. 1, comprises:

Step S101: setting feature information according to the video category of a target video;

Step S102: performing feature matching on the target video to determine a target frame in the target video that matches the feature information;

Step S103: determining a highlight video segment in the target video according to the target frame and a preset highlight interception rule, the highlight segment containing the target frame and the interception rule corresponding to the feature information;

Step S104: extracting the highlight video segment from the target video.
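The four steps can be sketched end to end. Everything here is an illustrative assumption rather than the patent's implementation: frames are modelled as sets of on-screen labels, the feature library maps a category to one feature, and the rule table counts frames rather than seconds:

```python
FEATURE_LIBRARY = {"kill_game": "KO"}   # video category → feature information
RULES = {"KO": (2, 0)}                  # feature → (frames before target, frames after)

def process(category, frames):
    feature = FEATURE_LIBRARY[category]           # S101: set feature info
    target = next(i for i, f in enumerate(frames)
                  if feature in f)                # S102: match the target frame
    pre, post = RULES[feature]                    # S103: apply interception rule
    start = max(0, target - pre)
    end = min(len(frames) - 1, target + post)
    return frames[start:end + 1]                  # S104: extract the segment

frames = [{"hud"}, {"hud"}, {"hud", "aim"}, {"hud", "KO"}, {"hud"}]
print(process("kill_game", frames))  # the 3 frames ending at the 'KO' frame
```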
In the embodiments of the present application the method may be applied to a server, to a viewer client, or to a streamer client, without restriction here; the device running it may be an electronic device such as a smartphone, desktop computer, notebook or tablet computer, likewise without restriction.
The specific implementation steps of the method provided in this embodiment are described in detail below with reference to Fig. 1:

First, step S101 is executed: feature information is set according to the video category of the target video.

It should be noted that the target video may be a video uploaded by the streamer; it may also be a video stored by the server during a previous live broadcast; or it may be the live video currently being broadcast. If the target video is the live video currently being broadcast, then the method provided by this embodiment performs real-time target-frame matching and highlight-segment extraction on the received live video stream during the broadcast.
In specific implementations, different video categories of the target video correspond to different feature information; the feature information may be voice feature information or image feature information, without restriction here. Examples of each follow:

First case: the feature information is image feature information.

That is, according to the video category of the target video, the feature information corresponding to that category is determined from a preset feature information library; the feature information is information extracted from highlight images, a highlight image being an image from a video of that category. In other words, similar highlight segments in a target video usually share some essentially identical highlight image frames, so the feature information can be the common image features extracted from those frames.
For example, when the target video is a game video that includes reaching a target plot point, such as a kill plot, the feature information is set to information extracted from the target-reached picture, such as the game's successful-kill picture. Specifically, after a successful kill the video often displays an image prompting the kill, such as the characters "KO", a "count plus 1" caption, or a blood-splatter pattern; these image features can then serve as the feature information.

When the target video is a game video that includes an acquisition plot point, the feature information is set to information extracted from the acquisition-success picture. Specifically, after a successful acquisition the video often displays an image prompting the acquisition, such as a "plus 1" caption or the pattern of the acquired item; these image features can likewise serve as the feature information.
Second case: the feature information is voice feature information.

That is, according to the video category of the target video, the feature information corresponding to that category is determined from a preset feature information library; the feature information is information extracted from the video's audio file. In other words, similar highlight segments in a target video usually share some essentially identical voice information, so the feature information can be the common voice features extracted from that information.

For example, when the target video is a game video that includes a kill plot, the feature information is set to information extracted from the game's successful-kill audio. Specifically, after a successful kill the video often plays a voice prompting the kill, such as the pronunciation of "KO", the phrase "kill succeeded", or a scream; these voice features can then serve as the feature information.

When the target video is a lottery-type video, the feature information is set to voice information extracted from the prize-announcement audio. Specifically, at announcement time the video often plays a voice prompting the announcement, such as a specific piece of music or phrases like "results coming up"; these voice features can then serve as the feature information.
Of course, in specific implementations the feature information is not limited to the above two kinds; it may also be time information, among others, without restriction and without exhaustive enumeration here.

In specific implementations, one or more kinds of feature information may be set for a target video according to the video type and content, so that highlight segments of one or more kinds of content can subsequently be extracted.
Then, step S102 is executed: feature matching is performed on the target video to determine the target frame in the target video that matches the feature information.

In specific implementations, different feature information calls for different matching processes:

If the feature information is image feature information, the feature information is image-matched against every frame of the target video, or against frames of the target video sampled at intervals; when a frame is matched that contains an image corresponding to the feature information, that frame is determined to be the target frame. For example, suppose the feature information is a blood-splatter pattern; then when a frame containing that pattern is matched, that frame image is taken as the target frame.

If the feature information is voice feature information, the feature information is audio-matched against the audio file of the target video; when some stretch of audio matches the feature information, the frame corresponding to that audio is the target frame — specifically, the frame whose timestamp information is consistent with the timestamp information of that stretch of the audio file. For example, suppose the feature information is the pronunciation of "kill succeeded"; then when the audio containing that sound is matched, the frame with the same timestamp as that audio is taken as the target frame.
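For the image case, the scan over all frames or every k-th frame can be sketched as below; representing a frame as a set of already-detected labels is a deliberate simplification standing in for real template matching, which the patent does not specify:

```python
def find_target_frames(frames, feature, step=1):
    """Scan every `step`-th frame of the video; a frame whose detected
    labels contain the feature (e.g. a blood-splatter pattern) is a
    target frame. Returns the matching frame indices."""
    return [i for i in range(0, len(frames), step) if feature in frames[i]]

frames = [{"run"}, {"aim"}, {"aim", "blood"}, {"idle"}, {"blood"}]
print(find_target_frames(frames, "blood"))           # [2, 4]
print(find_target_frames(frames, "blood", step=2))   # [2, 4]
```

Sampling at intervals (`step > 1`) trades a little recall for a large reduction in matching work, which matters when the matching runs in real time on a live stream.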
Of course, the feature matching methods are not limited to the above two; they are not restricted here and are not exhaustively enumerated.
Next, step S103 is executed: the highlight video segment is determined in the target video according to the target frame and the preset highlight interception rule, the highlight segment containing the target frame and the interception rule corresponding to the feature information.

In the embodiments of the present application, the preset interception rule determines the playing duration from the start frame of the highlight segment to the target frame and the playing duration from the target frame to the end frame of the highlight segment, wherein, in the target video, the play position of the start frame is at or before the target frame, and the play position of the end frame is at or after the target frame.
Specifically, the featured videos interception rule is corresponding with the characteristic information, refers to, different features is believed
Breath has corresponding featured videos interception rule, for example:
Assuming that this feature information is to include in the game video for kill plot, characterization game kills successful information, considers
To excellent aiming and kill probably occur killing successfully first 1 minute or so time, then such characteristic information can be set
Corresponding featured videos interception rule are as follows: determine target frame forward 60s to the video between the target frame be featured videos section.
Assuming that this feature information is in prize drawing class video, characterization starts the information announced the winners in a lottery, it is contemplated that duration of announcing the winners in a lottery is general
The corresponding featured videos interception rule of such characteristic information then can be set are as follows: determines that target frame starts to 180s backward in 180s
Between video be featured videos section.
Of course, besides determining the interception rule from the characteristic information type, the segment duration, and the time relationship to the target frame, there are other ways to define an interception rule. For example, multiple pieces of characteristic information can be set, and the video between the target frames of two of them taken as the featured video segment. For instance, in a lottery video, characteristic information A (characterizing the start of the draw) and characteristic information B (characterizing its end) can be defined; if A matches target frame A and B matches target frame B, the corresponding interception rule can be set as: the video between target frame A and target frame B is the featured video segment.
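The interception rules above can be sketched in a few lines of Python. This is not from the patent; the rule table, names, and the clamping behavior at the video boundaries are illustrative assumptions. The 60 s and 180 s windows follow the two examples in the description, and the two-target-frame variant follows the lottery example.

```python
# Hypothetical interception-rule table: each rule gives
# (seconds before the target frame, seconds after the target frame).
INTERCEPTION_RULES = {
    "kill_success": (60, 0),    # the minute leading up to a successful kill
    "lottery_draw": (0, 180),   # the ~180 s of announcing winners
}

def segment_bounds(rule_name, target_ts, video_duration):
    """Return (start_ts, end_ts) of the featured video segment,
    clamped to the bounds of the target video."""
    before, after = INTERCEPTION_RULES[rule_name]
    start = max(0.0, target_ts - before)
    end = min(video_duration, target_ts + after)
    return start, end

def segment_between(target_ts_a, target_ts_b):
    """Rule variant: the segment spans two matched target frames
    (characteristic information A marks the start, B the end)."""
    return min(target_ts_a, target_ts_b), max(target_ts_a, target_ts_b)
```

For a kill matched at t = 100 s in a 600 s video, `segment_bounds("kill_success", 100.0, 600.0)` yields the window (40.0, 100.0), i.e. the 60 s before the kill.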
After the featured video segment is determined, step S104 is executed: the featured video segment is extracted from the target video. In a specific implementation, determining the featured video segment yields its start timestamp and end timestamp, and the video between these two timestamps is extracted from the target video.
Considering that extracting the featured video segment consumes substantial computing and processing resources, this embodiment also provides a low-resource extraction method, described in detail as follows.

Referring to FIG. 2: because the target video is a live video (live or historical live broadcast), it is transmitted as interleaved video units and audio units, each carrying its own timestamp information. Therefore, in steps S201~S204 this embodiment does not decode the target video; it directly pulls the live stream, demultiplexes it, searches the undecoded target video for the video units whose timestamps are closest to the timestamp of the target frame, and determines and extracts the featured video segment from those units; then, in steps S205~S206, it remultiplexes and saves the extracted featured video segment. For example, as shown in FIG. 2, if the timestamps of video unit 3 and video unit 4 are closest to the timestamp information of the determined featured video segment, the stream is demultiplexed to extract video unit 3 and video unit 4 together with the audio units whose timestamp information corresponds to them, and the video units and audio units are then remultiplexed to form the complete extracted featured video segment.

Because this extraction method does not decode the entire video, it saves considerable computing and processing resources and improves processing speed.
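The "closest video unit" lookup in the low-cost path above can be sketched as a binary search over the per-unit timestamps of the demultiplexed stream. This is an illustrative sketch, not the patent's implementation; the function name and the example timestamps are assumptions.

```python
import bisect

def units_covering(unit_timestamps, seg_start, seg_end):
    """Given the sorted timestamps of demultiplexed (undecoded) video units,
    return the index range [i, j] of the units closest to the segment's
    start and end timestamps, so the segment can be cut without decoding."""
    # Last unit starting at or before seg_start (closest from the left).
    i = max(bisect.bisect_right(unit_timestamps, seg_start) - 1, 0)
    # First unit starting at or after seg_end (closest from the right).
    j = min(bisect.bisect_left(unit_timestamps, seg_end),
            len(unit_timestamps) - 1)
    return i, j
```

With units at 0, 2, 4, 6, and 8 seconds, a segment spanning 3.0–6.5 s maps to units 1 through 4; the corresponding audio units would then be selected by the same timestamps and remultiplexed with them.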
Further, considering that some featured video segments have strict timing requirements, the method can also, before performing featured video extraction, first obtain the attribute information of the target video and judge from it whether the target video requires the timestamp-precise extraction mode. If required, the target video is decoded, and the featured video segment is extracted from the decoded target video according to the featured video interception rule and the timestamp information of the decoded target video. If not required, the undecoded target video is searched for the video units whose timestamp information is closest to that of the target frame, wherein the target video comprises N video units and N is a positive integer greater than 1, and the featured video segment is determined and extracted from those closest units.

That is, based on the kinds of segments each category of characteristic information yields, staff set in advance, in the attribute information of the target video, a flag characterizing whether the timestamp-precise extraction mode is required: for example, the digit after the Ti mark in the attribute information is set to 1 if precise extraction is needed, and to 0 if it is not. Before the subsequent extraction, this preset flag in the attribute information is read to judge whether the target video requires the timestamp-precise extraction mode; if so, the target video is decoded first and cut accurately on per-frame timestamps; if not, the target video is left undecoded and the resource-saving extraction is performed directly on the per-unit timestamps.
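The attribute-driven branch above amounts to a simple dispatch. This sketch is illustrative: the "Ti" flag name follows the description, but the attribute encoding as a dict and the function names are assumptions.

```python
def needs_precise_extraction(attributes: dict) -> bool:
    """True if the attribute information flags the timestamp-precise mode
    (the digit after the hypothetical 'Ti' mark is 1)."""
    return attributes.get("Ti", 0) == 1

def extract(attributes, decode_and_cut, demux_and_cut):
    """Dispatch to the costly frame-accurate path (decode, then cut on
    per-frame timestamps) or the cheap demux-only path (cut on per-unit
    timestamps without decoding)."""
    if needs_precise_extraction(attributes):
        return decode_and_cut()
    return demux_and_cut()
```

Videos with no flag default to the cheap path, matching the intent that only segments with strict timing requirements pay the decoding cost.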
After extraction is complete, the featured video segments can be shown to the user in the following two ways, for ease of viewing:

First, video splicing. That is, if multiple featured video segments are extracted, the multiple featured video segments are spliced to form a splicing video; when a request for the splicing video sent by a client is received, the splicing video is sent to the client for playback.
In a specific implementation, there are many ways to splice the multiple featured video segments; three are given as examples.

The N featured video segments can be spliced into one video with a prompt video inserted before each featured video segment, the prompt video describing the segment about to play, forming the splicing video. That is, a pre-prepared prompt video is inserted before each featured video segment; the prompt video may include the play time of the next featured video segment in the original target video, a description of the content of that next segment, or its video content type, and so on.
The N featured video segments can also be spliced into one video with an interval video inserted between every two featured video segments, the interval video characterizing that the previous segment has finished playing and the next is about to play, forming the splicing video. That is, a pre-prepared interval video is inserted between adjacent featured video segments; the interval video may be a blank video, a default credits video, the host's self-introduction video, or the like.
The N featured video segments can also be spliced into one video with play prompt information superimposed on the opening video of each featured video segment, the prompt information describing the segment being played, forming the splicing video. That is, to avoid adding extra playback time, preset prompt information is composited into the first frame or frames of each featured video segment. The prompt information can be a prompt picture or a prompt voice, which is not limited here; if it is a prompt picture, it can be shown picture-in-picture or as a translucent overlay, which is likewise not limited here.
Of course, in a specific implementation the splicing manner is not limited to the above three: the multiple featured video segments can also be spliced seamlessly in timestamp order to reduce playback and processing time. This is not limited here, nor are further examples enumerated one by one.
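The three splicing variants above differ only in what is interleaved with the segments. This sketch represents segments as labels rather than video data; real code would concatenate media files. The function names and the label representation are illustrative assumptions, not from the patent.

```python
def splice_with_prompts(segments, prompt_for):
    """Variant 1: a descriptive prompt clip before every featured segment."""
    out = []
    for seg in segments:
        out.append(prompt_for(seg))  # e.g. play time / content description
        out.append(seg)
    return out

def splice_with_intervals(segments, interval):
    """Variant 2: an interval clip (blank video, credits, or host intro)
    between every two featured segments."""
    out = []
    for k, seg in enumerate(segments):
        if k > 0:
            out.append(interval)
        out.append(seg)
    return out

def splice_seamless(segments, ts_of):
    """Fallback: seamless concatenation in timestamp order, adding no
    extra playback time."""
    return sorted(segments, key=ts_of)
```

Variant 3 (overlaying prompt information on the opening frames) modifies the segments themselves rather than the splice list, so it is not shown here.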
Second, marking the featured video segment on the playing progress bar. That is, after the featured video segment is determined, its play time information within the target video is obtained, and, according to that play time information, the featured video segment is marked at the target position on the playing progress bar of the target video corresponding to the play time information.
Specifically, a highlighting manner can be used to mark the featured video segment at the target position on the playing progress bar corresponding to the play time information: the target position can be marked by a color change, by a change in the width of the progress bar, or by adding marking lines.
When an operation acting on the target position is received, a picture or video characterizing the featured video segment can be displayed. Specifically, this can be displaying the image of the target frame, displaying another picture from the featured video segment, triggering playback of the featured video segment, or displaying a preset introduction picture describing the segment.
In a specific implementation, the picture or video characterizing the featured video segment can be displayed in a separate window, directly in the playback window of the target video, or superimposed on that playback window; this is not limited here. Superimposed display can use picture-in-picture or a translucent overlay, which is likewise not limited here.
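Placing the mark means converting the segment's play time span into a position along the rendered progress bar. A minimal sketch, under the assumption that the bar is drawn at a known pixel width; the function name is illustrative.

```python
def marker_px(seg_start, seg_end, video_duration, bar_width_px):
    """Map a featured segment's play-time span (seconds) to the
    (left, right) pixel extent of its highlight on the progress bar."""
    left = round(seg_start / video_duration * bar_width_px)
    right = round(seg_end / video_duration * bar_width_px)
    return left, right
```

A segment from 30 s to 60 s of a 600 s video on a 1000 px bar is highlighted from pixel 50 to pixel 100; a click handler over that extent would then trigger the display or playback described above.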
In the embodiment of the present application, after the featured video segment is extracted, an opening link for the segment can also be placed on the live-streaming website's room page of the host corresponding to the target video, so that viewers can directly trigger the opening link to select and play the featured video segment.
Further, in the embodiment of the present application, considering that characteristic-information matching and featured-video-segment extraction both occupy resources, and to avoid the tasks interfering with each other and contending for resources, the work can be partitioned as follows: determining the featured video segment from the target video by characteristic matching is implemented at the GCR-Worker layer; extracting the N featured video segments from the target video is implemented at the Media-Worker layer; and video splicing, together with marking the featured video segment at the target position on the playing progress bar corresponding to the play time information, is implemented at the Media-Worker layer.
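The task partition above is effectively a static task-to-layer mapping. A trivial sketch; the layer names (GCR-Worker, Media-Worker) follow the description, while the task keys and the dict representation are assumptions.

```python
# Hypothetical mapping of pipeline tasks to worker layers, so that
# matching and media processing do not contend for the same resources.
TASK_LAYER = {
    "feature_match_and_segment": "GCR-Worker",   # determine featured segments
    "extract_segments": "Media-Worker",          # cut segments from the stream
    "splice_and_mark_progress_bar": "Media-Worker",
}

def layer_for(task: str) -> str:
    """Return the worker layer responsible for a given task."""
    return TASK_LAYER[task]
```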
In summary, with the method, apparatus, equipment, and medium provided by the embodiments of the present application, characteristic information is set according to the video category of the target video; characteristic matching determines the target frame in the target video that matches the characteristic information; and, from the target frame and the preset featured video interception rule, the featured video segment that viewers care about most is determined in and extracted from the target video. Viewers therefore need not watch the entire target video: by directly watching the featured video segment they see the most exciting, least missable video clips, which effectively saves viewing time and lets viewers obtain the featured clips within a short period.
Based on the same inventive concept, an embodiment of the present invention further provides an apparatus corresponding to the video processing method of Embodiment One; see Embodiment Two.
Embodiment Two

This embodiment provides a video processing apparatus. As shown in FIG. 3, the apparatus includes:

a setting unit 301, configured to set characteristic information according to the video category of a target video;

a matching unit 302, configured to perform characteristic matching on the target video to determine a target frame in the target video that matches the characteristic information;

a determination unit 303, configured to determine a featured video segment in the target video according to the target frame and a preset featured video interception rule, the featured video segment including the target frame, and the featured video interception rule corresponding to the characteristic information; and

an extraction unit 304, configured to extract the featured video segment from the target video.
In the embodiment of the present application, the apparatus can be an electronic device such as a smartphone, desktop computer, notebook, or tablet computer, which is not limited here. The apparatus can run an Android, iOS, or Windows system, which is likewise not limited here.
Since the apparatus introduced in Embodiment Two is the apparatus used to implement the method of Embodiment One of the present invention, those skilled in the art can, based on the method introduced in Embodiment One, understand the specific structure and variations of the apparatus, so details are not described here. Any apparatus used for the method of Embodiment One of the present invention falls within the scope the present invention intends to protect.
Based on the same inventive concept, the present application provides an electronic device corresponding to Embodiment One; see Embodiment Three for details.
Embodiment Three

This embodiment provides an electronic device. As shown in FIG. 4, it includes a memory 410, a processor 420, and a computer program 411 stored on the memory 410 and runnable on the processor 420; when the processor 420 executes the computer program 411, any implementation of Embodiment One can be realized.
Since the electronic device introduced in this embodiment is the device used to implement the method of Embodiment One of the present application, those skilled in the art can, based on the method described in Embodiment One, understand the specific implementation and various variations of this electronic device, so how the device realizes the method of the embodiment of the present application is not discussed in detail here. Any device used to implement the method of the embodiments of the present application falls within the scope the present application intends to protect.
Based on the same inventive concept, the present application provides a storage medium corresponding to Embodiment One; see Embodiment Four for details.

Embodiment Four

This embodiment provides a computer-readable storage medium 500. As shown in FIG. 5, a computer program 511 is stored thereon; when the computer program 511 is executed by a processor, any implementation of Embodiment One can be realized.
The technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:

With the method, apparatus, equipment, and medium provided by the embodiments of the present application, characteristic information is set according to the video category of the target video; characteristic matching determines the target frame in the target video that matches the characteristic information; and, from the target frame and the preset featured video interception rule, the featured video segment that viewers care about most is determined in the target video and extracted. Viewers therefore need not watch the entire target video: by directly watching the featured video segment they see the most exciting, least missable video clips, which effectively saves viewing time and lets viewers obtain the featured clips within a short period.
Further, by judging from the attribute information of the target video whether the timestamp-precise extraction mode is required, decoding the target video and extracting the featured video segment by per-frame timestamp information when it is, and directly searching the undecoded target video for the video units whose timestamps are closest to that of the target frame when it is not, the extraction time for videos that do not need precise extraction is effectively reduced, while the extraction accuracy of the videos that do need precise extraction is preserved.
Further, after extraction, the multiple featured video segments are spliced into a splicing video for playback, and/or the play time information of the featured video segments is marked on the playing progress bar of the target video, so that users can obtain the featured videos more easily, further improving viewing convenience and saving operating time.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing device to work in a specific manner, such that the instructions stored in the computer-readable memory produce a manufactured article including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps is executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, once a person skilled in the art knows the basic inventive concept, additional changes and modifications may be made to these embodiments. The appended claims are therefore intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the present invention. Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention; if these modifications and variations fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.
Claims (10)

1. A video processing method, characterized by comprising:

setting characteristic information according to the video category of a target video;

performing characteristic matching on the target video to determine a target frame in the target video that matches the characteristic information;

determining a featured video segment in the target video according to the target frame and a preset featured video interception rule, the featured video segment including the target frame, and the featured video interception rule corresponding to the characteristic information; and

extracting the featured video segment from the target video.
2. The method of claim 1, characterized in that setting characteristic information according to the video category of the target video comprises: determining, from a preset characteristic information library according to the video category of the target video, the characteristic information corresponding to the video category, the characteristic information being information extracted from a featured image, and the featured image being an image in a video corresponding to the video category.
3. The method of claim 2, characterized in that: when the target video is a game video including a target-reaching plot, the characteristic information is set to information extracted from a target-reached success picture; when the target video is a game video including an acquisition plot, the characteristic information is set to information extracted from a successful-acquisition picture.
4. The method of claim 1, characterized in that determining a featured video segment in the target video according to the target frame and the preset featured video interception rule comprises: determining, according to the preset featured video interception rule, the playing duration from the start frame of the featured video segment to the target frame and the playing duration from the target frame to the end frame of the featured video segment, wherein, in the target video, the play position of the start frame is at or before the target frame and the play position of the end frame is at or after the target frame.
5. The method of claim 1, characterized in that extracting the featured video segment from the target video comprises:

obtaining attribute information of the target video;

judging, according to the attribute information, whether the target video requires a timestamp-precise extraction mode;

if required, decoding the target video, and extracting the featured video segment from the decoded target video according to the featured video interception rule and the timestamp information of the decoded target video; and

if not required, searching the undecoded target video for the video units whose timestamp information is closest to that of the target frame, wherein the target video comprises N video units and N is a positive integer greater than 1, and determining and extracting the featured video segment from the closest video units.
6. The method of claim 1, characterized in that, if multiple featured video segments are extracted from the target video, the method further comprises, after extracting the featured video segments from the target video:

splicing the multiple featured video segments to form a splicing video; and

when a request for the splicing video sent by a client is received, sending the splicing video to the client for playback.
7. The method of claim 1, characterized in that, after determining the featured video segment in the target video, the method further comprises:

obtaining play time information of the featured video segment in the target video; and

marking, according to the play time information, the featured video segment at a target position on the playing progress bar of the target video corresponding to the play time information.
8. A video processing apparatus, characterized by comprising:

a setting unit, configured to set characteristic information according to the video category of a target video;

a matching unit, configured to perform characteristic matching on the target video to determine a target frame in the target video that matches the characteristic information;

a determination unit, configured to determine a featured video segment in the target video according to the target frame and a preset featured video interception rule, the featured video segment including the target frame, and the featured video interception rule corresponding to the characteristic information; and

an extraction unit, configured to extract the featured video segment from the target video.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the program, realizes the method of any one of claims 1 to 7.

10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, realizes the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810752194.9A CN109089128A (en) | 2018-07-10 | 2018-07-10 | A kind of method for processing video frequency, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109089128A true CN109089128A (en) | 2018-12-25 |
Family
ID=64837473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810752194.9A Pending CN109089128A (en) | 2018-07-10 | 2018-07-10 | A kind of method for processing video frequency, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109089128A (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109714644A (en) * | 2019-01-22 | 2019-05-03 | 广州虎牙信息科技有限公司 | A kind of processing method of video data, device, computer equipment and storage medium |
CN110855904A (en) * | 2019-11-26 | 2020-02-28 | Oppo广东移动通信有限公司 | Video processing method, electronic device and storage medium |
WO2020199303A1 (en) * | 2019-04-02 | 2020-10-08 | 网宿科技股份有限公司 | Live stream video highlight generation method and apparatus, server, and storage medium |
CN112580613A (en) * | 2021-02-24 | 2021-03-30 | 深圳华声医疗技术股份有限公司 | Ultrasonic video image processing method, system, equipment and storage medium |
CN112632329A (en) * | 2020-12-18 | 2021-04-09 | 咪咕互动娱乐有限公司 | Video extraction method and device, electronic equipment and storage medium |
US11025964B2 (en) | 2019-04-02 | 2021-06-01 | Wangsu Science & Technology Co., Ltd. | Method, apparatus, server, and storage medium for generating live broadcast video of highlight collection |
CN113301385A (en) * | 2021-05-21 | 2021-08-24 | 北京大米科技有限公司 | Video data processing method and device, electronic equipment and readable storage medium |
CN113691864A (en) * | 2021-07-13 | 2021-11-23 | 北京百度网讯科技有限公司 | Video clipping method, video clipping device, electronic equipment and readable storage medium |
CN114339075A (en) * | 2021-12-20 | 2022-04-12 | 北京达佳互联信息技术有限公司 | Video editing method and device, electronic equipment and storage medium |
CN115134631A (en) * | 2022-07-25 | 2022-09-30 | 北京达佳互联信息技术有限公司 | Video processing method and video processing device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1377047A2 (en) * | 2002-06-19 | 2004-01-02 | Microsoft Corporation | Computer user interface for interacting with short segments of video (cliplets) generated from digital video |
CN102290082A (en) * | 2011-07-05 | 2011-12-21 | 央视国际网络有限公司 | Method and device for processing brilliant video replay clip |
CN107147959A (en) * | 2017-05-05 | 2017-09-08 | 中广热点云科技有限公司 | A kind of INVENTIONBroadcast video editing acquisition methods and system |
CN107154264A (en) * | 2017-05-18 | 2017-09-12 | 北京大生在线科技有限公司 | The method that online teaching wonderful is extracted |
Legal events: 2018-07-10, CN application CN201810752194.9A filed (publication CN109089128A); legal status: active, Pending.
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109714644B (en) * | 2019-01-22 | 2022-02-25 | 广州虎牙信息科技有限公司 | Video data processing method and device, computer equipment and storage medium |
CN109714644A (en) * | 2019-01-22 | 2019-05-03 | 广州虎牙信息科技有限公司 | A kind of processing method of video data, device, computer equipment and storage medium |
WO2020199303A1 (en) * | 2019-04-02 | 2020-10-08 | 网宿科技股份有限公司 | Live stream video highlight generation method and apparatus, server, and storage medium |
US11025964B2 (en) | 2019-04-02 | 2021-06-01 | Wangsu Science & Technology Co., Ltd. | Method, apparatus, server, and storage medium for generating live broadcast video of highlight collection |
CN110855904B (en) * | 2019-11-26 | 2021-10-01 | Oppo广东移动通信有限公司 | Video processing method, electronic device and storage medium |
CN110855904A (en) * | 2019-11-26 | 2020-02-28 | Oppo广东移动通信有限公司 | Video processing method, electronic device and storage medium |
CN112632329A (en) * | 2020-12-18 | 2021-04-09 | 咪咕互动娱乐有限公司 | Video extraction method and device, electronic equipment and storage medium |
CN112580613B (en) * | 2021-02-24 | 2021-06-04 | 深圳华声医疗技术股份有限公司 | Ultrasonic video image processing method, system, equipment and storage medium |
CN112580613A (en) * | 2021-02-24 | 2021-03-30 | 深圳华声医疗技术股份有限公司 | Ultrasonic video image processing method, system, equipment and storage medium |
CN113301385A (en) * | 2021-05-21 | 2021-08-24 | 北京大米科技有限公司 | Video data processing method and device, electronic equipment and readable storage medium |
CN113301385B (en) * | 2021-05-21 | 2023-02-28 | 北京大米科技有限公司 | Video data processing method and device, electronic equipment and readable storage medium |
CN113691864A (en) * | 2021-07-13 | 2021-11-23 | 北京百度网讯科技有限公司 | Video clipping method, video clipping device, electronic equipment and readable storage medium |
CN114339075A (en) * | 2021-12-20 | 2022-04-12 | 北京达佳互联信息技术有限公司 | Video editing method and device, electronic equipment and storage medium |
CN115134631A (en) * | 2022-07-25 | 2022-09-30 | 北京达佳互联信息技术有限公司 | Video processing method and video processing device |
CN115134631B (en) * | 2022-07-25 | 2024-01-30 | 北京达佳互联信息技术有限公司 | Video processing method and video processing device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108924576A (en) | A kind of video labeling method, device, equipment and medium | |
CN109089154A (en) | A kind of video extraction method, apparatus, equipment and medium | |
CN109089128A (en) | A kind of method for processing video frequency, device, equipment and medium | |
CN109089127A (en) | A kind of video-splicing method, apparatus, equipment and medium | |
US10182095B2 (en) | Method and system for video call using two-way communication of visual or auditory effect | |
US9786326B2 (en) | Method and device of playing multimedia and medium | |
US20160316233A1 (en) | System and method for inserting, delivering and tracking advertisements in a media program | |
CN104811814B (en) | Information processing method and system, client and server based on video playing | |
US10469902B2 (en) | Apparatus and method for confirming content viewing | |
KR102246305B1 (en) | Augmented media service providing method, apparatus thereof, and system thereof | |
CN107147939A (en) | Method and apparatus for adjusting net cast front cover | |
CN109640129B (en) | Video recommendation method and device, client device, server and storage medium | |
CN109040773A (en) | A kind of video improvement method, apparatus, equipment and medium | |
US10981056B2 (en) | Methods and systems for determining a reaction time for a response and synchronizing user interface(s) with content being rendered | |
CN108989883B (en) | Live broadcast advertisement method, device, equipment and medium | |
CN105872786B (en) | A kind of method and device for launching advertisement by barrage in a program | |
GB2503878A (en) | Generating interstitial scripts for video content, based on metadata related to the video content | |
CN112753227A (en) | Audio processing for detecting the occurrence of crowd noise in a sporting event television program | |
CN106851326B (en) | Playing method and device | |
CN113490004B (en) | Live broadcast interaction method and related device | |
CN107635153B (en) | Interaction method and system based on image data | |
WO2019114330A1 (en) | Video playback method and apparatus, and terminal device | |
CN112637670B (en) | Video generation method and device | |
CN110958470A (en) | Multimedia content processing method, device, medium and electronic equipment | |
US20170311009A1 (en) | Promotion information processing method, device and apparatus, and non-volatile computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181225 |
|
RJ01 | Rejection of invention patent application after publication |