CN109756751A - Multimedia data processing method and apparatus, electronic device, and storage medium - Google Patents

Multimedia data processing method and apparatus, electronic device, and storage medium

Info

Publication number
CN109756751A
CN109756751A (application CN201711084918.9A)
Authority
CN
China
Prior art keywords: text, segment, multimedia data, data, data segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711084918.9A
Other languages
Chinese (zh)
Other versions
CN109756751B (en)
Inventor
熊章俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201711084918.9A priority Critical patent/CN109756751B/en
Publication of CN109756751A publication Critical patent/CN109756751A/en
Application granted granted Critical
Publication of CN109756751B publication Critical patent/CN109756751B/en
Current legal status: Active (granted)


Abstract

The present disclosure provides a multimedia data processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: processing text material to obtain text segments and text metadata corresponding to each text segment; identifying, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata, the matched multimedia data segment serving as a target multimedia segment converted from the text segment; and generating, from the target multimedia segments, a multimedia file converted from the text material. The technical solution provided by the present disclosure realizes conversion of text into audio and video; because audio/video segments no longer need to be selected manually, considerable human and material resources are saved.

Description

Multimedia data processing method and apparatus, electronic device, and storage medium
Technical field
The present disclosure relates to the technical field of multimedia applications, and in particular to a multimedia data processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Text material and video material are related to each other, yet also differ. On the association side, directors frequently adapt literary works into films and television dramas; conversely, some film and television works in turn give rise to derivative literary works. On the difference side, the two are never perfectly matched: converting between text material and video material requires a great deal of human, material, and other resources for re-creation.
In the direction from text to video, conversion is realized by the professional interpretation of directors and actors. On the Internet, some people also read the textual content of video material, cut video clips from massive amounts of video material, and combine them to express their own understanding of that textual content.
Clearly, manually analyzing the textual content of massive video material involves a heavy workload, and combining the clips cut according to one's own understanding of the text in the video material takes a long time.
Summary of the invention
In order to solve the problem in the related art that the textual content of massive video material must be analyzed manually and different video clips then combined according to one's own understanding of the text in the video content, which is time-consuming, the present disclosure provides a multimedia data processing method.
In one aspect, the present disclosure provides a multimedia data processing method, the method comprising:
processing text material to obtain text segments and text metadata corresponding to the text segments;
identifying, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata, the multimedia data segment serving as a target multimedia segment converted from the text segment; and
generating, from the target multimedia segments, a multimedia file converted from the text material.
In another aspect, the present disclosure further provides a multimedia data processing apparatus, the apparatus comprising:
a text processing module configured to process text material to obtain text segments and text metadata corresponding to the text segments;
a data matching module configured to identify, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata, the multimedia data segment serving as a target multimedia segment converted from the text segment; and
a file generation module configured to generate, from the target multimedia segments, a multimedia file converted from the text material.
In addition, the present disclosure further provides an electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the multimedia data processing method described above.
Further, the present disclosure provides a computer-readable storage medium storing a computer program, the computer program being executable by a processor to perform the multimedia data processing method described above.
The technical solutions provided by embodiments of the present disclosure may have the following beneficial effects:
By identifying, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata of a text segment, that multimedia data segment can serve as the target multimedia segment converted from the text segment, and a multimedia file converted from the text material can then be generated from the target multimedia segments. In this way, conversion of text into audio and video is realized; since there is no need to manually select audio/video segments according to the textual content of massive video material, considerable human and material resources are saved.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the present invention.
Fig. 1 is a schematic diagram of an implementation environment involved in the present disclosure;
Fig. 2 is a block diagram of a server according to an exemplary embodiment;
Fig. 3 is a flowchart of a multimedia data processing method according to an exemplary embodiment;
Fig. 4 is a flowchart detailing step 310 of the embodiment corresponding to Fig. 3;
Fig. 5 is a schematic diagram detailing step 330 of the embodiment corresponding to Fig. 3;
Fig. 6 is a flowchart of a multimedia data processing method based on the embodiment corresponding to Fig. 5;
Fig. 7 is a schematic diagram of the process by which the text information processing module processes history text segments;
Fig. 8 is a schematic diagram of the principle by which the text matching module performs model training and matching, according to an exemplary embodiment;
Fig. 9 is a flowchart of a multimedia data processing method based on the embodiment corresponding to Fig. 3;
Fig. 10 is a schematic diagram of the process by which the multimedia information processing module processes multimedia data segments to obtain their label content;
Fig. 11 is a functional block diagram of the server according to an exemplary embodiment of the present disclosure;
Fig. 12 is a functional schematic diagram of the editing and authoring module according to an exemplary embodiment;
Fig. 13 is a block diagram of a multimedia data processing apparatus according to an exemplary embodiment;
Fig. 14 is a detailed block diagram of the text processing module in the embodiment corresponding to Fig. 13;
Fig. 15 is a detailed block diagram of the data matching module in the embodiment corresponding to Fig. 13.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of devices and methods consistent with some aspects of the invention as recited in the appended claims.
Fig. 1 is a schematic diagram of an implementation environment involved in the present disclosure. The implementation environment includes a plurality of mobile terminals 110 and at least one server 120.
The association between the mobile terminals 110 and the server 120 includes the hardware network association mode and/or protocol, and the manner in which data travels between them. A mobile terminal 110 provides existing text material to the server 120 and requests the server 120 to convert that text material into a multimedia file. The server 120 processes the text material to obtain text segments and the text metadata corresponding to each text segment. Multimedia data segments may be stored in the server 120's own database, each multimedia data segment carrying a label that marks its content. Based on the label content of the multimedia data segments and the text metadata of the text segments, the server 120 searches for a matching multimedia data segment for each text segment, and a multimedia file can then be generated from the multimedia data segments that were found, thereby realizing conversion of text into multimedia audio and video.
As needed, the multimedia data processing method provided by the present disclosure may also be applied to an intelligent display device, such as a smart TV or a smart set-top box. In an offline state, the intelligent display device may process text material input by a user to obtain text segments and their corresponding text metadata, find multimedia data segments matching the text metadata from multimedia data segments stored in a local database, and generate, from the multimedia data segments found, a multimedia file converted from the text material, thereby realizing conversion of text into multimedia audio and video.
Referring to Fig. 2, Fig. 2 is a schematic diagram of a server structure provided by an embodiment of the present invention. The server 200 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 222 (for example, one or more processors), a memory 232, and one or more storage media 230 (for example, one or more mass storage devices) storing application programs 242 or data 244. The memory 232 and the storage medium 230 may provide transient or persistent storage. The programs stored in the storage medium 230 may include one or more modules (not shown), each of which may include a series of instruction operations on the server 200. Further, the central processing unit 222 may be configured to communicate with the storage medium 230 and execute, on the server 200, the series of instruction operations in the storage medium 230. The server 200 may also include one or more power supplies 226, one or more wired or wireless network interfaces 250, one or more input/output interfaces 258, and/or one or more operating systems 241 such as Windows Server™, Mac OS X™, Unix™, Linux™, or FreeBSD™. The steps performed by the server described below in the embodiments shown in Figs. 3 to 6 and Fig. 8 may be based on the server structure shown in Fig. 2.
Those of ordinary skill in the art will appreciate that all or part of the steps of the following embodiments may be completed by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
Fig. 3 is a flowchart of a multimedia data processing method according to an exemplary embodiment. The applicable scope and execution subject of the multimedia data processing method are, for example, the server 120 or the intelligent display device of the implementation environment shown in Fig. 1. As shown in Fig. 3, the multimedia data processing method may include the following steps.
In step 310, text material is processed to obtain text segments and the text metadata corresponding to the text segments.
The text material is a work containing textual content, such as a novel. Depending on its length, the text material may include one or more text segments. A text segment may be a natural paragraph or a chapter.
Accordingly, processing the text material includes extracting text segments from the text material and extracting the text metadata corresponding to each text segment. The text metadata is obtained by extracting key information from the textual content during the processing of the text material. The text metadata represents the main content of a text segment and may include, for example, time, place, characters, and actions. In one exemplary embodiment, the text metadata is obtained by constructing a contextual long short-term memory (CLSTM) model. Each text segment has uniquely corresponding text metadata. For a given text material, the text metadata corresponding to all of its text segments constitutes the key information of the text material and describes its main content.
It should be noted that documents exhibit a sequential structure at multiple levels of abstraction (for example, sentences, paragraphs, and chapters). These levels of abstraction constitute a natural hierarchical structure for characterizing content and can be used to reason about the meaning of words or larger segments in the text. The CLSTM is precisely a model for reasoning about the meaning of natural-language documents: feeding a text segment into the CLSTM model yields structured text metadata as output.
Optionally, as shown in Fig. 4, step 310 specifically includes the following.
In step 311, the text material is divided into paragraphs to obtain several text segments.
When the text material is long, the text information processing module configured on the server 120 may divide the text material into paragraphs by natural language processing, obtaining several text segments. Each text segment may include one or more natural paragraphs.
In general, the text material can be divided by a two-character first-line indent or by full stops. Each full stop marks the end of one sentence and the start of the next, yielding several sentences, each of which may serve as a text segment. Alternatively, using the two-character first-line indent, each natural paragraph may serve as a text segment.
Each natural paragraph carries a specific textual meaning, and among multiple natural paragraphs the meanings of some may lean in the same direction. Whether the meanings of natural paragraphs lean the same way can be specified by preset feature keywords that characterize the meaning bias, or by other rules. Natural paragraphs whose textual meanings are close to one another may then be grouped together as one text segment, as sketched below.
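The disclosure does not prescribe a concrete implementation of this paragraph division. The following is a minimal Python sketch under the assumption that segments are cut at full stops or at two-character first-line indents, with the keyword-based grouping rule left as a simple, hypothetical stand-in for the "other rules" the text leaves open.

```python
import re

def split_into_segments(text: str, by: str = "indent") -> list[str]:
    """Cut raw text material into candidate text segments.

    by="sentence": every full stop ("。") closes one segment.
    by="indent":   every natural paragraph (two-character first-line indent
                   or blank-line separated) is a segment.
    """
    if by == "sentence":
        parts = [s.strip() for s in text.split("。")]
        return [p + "。" for p in parts if p]
    # Treat a blank line or a leading two-character indent as a paragraph start.
    paragraphs = re.split(r"\n\s*\n|\n(?=　　|  )", text)
    return [p.strip() for p in paragraphs if p.strip()]

def group_by_bias(segments: list[str], bias_keywords: dict[str, set[str]]) -> list[str]:
    """Merge adjacent paragraphs whose meaning leans the same way.

    bias_keywords maps a bias name (e.g. "battle", "romance") to the preset
    feature keywords said to characterize that bias; the mapping itself is an
    assumption for illustration only.
    """
    def bias_of(seg: str):
        for name, words in bias_keywords.items():
            if any(w in seg for w in words):
                return name
        return None

    merged: list[str] = []
    last_bias = None
    for seg in segments:
        b = bias_of(seg)
        if merged and b is not None and b == last_bias:
            merged[-1] += "\n" + seg   # same bias: fold into the previous segment
        else:
            merged.append(seg)
        last_bias = b
    return merged
```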
In step 312, key information of the text segments is extracted by the contextual long short-term memory model, and the text metadata corresponding to each text segment is obtained from the extracted key information.
Specifically, the main content (i.e., the key information) of each text segment, such as its theme, character relationships, and character actions, can be extracted by the CLSTM model; the text metadata corresponding to each text segment may be the key information so extracted. By feeding a text segment into the CLSTM model, the main content of the text segment is output, and that output is the text metadata corresponding to the text segment.
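The CLSTM model itself is not detailed further in the disclosure. As a rough, hedged stand-in for the key-information extraction it performs, the sketch below uses spaCy named-entity recognition to populate the time/place/person fields that the text metadata is said to contain; the pipeline name `zh_core_web_sm`, the verb-based action heuristic, and the example sentence are assumptions, not part of the patent.

```python
import spacy

# Assumption: a pretrained Chinese pipeline stands in for the CLSTM extractor.
nlp = spacy.load("zh_core_web_sm")

def extract_text_metadata(segment: str) -> dict:
    """Return a rough text-metadata record (time, place, person, action)."""
    doc = nlp(segment)
    return {
        "time":   [e.text for e in doc.ents if e.label_ in ("DATE", "TIME")],
        "place":  [e.text for e in doc.ents if e.label_ in ("GPE", "LOC", "FAC")],
        "person": [e.text for e in doc.ents if e.label_ == "PERSON"],
        # Crude action heuristic: collect the verbs in the segment.
        "action": [t.text for t in doc if t.pos_ == "VERB"],
    }

# One metadata record per text segment.
segments = ["1937年春，上海。李明在码头与旧友重逢，两人彻夜长谈。"]
text_metadata = [extract_text_metadata(s) for s in segments]
```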
In step 330, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata is identified, the multimedia data segment serving as the target multimedia segment converted from the text segment.
A multimedia data segment may be an audio clip, a video clip, or an audio-video clip with a certain playback duration. A multimedia data segment with labeled content is one that has been labeled according to its main content, so that it carries label content; the label content is the main content of the multimedia data segment. The following description takes a video clip as an example; the cases of audio clips and audio-video clips can be implemented with reference to the video clip case.
It should be noted that several video clips may be stored in the storage medium of the server 120, the main content of each video clip having been marked in the form of a label. For any text segment, based on the text metadata of that text segment and the label content of each video clip, the server 120 computes the matching degree between the text metadata of the text segment and the label content of each video clip, selects the label content with the highest matching degree to the text metadata, and takes the video clip corresponding to that label content as the video clip matched to the text segment. The video clip matched to the text segment serves as the target video clip converted from the text segment.
When there are multiple text segments, the above process can be repeated: according to the text metadata of each text segment, a matching video clip is identified one by one for each text segment from the video clips with labeled content, yielding the target video clip converted from each text segment.
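A compact sketch of this per-segment selection, assuming a `match_score` function that plays the role of the matching-degree computation (for example, the text matching model of step 331 described later):

```python
def select_target_clips(text_metadata_list, labeled_clips, match_score):
    """For each text segment's metadata, pick the clip whose label content
    scores highest; the chosen clips are the target video clips.

    labeled_clips: list of (clip_id, label_content) pairs.
    match_score:   callable(text_metadata, label_content) -> float.
    """
    targets = []
    for metadata in text_metadata_list:
        best_clip_id, _ = max(
            labeled_clips,
            key=lambda clip: match_score(metadata, clip[1]),
        )
        targets.append(best_clip_id)
    return targets
```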
In step 350, a multimedia file converted from the text material is generated from the target multimedia segments.
The multimedia file may be a video file, an audio file, or an audio-video file. A video file may be generated from the video clips matched to the text segments as found above. Likewise, an audio file may be generated from audio clips matched to the text segments, and an audio-video file may be generated from audio-video clips matched to the text segments.
Text material is text, and in the direction from text to video, conversion is currently realized mainly by the professional interpretation of directors and actors. In an exemplary embodiment of the present disclosure, video clips matching the text segments can be identified according to the textual content of the text material; the identified video clips can be regarded as having been performed according to the text segments, thereby realizing conversion of text into video.
In general, text material may include multiple text segments. For text material containing only one text segment, the video clip matched to that text segment serves as the target video clip and, as needed, the obtained target video clip may directly serve as the video file converted from the text material. The target video clip may also be edited and modified, and the modified target video clip saved as the video file. Further, for multiple text segments, the corresponding multiple video clips may be spliced and edited in chronological order to obtain the video file.
For text material containing multiple text segments, step 350 specifically includes:
splicing the target multimedia segments corresponding to the text segments according to the order in which the text segments appear in the text material, to obtain the multimedia file converted from the text material.
When there is more than one text segment, the target multimedia segments corresponding to the text segments may be spliced in the order in which the text segments appear in the text material. Specifically, when the text material is cut into multiple text segments, each text segment may be numbered in sequence, and the order in which the text segments appear in the text material is obtained from these numbers.
For example, suppose the order of the text segments in the text material is: text segment 1, text segment 2, text segment 3. The video clip matched to text segment 1 is video clip X, the video clip matched to text segment 2 is video clip Y, and the video clip matched to text segment 3 is video clip Z. Accordingly, video clips X, Y, and Z can be spliced in the order of text segments 1, 2, and 3 to obtain the video file converted from the text material.
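The disclosure does not name a splicing tool; as one possible realization, the moviepy library (1.x import style) can concatenate the matched clips in the order of the text segments. The file names here are illustrative only.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

# Target clips already ordered by the numbering of text segments 1, 2, 3.
ordered_paths = ["clip_X.mp4", "clip_Y.mp4", "clip_Z.mp4"]

clips = [VideoFileClip(p) for p in ordered_paths]
video_file = concatenate_videoclips(clips)           # splice in order
video_file.write_videofile("converted_from_text.mp4")
```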
With the technical solution provided by the above exemplary embodiments of the present disclosure, a multimedia data segment whose label content matches the text metadata of a text segment is identified from the multimedia data segments with labeled content, so that the multimedia data segment can serve as the target multimedia segment converted from the text segment, and a multimedia video file converted from the text material can then be obtained from the target multimedia segments. In this way, conversion of text into audio and video is realized; since there is no need to manually splice and edit according to the textual content of massive video material, considerable human and material resources are saved.
Further, as shown in Fig. 5, step 330 specifically includes the following.
In step 331, according to the multimedia data segments with labeled content, a text matching model obtains, for the text metadata, the matching degree between the text metadata and the label content of each multimedia data segment.
Assume the multimedia data segments are video clips. For any text segment, according to the label content of each video clip, the text metadata of the text segment is combined with the label content of each video clip in turn; each combination is fed into the text matching model, which outputs the matching degree between the text metadata of the text segment and the label content of that video clip. For example, suppose the text metadata of a text segment is Ax; it is combined one by one with b1, b2, ..., bn (where b denotes the label content of a video clip and there are n video clips), each combination is fed into the text matching model, and the model outputs a probability value, i.e., the matching degree, for each combination.
In step 332, according to the matching degrees between the text metadata and the label content of each multimedia data segment, the multimedia data segment whose label content matches the text metadata is obtained.
Specifically, according to the matching degrees between the text metadata and the label content of each video clip, the combination (Ax, bx) of text metadata and video clip label content with the highest matching degree is selected, yielding the video clip whose label content matches the text metadata of the text segment.
Before step 331, as shown in Fig. 6, the multimedia data processing method provided by the present disclosure may further include the following steps.
In step 601, multimedia material is processed to obtain the multimedia data segments output from the multimedia material and the label content corresponding to each multimedia data segment.
The multimedia material may be video material, audio material, or audio-video material. Taking video material as an example, video material spanning a certain period of time may be cut, at a preset time interval, into video clips of that preset time interval. For example, five hours of video material is cut into five video clips, each with a playback duration of one hour. The label content of a video clip is its main content; based on the subtitle file corresponding to the video clip, the CLSTM model may be used to extract the main content of the subtitle file for each video clip, yielding the label content of the video clip. The multimedia information processing module configured on the server 120 may be used to process the video material and obtain the video clips output from the video material and the label content corresponding to each video clip.
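A minimal sketch of the preset-interval cutting described here, again using moviepy as an assumed tool; the one-hour interval mirrors the example above.

```python
from moviepy.editor import VideoFileClip

def cut_by_interval(path: str, interval_s: int = 3600):
    """Cut video material into clips of a preset time interval (in seconds)."""
    source = VideoFileClip(path)
    clips = []
    start = 0
    while start < source.duration:
        end = min(start + interval_s, source.duration)
        clips.append(source.subclip(start, end))
        start = end
    return clips   # e.g. 5 hours of material -> 5 one-hour clips
```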
In step 602, known mutually matched pairs of history text metadata and multimedia data segment label content are obtained.
History text metadata is a relative concept: it is text metadata that already existed before the text metadata of the text segments is obtained in step 310. History text metadata refers to the main content of history text segments; likewise, history text segments are text segments that already existed before the text segments are obtained in step 310.
Optionally, the text information processing module configured on the server 120 may process history text material in advance to obtain the history text metadata of the history text segments. Mutually matched pairs of history text metadata and multimedia data segment label content may then be selected by manual matching.
Further, before step 602, the method may include the following steps:
extracting key information from the obtained history text segments by the contextual long short-term memory model, to obtain the history text key information corresponding to the history text segments; and
revising the history text key information corresponding to the history text segments, to obtain the history text metadata corresponding to the history text segments.
Fig. 7 is a schematic diagram of the process by which the text information processing module processes history text segments to obtain history text metadata. As shown in Fig. 7, history text segments may be recorded in an original library configured on the server 120. Key information is extracted from the history text segments by natural language processing techniques such as the CLSTM model, yielding the history text key information corresponding to each history text segment. The history text segments and the corresponding history text key information are stored in an intelligent processing library configured on the server 120. In addition, the history text key information in the intelligent processing library may be revised by manual editing, and the revision records stored in an editing intervention library configured on the server 120; the editing intervention library stores the before-and-after mappings between the history text segments and their history text key information.
The history text segments and corresponding history text key information stored in the intelligent processing library are then pooled and fitted with the history text segments and corresponding revised history text key information stored in the editing intervention library, yielding the history text segments and their corresponding history text metadata, which are stored in a finished-product library configured on the server 120. The history text metadata in the finished-product library may be revised again, the revision records stored in the editing intervention library, and the data in the intelligent processing library and the editing intervention library pooled and fitted once more, so that the history text segments and their corresponding history text metadata are continuously updated and improved; the history text metadata of the finally obtained history text segments is stored in the finished-product library configured on the server 120. Like the data structure in the intelligent processing library, the finished-product library is composed of history text segments in chronological order, and the history text metadata corresponding to each history text segment mainly includes the event scenario, event type, emotional tone, and the like.
In step 603, the known mutually matched history text metadata and multimedia data segment label content are used as a sample training set; the sample training set is fed into a document topic generation model, and the optimal parameters of the document topic generation model are obtained through fitting, thereby obtaining the text matching model.
It should be noted that, before the text matching model is used to obtain the multimedia data segment matched to a text segment, a text matching modeling process may be performed. Taking video clips as the multimedia data segments, as shown in Fig. 8, the mutually matched history text metadata and video clip label content are imported, as the training sample set, into an LDA model (a document topic generation model) for parameter training. After learning on a large amount of data, the LDA model acquires matching capability, yielding the text matching model. LDA matching can then be performed: given text metadata, the video clip whose label content matches the text metadata is found from the video clips with labeled content.
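A hedged sketch of such an LDA-based text matching model using gensim: the matched history pairs fit the topic model, and at match time the topic distribution of the text metadata is compared against each label content. The tokenizer (jieba), the number of topics, and the cosine-similarity comparison are assumptions; the disclosure only states that the document topic generation model is fitted on the sample training set.

```python
from gensim import corpora, models, matutils
import jieba  # assumed Chinese tokenizer

def tokenize(text: str) -> list[str]:
    return [w for w in jieba.lcut(text) if w.strip()]

# --- training: fit the document topic generation model on matched pairs ---
# history_pairs: list of (history_text_metadata, clip_label_content) strings
def train_text_matching_model(history_pairs, num_topics: int = 50):
    docs = [tokenize(meta + " " + label) for meta, label in history_pairs]
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)
    return lda, dictionary

# --- matching: score text metadata against one clip's label content ---
def match_score(lda, dictionary, text_metadata: str, label_content: str) -> float:
    v1 = lda[dictionary.doc2bow(tokenize(text_metadata))]
    v2 = lda[dictionary.doc2bow(tokenize(label_content))]
    return matutils.cossim(v1, v2)   # cosine similarity of topic distributions
```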
As shown in Fig. 9, before step 330, the multimedia data processing method provided by the present disclosure may further include the following steps.
In step 901, the obtained multimedia material is cut into segments, yielding several multimedia data segments.
Taking video material as an example, video material spanning a certain period of time may be cut, at a preset time interval, into video clips of that preset time interval. The segment-cutting manner for audio material or audio-video material can be implemented with reference to that for video material.
In step 902, subtitle recognition is performed on each multimedia data segment to obtain the caption data corresponding to each multimedia data segment.
Subtitle recognition refers to recognizing the subtitle content composited into the frames of a multimedia data segment, obtaining the subtitle information of the multimedia data segment by audio recognition from the audio information it contains, or obtaining the subtitle file generated to accompany the multimedia data segment. The subtitle content in the images, the subtitle information from the audio, and the subtitle file can all be regarded as the caption data corresponding to the multimedia data segment.
Specifically, step 902 may include the following processes:
extracting image subtitle information from each multimedia data segment by image character recognition, and extracting audio subtitle information from each multimedia data segment by audio recognition; and
combining, for each multimedia data segment, the corresponding image subtitle information, the audio subtitle information, and the subtitle file generated to accompany the multimedia data segment, to together constitute the caption data of the multimedia data segment.
The image character recognition technology may be OCR (optical character recognition). OCR refers to the process by which an electronic device (such as a scanner or digital camera) examines characters printed on paper, determines their shapes by detecting dark and bright patterns, and then translates the shapes into computer text by a character recognition method. Subtitle recognition by OCR addresses the case where the subtitles in the multimedia material are not an independent subtitle file but are composited into the video frames; in this case, each frame of the multimedia data segment is processed by OCR to extract the image subtitle information.
As shown in Fig. 10, the following three approaches may be used to obtain caption data from a multimedia data segment: OCR recognition, audio recognition, and the accompanying subtitle file. Audio recognition addresses multimedia data segments without subtitles: the audio information in the multimedia data segment is recognized and converted into audio subtitle information. The original subtitle refers to the subtitle file generated to accompany the multimedia data segment; such a file is usually edited and produced by professionals according to the video content of the multimedia data segment and is of high quality. The image subtitle information (e.g., the OCR subtitles), the audio subtitle information, and the subtitle file (i.e., the original subtitles) together constitute the caption data of the multimedia data segment.
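For the OCR branch, a rough sketch using OpenCV and pytesseract (both are assumed tools; the disclosure only requires that OCR be applied to the frames). A real subtitle OCR pipeline would typically tune the crop region, sampling rate, and deduplication more carefully.

```python
import cv2
import pytesseract

def ocr_subtitles(clip_path: str, every_n_frames: int = 25) -> list[str]:
    """Extract image subtitle information by running OCR on sampled frames."""
    cap = cv2.VideoCapture(clip_path)
    lines, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            # Crop the lower strip of the frame, where subtitles usually sit.
            h = frame.shape[0]
            strip = frame[int(h * 0.8):, :]
            text = pytesseract.image_to_string(strip, lang="chi_sim").strip()
            if text and (not lines or text != lines[-1]):
                lines.append(text)   # drop consecutive duplicates
        index += 1
    cap.release()
    return lines
```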
In step 903, the content information corresponding to the caption data of each multimedia data segment is extracted by the contextual long short-term memory model.
It should be noted that the processing object of the multimedia information processing module configured on the server 120 is the multimedia data segment. Because processing multimedia data segments directly is relatively complex, the subtitles in the multimedia data segments can instead be analyzed, converting the relatively complex processing of multimedia data segments into the processing of textual information. Natural language processing techniques such as the CLSTM model can thus analyze the caption data of the multimedia data segments and extract the main content of each multimedia data segment.
The content information of the caption data refers to the main content of that caption data. The caption data from the three sources obtained above, namely the OCR subtitles, audio subtitles, and original subtitles of each multimedia data segment, can be fed into the CLSTM model using natural language processing techniques to extract key information and output the content information of the multimedia data segment. The content information of a multimedia data segment serves as its intelligent label.
In step 904, the corresponding content information and the input label information are fitted for each multimedia data segment, to obtain the label content corresponding to each multimedia data segment.
As shown in Fig. 10, the label information input for each multimedia data segment may be operation labels and creation labels. Operation labels are high-quality multimedia data segment label information produced by manual operation; such labels can be opened to anyone, including professional audio-video producers, system maintainers, and amateur audio-video enthusiasts, who can summarize the themes of certain highlight segments of the multimedia material and extract the corresponding content information as operation labels. Creation labels are the revisions that creators make, during audio-video clipping and production, to label content they consider unreasonable; the quality of creation labels is also very high.
For the intelligent label, operation labels, and creation labels of each multimedia data segment, the CLSTM model may be used to pool and fit them to obtain the label content of each multimedia data segment, i.e., a comprehensive label. By changing the operation labels or creation labels, the comprehensive label can be continuously updated and improved, as sketched below.
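The disclosure performs this fitting with the CLSTM model. As a simplified, hedged stand-in, the sketch below merely merges the three label sources by weighted keyword frequency, giving the manually curated operation and creation labels more weight; the weights and the keyword-list representation are assumptions for illustration.

```python
from collections import Counter

def fuse_labels(intelligent, operation, creation, top_k: int = 10) -> list[str]:
    """Pool the intelligent, operation and creation labels of one clip
    into a single comprehensive label (a ranked keyword list)."""
    weights = {"intelligent": 1, "operation": 2, "creation": 2}
    counter: Counter[str] = Counter()
    for source, labels in (("intelligent", intelligent),
                           ("operation", operation),
                           ("creation", creation)):
        for keyword in labels:
            counter[keyword] += weights[source]
    return [kw for kw, _ in counter.most_common(top_k)]

# Re-running fuse_labels after the operation or creation labels change keeps
# the comprehensive label up to date, matching the continuous-update behaviour
# described above.
```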
The history text metadata of history text segments obtained by the text information processing module and the multimedia data segment label content obtained by the multimedia information processing module can be grouped by manual matching, yielding the known mutually matched history text metadata and multimedia data segment label content.
Fig. 11 is a functional block diagram of the server 120 according to an exemplary embodiment of the present disclosure. As shown in Fig. 11, the text information processing module processes text material to obtain the text metadata of the text segments. The multimedia information processing module processes multimedia material to obtain the label content of the multimedia data segments. The text matching module establishes the text matching model from the mutually matched history text metadata and multimedia data segment label content, and uses the text matching model to identify the multimedia data segments whose label content matches the text metadata, so that splicing the multimedia data segments matched to the text segments yields the multimedia file converted from the text material.
As shown in Fig. 11, the server 120 may also include an editing and authoring module. On the basis of the multimedia file obtained by the text matching module, the editing and authoring module can make revisions and add personalized content to obtain a finished product.
Taking video clips as the multimedia data segments, as shown in Fig. 12, the basic operation of the editing and authoring module is as follows: when the text metadata of an obtained text segment and the label content of a video clip are unsatisfactory, the text metadata of the text segment or the label content of the video clip is modified, or the association between the text segment and the video clip is modified; the content of these modifications can, as learning material, be fed back to optimize the text matching model.
Further, when the matching degree between a text segment and a video clip is low or the expressive effect is poor, the search function of the editing and authoring module can be used, by supplying some given information, to search the video clip library for a more suitable video clip.
In addition, the personalization tools of the editing and authoring module can add peripheral decorations to the video, such as personalized subtitles, headwear for characters in the video, eyebrow-like masks, and the like.
Through the processing of the editing and authoring module, a video file of higher quality and with stronger continuity between video clips is obtained; by further creative processing with the personalization tools, a more satisfactory finished clip video can be output.
The following are apparatus embodiments of the present disclosure, which can be used to perform the above multimedia data processing method embodiments executed by the server 120 of the present disclosure. For details not disclosed in the apparatus embodiments, please refer to the multimedia data processing method embodiments of the present disclosure.
Fig. 13 is a block diagram of a multimedia data processing apparatus according to an exemplary embodiment. The multimedia data processing apparatus can be used in the server 120 of the implementation environment shown in Fig. 1 to perform all or part of the steps of the multimedia data processing method shown in any of Figs. 3 to 6 and Fig. 9. As shown in Fig. 13, the apparatus includes, but is not limited to, a text processing module 1310, a data matching module 1330, and a file generation module 1350.
The text processing module 1310 is configured to process text material to obtain text segments and the text metadata corresponding to the text segments;
the data matching module 1330 is configured to identify, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata, the multimedia data segment serving as the target multimedia segment converted from the text segment; and
the file generation module 1350 is configured to generate, from the target multimedia segments, a multimedia file converted from the text material.
The details of the functions and effects of the modules of the above apparatus correspond to the steps of the above multimedia data processing method and are not repeated here.
The text processing module 1310 may be, for example, a physical-structure central processing unit 222 in Fig. 2.
The data matching module 1330 and the file generation module 1350 may also be functional modules configured to perform the corresponding steps of the above multimedia data processing method. It will be appreciated that these modules may be implemented by hardware, software, or a combination of both. When implemented in hardware, these modules may be embodied as one or more hardware modules, such as one or more application-specific integrated circuits. When implemented in software, these modules may be embodied as one or more computer programs executed on one or more processors, for example programs stored in the memory 232 and executed by the central processing unit 222 of Fig. 2.
Optionally, as shown in Fig. 14, the text processing module 1310 includes, but is not limited to:
a segment cutting unit 1311 configured to divide the text material into paragraphs to obtain several text segments; and
a data extraction unit 1312 configured to extract key information of the text segments by the contextual long short-term memory model and obtain the text metadata corresponding to the text segments from the extracted key information.
Optionally, the file generation module 1350 includes, but is not limited to:
a segment splicing unit configured to splice the target multimedia segments corresponding to the text segments according to the order in which the text segments appear in the text material, to obtain the multimedia file converted from the text material.
Optionally, as shown in Fig. 15, the data matching module 1330 includes, but is not limited to:
a data matching unit 1331 configured to obtain, for the text metadata, by the text matching model and according to the multimedia data segments with labeled content, the matching degree between the text metadata and the label content of each multimedia data segment; and
a segment obtaining unit 1332 configured to obtain, according to the matching degrees between the text metadata and the label content of each multimedia data segment, the multimedia data segment whose label content matches the text metadata.
Optionally, the data matching module 1330 may further include, but is not limited to:
a multimedia processing unit configured to process multimedia material to obtain the multimedia data segments output from the multimedia material and the label content corresponding to each multimedia data segment;
a sample obtaining unit configured to obtain known mutually matched history text metadata and multimedia data segment label content; and
a sample training unit configured to use the known mutually matched history text metadata and multimedia data segment label content as a sample training set, feed the sample training set into a document topic generation model, and obtain, through fitting, the optimal parameters of the document topic generation model, thereby obtaining the text matching model.
Optionally, the present disclosure further provides an electronic device, which can be used in the server 120 of the implementation environment shown in Fig. 1 to perform all or part of the steps of the multimedia data processing method shown in any of Figs. 3 to 6 and Fig. 9. The electronic device includes:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the multimedia data processing method described in the above exemplary embodiments.
The specific manner in which the processor of the electronic device in this embodiment performs operations has been described in detail in the embodiments of the multimedia data processing method and will not be elaborated here.
In an exemplary embodiment, a storage medium is also provided. The storage medium is a computer-readable storage medium, for example a transitory or non-transitory computer-readable storage medium containing instructions. The storage medium stores a computer program which can be executed by the central processing unit 222 of the server 200 to perform the above multimedia data processing method.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (15)

1. A multimedia data processing method, characterized by comprising:
processing text material to obtain text segments and text metadata corresponding to the text segments;
identifying, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata, the multimedia data segment serving as a target multimedia segment converted from the text segment; and
generating, from the target multimedia segments, a multimedia file converted from the text material.
2. The method according to claim 1, wherein the processing text material to obtain text segments and text metadata corresponding to the text segments comprises:
dividing the text material into paragraphs to obtain several text segments; and
extracting key information of the text segments by a contextual long short-term memory model, and obtaining the text metadata corresponding to the text segments from the extracted key information.
3. The method according to claim 1, wherein the generating, from the target multimedia segments, a multimedia file converted from the text material comprises:
splicing the target multimedia segments corresponding to the text segments according to the order in which the text segments appear in the text material, to obtain the multimedia file converted from the text material.
4. The method according to claim 1, wherein the identifying, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata, the multimedia data segment serving as the target multimedia segment converted from the text segment, comprises:
obtaining for the text metadata, by a text matching model and according to the multimedia data segments with labeled content, the matching degree between the text metadata and the label content of each multimedia data segment; and
obtaining, according to the matching degrees between the text metadata and the label content of each multimedia data segment, the multimedia data segment whose label content matches the text metadata.
5. The method according to claim 4, wherein before the obtaining for the text metadata, by the text matching model and according to the multimedia data segments with labeled content, the matching degree between the text metadata and the label content of each multimedia data segment, the method further comprises:
processing multimedia material to obtain the multimedia data segments output from the multimedia material and the label content corresponding to the multimedia data segments;
obtaining known mutually matched history text metadata and multimedia data segment label content; and
using the known mutually matched history text metadata and multimedia data segment label content as a sample training set, feeding the sample training set into a document topic generation model, and obtaining, through fitting, optimal parameters of the document topic generation model, thereby obtaining the text matching model.
6. The method according to claim 5, wherein before the obtaining known mutually matched history text metadata and multimedia data segment label content, the method further comprises:
extracting key information from obtained history text segments by the contextual long short-term memory model, to obtain history text key information corresponding to the history text segments; and
revising the history text key information corresponding to the history text segments, to obtain history text metadata corresponding to the history text segments.
7. The method according to claim 1, wherein before the identifying, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata, the method further comprises:
cutting obtained multimedia material into segments to obtain several multimedia data segments;
performing subtitle recognition on each multimedia data segment to obtain caption data corresponding to each multimedia data segment;
extracting, by the contextual long short-term memory model, content information corresponding to the caption data of each multimedia data segment; and
fitting, for each multimedia data segment, the corresponding content information and input label information, to obtain the label content corresponding to each multimedia data segment.
8. The method according to claim 7, wherein the performing subtitle recognition on each multimedia data segment to obtain caption data corresponding to each multimedia data segment comprises:
extracting image subtitle information from each multimedia data segment by image character recognition, and extracting audio subtitle information from each multimedia data segment by audio recognition; and
constituting, for each multimedia data segment, the caption data of the multimedia data segment from the corresponding image subtitle information, the audio subtitle information, and the subtitle file generated to accompany the multimedia data segment.
9. A multimedia data processing apparatus, characterized by comprising:
a text processing module configured to process text material to obtain text segments and text metadata corresponding to the text segments;
a data matching module configured to identify, from multimedia data segments with labeled content, a multimedia data segment whose label content matches the text metadata, the multimedia data segment serving as a target multimedia segment converted from the text segment; and
a file generation module configured to generate, from the target multimedia segments, a multimedia file converted from the text material.
10. The apparatus according to claim 9, wherein the text processing module comprises:
a segment cutting unit configured to divide the text material into paragraphs to obtain several text segments; and
a data extraction unit configured to extract key information of the text segments by a contextual long short-term memory model and obtain the text metadata corresponding to the text segments from the extracted key information.
11. The apparatus according to claim 9, wherein the file generation module comprises:
a segment splicing unit configured to splice the target multimedia segments corresponding to the text segments according to the order in which the text segments appear in the text material, to obtain the multimedia file converted from the text material.
12. The apparatus according to claim 9, wherein the data matching module comprises:
a data matching unit configured to obtain for the text metadata, by a text matching model and according to the multimedia data segments with labeled content, the matching degree between the text metadata and the label content of each multimedia data segment; and
a segment obtaining unit configured to obtain, according to the matching degrees between the text metadata and the label content of each multimedia data segment, the multimedia data segment whose label content matches the text metadata.
13. The apparatus according to claim 12, wherein the data matching module further comprises:
a multimedia processing unit configured to process multimedia material to obtain the multimedia data segments output from the multimedia material and the label content corresponding to the multimedia data segments;
a sample obtaining unit configured to obtain known mutually matched history text metadata and multimedia data segment label content; and
a sample training unit configured to use the known mutually matched history text metadata and multimedia data segment label content as a sample training set, feed the sample training set into a document topic generation model, and obtain, through fitting, optimal parameters of the document topic generation model, thereby obtaining the text matching model.
14. An electronic device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the multimedia data processing method according to any one of claims 1 to 8.
15. A computer-readable storage medium storing a computer program, wherein the computer program can be executed by a processor to perform the multimedia data processing method according to any one of claims 1 to 8.
CN201711084918.9A 2017-11-07 2017-11-07 Multimedia data processing method and device, electronic equipment and storage medium Active CN109756751B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711084918.9A CN109756751B (en) 2017-11-07 2017-11-07 Multimedia data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711084918.9A CN109756751B (en) 2017-11-07 2017-11-07 Multimedia data processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109756751A true CN109756751A (en) 2019-05-14
CN109756751B CN109756751B (en) 2023-02-03

Family

ID=66401039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711084918.9A Active CN109756751B (en) 2017-11-07 2017-11-07 Multimedia data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109756751B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100030744A1 (en) * 2004-02-27 2010-02-04 Deshan Jay Brent Method and system for managing digital content including streaming media
US20170185690A1 (en) * 2005-10-26 2017-06-29 Cortica, Ltd. System and method for providing content recommendations based on personalized multimedia content element clusters
US20160034754A1 (en) * 2012-01-20 2016-02-04 Elwha Llc Autogenerating video from text
US20140115622A1 (en) * 2012-10-18 2014-04-24 Chi-Hsiang Chang Interactive Video/Image-relevant Information Embedding Technology
CN103324760A (en) * 2013-07-11 2013-09-25 中国农业大学 Method and system for automatically generating nutrition health education video through commentary file
CN103559214A (en) * 2013-10-11 2014-02-05 中国农业大学 Method and device for automatically generating video
CN105183739A (en) * 2014-04-04 2015-12-23 卡姆芬德公司 Image Processing Server
CN104731959A (en) * 2015-04-03 2015-06-24 北京威扬科技有限公司 Video abstraction generating method, device and system based on text webpage content
CN105389326A (en) * 2015-09-16 2016-03-09 中国科学院计算技术研究所 Image annotation method based on weak matching probability canonical correlation model
CN106899879A (en) * 2015-12-18 2017-06-27 北京奇虎科技有限公司 A kind for the treatment of method and apparatus of multi-medium data
CN105868176A (en) * 2016-03-02 2016-08-17 北京同尘世纪科技有限公司 Text based video synthesis method and system
CN106528588A (en) * 2016-09-14 2017-03-22 厦门幻世网络科技有限公司 Method and apparatus for matching resources for text information
CN107027060A (en) * 2017-04-18 2017-08-08 腾讯科技(深圳)有限公司 The determination method and apparatus of video segment
CN107071542A (en) * 2017-04-18 2017-08-18 百度在线网络技术(北京)有限公司 Video segment player method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刁月华: "Design and Implementation of a Web Video Subtitle Extraction and Recognition System", China Master's Theses Full-text Database *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188311A (en) * 2019-07-02 2021-01-05 百度(美国)有限责任公司 Method and apparatus for determining video material of news
CN110324709A (en) * 2019-07-24 2019-10-11 新华智云科技有限公司 A kind of processing method, device, terminal device and storage medium that video generates
CN112312189A (en) * 2019-08-02 2021-02-02 百度在线网络技术(北京)有限公司 Video generation method and video generation system
CN110532426A (en) * 2019-08-27 2019-12-03 新华智云科技有限公司 It is a kind of to extract the method and system that Multi-media Material generates video based on template
CN110996017A (en) * 2019-10-08 2020-04-10 清华大学 Method and device for generating clip video
CN110996017B (en) * 2019-10-08 2020-12-15 清华大学 Method and device for generating clip video
CN111711855A (en) * 2020-05-27 2020-09-25 北京奇艺世纪科技有限公司 Video generation method and device
CN114598893A (en) * 2020-11-19 2022-06-07 京东方科技集团股份有限公司 Text video implementation method and system, electronic equipment and storage medium
WO2022121626A1 (en) * 2020-12-07 2022-06-16 北京字节跳动网络技术有限公司 Video display method and apparatus, video processing method, apparatus, and system, device, and medium
CN112423023A (en) * 2020-12-09 2021-02-26 珠海九松科技有限公司 Intelligent automatic video mixed-cutting method
WO2023040743A1 (en) * 2021-09-15 2023-03-23 北京字跳网络技术有限公司 Video processing method, apparatus, and device, and storage medium
WO2023217155A1 (en) * 2022-05-10 2023-11-16 北京字跳网络技术有限公司 Video generation method, apparatus, and device, storage medium, and program product
CN115190356A (en) * 2022-06-10 2022-10-14 北京达佳互联信息技术有限公司 Multimedia data processing method and device, electronic equipment and storage medium
CN115190356B (en) * 2022-06-10 2023-12-19 北京达佳互联信息技术有限公司 Multimedia data processing method and device, electronic equipment and storage medium
CN115457557A (en) * 2022-09-21 2022-12-09 深圳市学之友科技有限公司 Scanning type translation pen control method and device
CN115457557B (en) * 2022-09-21 2024-03-05 惠州市学之友电子有限公司 Scanning translation pen control method and device

Also Published As

Publication number Publication date
CN109756751B (en) 2023-02-03

Similar Documents

Publication Publication Date Title
CN109756751A (en) Multimedia data processing method and device, electronic equipment, storage medium
Hong et al. Dynamic captioning: video accessibility enhancement for hearing impairment
CN109803180B (en) Video preview generation method and device, computer equipment and storage medium
CN106878632B (en) Video data processing method and device
Hong et al. Video accessibility enhancement for hearing-impaired users
CN112287914B (en) PPT video segment extraction method, device, equipment and medium
CN108595477B (en) Video data processing method and device
CN113486833B (en) Multi-modal feature extraction model training method and device and electronic equipment
CN108121715B (en) Character labeling method and character labeling device
CN110309360B (en) Short video label labeling method and system
CN107515934A (en) A kind of film semanteme personalized labels optimization method based on big data
US20150213793A1 (en) Methods and systems for converting text to video
Choi et al. Effective fake news video detection using domain knowledge and multimodal data fusion on youtube
CN112289347A (en) Stylized intelligent video editing method based on machine learning
CN109800435A (en) A kind of training method and device of language model
CN113779345B (en) Teaching material generation method and device, computer equipment and storage medium
CN113259763B (en) Teaching video processing method and device and electronic equipment
CN104504104B (en) Picture material processing method, device and search engine for search engine
CN112468754B (en) Method and device for acquiring pen-recorded data based on audio and video recognition technology
CN113806574A (en) Software and hardware integrated artificial intelligent image recognition data processing method
CN112188311B (en) Method and apparatus for determining video material of news
CN110555117B (en) Data processing method and device and electronic equipment
Golshani et al. A multimedia information repository for cross cultural dance studies
US20220375223A1 (en) Information generation method and apparatus
CN116485943A (en) Image generation method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant