CN112261314B - Video description data generation system, method, storage medium and equipment


Info

Publication number: CN112261314B (application number CN202011020291.2A)
Authority: CN (China)
Prior art keywords: video, identified, video frame, task pool, identification code
Legal status: Active (granted)
Other versions: CN112261314A (Chinese)
Inventors: 刘路伟, 闫亚军, 刘东旭, 曹志超
Assignee (original and current): Beijing Meishe Network Technology Co., Ltd.
Events: application filed by Beijing Meishe Network Technology Co., Ltd.; priority to CN202011020291.2A; publication of application CN112261314A; application granted; publication of grant CN112261314B.

Classifications

    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/44204: Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched

Abstract

The application provides a video description data generation system, method, storage medium and device. The system comprises: a reading device, which reads a source video, generates an identification code, identifies the video frames of the source video with the identification code, and caches them into a task pool; a scheduling device, which searches whether an idle recognition unit exists and, when the search succeeds, retrieves video frames from the task pool and dispatches them to the execution device; an execution device, which receives the video frames, distributes them to idle recognition units for recognition, and generates description data; and an integrating device, which encapsulates the description data of all video frames identified by the same identification code into a data file according to the time sequence of those frames in the corresponding source video, and stores the encapsulated data file into a data set. The devices of the system cooperate with each other: with a video as input, the description data corresponding to the video is generated for video packaging, and recognition of the video description data is completed in an orderly manner.

Description

Video description data generation system, method, storage medium and equipment
Technical Field
The present application relates to the field of video editing, and in particular, to a system, a method, a storage medium, and an apparatus for generating video description data.
Background
In the related art, video packaging is mostly done by professional film and television producers in post-production. As short video has grown in popularity, video packaging is becoming a need of more and more users; in reality, however, many of the users who shoot short videos are ordinary users without professional production skills, who cannot complete video packaging simply and quickly.
With the development of artificial intelligence (AI, a technical science that studies and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence), schemes for video packaging using AI have been proposed to help ordinary users without professional production skills complete video packaging. Performing video packaging with AI, however, requires the description data of the video, i.e. data describing the attributes of the video frames in the video (such as time stamp, brightness, resolution, exposure) and the elements within the video frames (such as picture content, physical objects, position information).
However, the related art offers no technical solution capable of automatically generating the description data of a video.
Disclosure of Invention
The application provides a system, a method, a storage medium and a device for generating video description data, which automatically recognize the video frames in a video and automatically generate the description data of the video from the recognition results.
A first aspect of an embodiment of the present application provides a video description data generation system, comprising: a reading device, a scheduling device, an execution device and an integrating device; wherein:
the reading device is used for reading a source video, generating an identification code of the source video, identifying the video frames of the source video with the identification code, and caching the identified video frames into a task pool;
the scheduling device is used for searching whether an idle recognition unit exists in the execution device and, in response to the search result being yes, retrieving the identified video frames from the task pool and dispatching them to the execution device;
the execution device is used for receiving the identified video frames dispatched by the scheduling device, distributing them to the idle recognition unit for recognition, and generating the description data of the identified video frames;
the integrating device is used for encapsulating the description data of all video frames identified by the same identification code into a data file according to the time sequence of those video frames in the corresponding source video, identifying the encapsulated data file with the same identification code, and storing the encapsulated data file into a data set based on a preset address.
Optionally, the system further comprises: a query device and a judging device; wherein:
the query device is used for querying, before the step of the reading device caching the identified video frames into the task pool, whether a data file identified by the identification code and describing the source video exists in the data set identified by the preset address;
the judging device is used for judging, in response to the query result of the query device being yes, whether the queried data file contains an end identifier;
the reading device is further configured to perform any of the following steps:
in response to the query result of the query device being no, caching the identified video frames into the task pool starting from the first video frame of the source video;
in response to the judgment result of the judging device being no, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame in the source video whose identification code is identical to that of the queried data file, caching the identified video frames into the task pool;
in response to the judgment result of the judging device being yes, discarding the source video whose identification code is identical to that of the queried data file.
Optionally, the reading device is further configured to check whether the task pool is saturated and, in response to the check result being no, perform any of the following steps:
in response to the query result of the query device being no, caching the identified video frames into the task pool starting from the first video frame of the source video;
in response to the judgment result of the judging device being no, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame of the source video, caching the identified video frames into the task pool.
Optionally, the reading device is further configured to monitor the capacity state of the task pool in response to the check result being that the task pool is saturated and, when a margin is detected in the task pool, perform any of the following steps:
in response to the query result of the query device being no, caching the identified video frames into the task pool starting from the first video frame of the source video;
in response to the judgment result of the judging device being no, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame of the source video, caching the identified video frames into the task pool.
Optionally, the scheduling device is further configured to monitor the resource state of the execution device in response to the result of searching whether an idle recognition unit exists in the execution device being no, and, when an idle recognition unit is detected in the execution device, retrieve the identified video frames from the task pool and dispatch them to the execution device.
Optionally, the integrating device is further configured to judge whether the encapsulated data file identified by the identification code is the complete description data of the source video and, in response to the judgment result being yes, add an end identifier to the encapsulated data file identified by the identification code.
A second aspect of an embodiment of the present application provides a method for generating video description data, the method comprising:
reading a source video, generating an identification code of the source video, identifying the video frames of the source video with the identification code, and caching the identified video frames into a task pool;
searching whether an idle recognition unit exists and, in response to the search result being yes, retrieving the identified video frames from the task pool and sending them to the idle recognition unit for recognition, generating the description data of the identified video frames;
encapsulating the description data of all video frames identified by the same identification code into a data file according to the time sequence of those video frames in the corresponding source video, identifying the encapsulated data file with the same identification code, and storing the encapsulated data file into a data set based on a preset address.
Optionally, the method further comprises:
before the step of caching the identified video frames into the task pool, querying whether a data file identified by the identification code and describing the source video exists in the data set identified by the preset address;
in response to the query result being no, caching the identified video frames into the task pool starting from the first video frame of the source video; or, in response to the query result being yes, judging whether the queried data file contains an end identifier;
in response to the judgment result being no, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame in the source video whose identification code is identical to that of the queried data file, caching the identified video frames into the task pool; or, in response to the judgment result being yes, discarding the source video whose identification code is identical to that of the queried data file.
Optionally, before the step of caching the identified video frames into the task pool, the method further comprises:
checking whether the task pool is saturated and, in response to the check result being no, performing any one of the following steps:
in response to the query result being no, caching the identified video frames into the task pool starting from the first video frame of the source video;
in response to the judgment result being no, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame of the source video, caching the identified video frames into the task pool.
Optionally, the method further comprises:
in response to the task pool being saturated, monitoring the capacity state of the task pool and, when a margin is detected in the task pool, performing any one of the following steps:
in response to the query result being no, caching the identified video frames into the task pool starting from the first video frame of the source video;
in response to the judgment result being no, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame of the source video, caching the identified video frames into the task pool.
Optionally, the method further comprises:
in response to the result of searching whether an idle recognition unit exists being no, monitoring the resource state and, when an idle recognition unit is detected, retrieving the identified video frames from the task pool and sending them to the idle recognition unit.
Optionally, the method further comprises:
judging whether the encapsulated data file identified by the identification code is the complete description data of the source video and, in response to the judgment result being yes, adding an end identifier to the encapsulated data file identified by the identification code.
A third aspect of the embodiments of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs steps in a method according to the second aspect of the present application.
A fourth aspect of the embodiments of the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor; when the processor executes the program, the steps of the method according to the second aspect of the application are performed.
Compared with the prior art, the application has at least the following technical effects:
By adopting the video description data generation system provided by the application, the reading device reads the video frames of a source video into the task pool, the scheduling device dispatches the video frames in the task pool to idle recognition units in the execution device for recognition and outputs the description data of the video frames, and the integrating device collates the description data of the video frames of the same video into the description data of the corresponding video. The devices of the system cooperate with each other: with a video as input, the description data corresponding to the video is generated automatically for use in video packaging.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a video description data generating system according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating steps for generating video description data according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a video description data generating system according to another embodiment of the present application;
FIG. 4 is a flowchart illustrating steps for generating video description data according to another embodiment of the present application;
FIG. 5 is a schematic diagram illustrating the operation of a video description data generating system according to an embodiment of the present application;
FIG. 6 is a diagram illustrating a definition of a data file according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
An embodiment of the application provides a video description data generation system. Referring to fig. 1, the video description data generation system of the present application takes a video as input, recognizes the attributes of its video frames (basic attribute information of a frame itself, such as time stamp, brightness, resolution, exposure) and their elements (the various elements in the image shown by a frame, such as picture content, physical objects, position information), and outputs the description data corresponding to the video. As shown in fig. 1, the video description data generation system 100 includes: a reading device 101, a scheduling device 102, an execution device 103 and an integrating device 104.
Referring to fig. 2, the method of generating video description data with the video description data generation system 100 shown in fig. 1 is shown. As shown in fig. 2, the method comprises the following steps:
s201, reading a source video, generating an identification code of the source video, identifying video frames of the source video based on the identification code, and caching the identified video frames into a task pool.
The source video refers to a video input to the video description data generation system 100 for recognition.
When the video description data generation system 100 receives multiple source videos, that is, multiple source videos are waiting to be recognized, the received source videos are loaded into a buffer and held in a video queue, waiting for the reading device 101 to read them for recognition. When reading the source videos of the video queue into the video description data generation system 100, the reading device 101 may read them sequentially in queue order or in parallel regardless of queue order.
The identification code is used to identify a source video and the video frames in it. When the video description data generation system 100 receives multiple source videos to be recognized at the same time, the execution device 103 usually recognizes multiple video frames in parallel, and those frames usually come from different source videos; the system essentially never recognizes only the frames of a single video (see the related description below). Identifying each source video and its video frames with the identification code makes it possible to distinguish the source videos, determine the source of each video frame, and mark the description data corresponding to each video frame, so that the integrating device 104 can, according to the identification code, encapsulate the description data of the video frames from the same source video into one whole data file, i.e. the description data of that source video.
When the reading device 101 reads source videos from the video queue, it generates a unique identification code for each source video read, based on a fixed algorithm, to identify that source video. The method for generating the identification code may be any algorithm that produces a fixed, unique code from a determined input variable; in the embodiment of the present application, the unique identification code of a source video is generated mainly from the video name. Such an algorithm is, for example, the MD5 hash algorithm or the SHA-1 algorithm: suppose there is a video test.mp4 whose storage path is /storage/emulated/0/Video/test.mp4; hashing test.mp4 with MD5 yields the unique identification code 804d6ed1c00cfe713bd4bb73c6b8edbc of the video. Of course, the fixed algorithm is not limited to this example, as long as it generates a unique code for the video; for instance, the unique identification code of a source video can also be obtained by splicing a character string for each source video in the format "file name + file size + file creation time". Note, however, that whatever algorithm is chosen, the video description data generation system 100 must use one fixed algorithm throughout, embedded in the system, with no arbitrary modification allowed. The reading device 101 identifies all video frames of each source video with the identification code of that source video and then caches the identified frames into the task pool, where they wait for the scheduling device 102 to schedule them to the execution device 103 for recognition.
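For illustration only, the following minimal sketch shows such a fixed algorithm, assuming (as in the example above) that the identification code is simply the MD5 hex digest of the video file name; the function names are hypothetical and not part of the embodiment:

    import hashlib

    def make_identification_code(video_name: str) -> str:
        # Fixed algorithm: the same input name always yields the same code.
        return hashlib.md5(video_name.encode("utf-8")).hexdigest()

    def make_identification_code_concat(name: str, size: int, ctime: str) -> str:
        # Alternative scheme from the text: splice "file name + file size +
        # file creation time" into one string code.
        return f"{name}+{size}+{ctime}"

    print(make_identification_code("test.mp4"))  # a fixed 32-character hex code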
S202, searching whether an idle recognition unit exists, and in response to the searching result being yes, calling the identified video frame from the task pool and sending the identified video frame to the idle recognition unit for recognition, and generating description data of the identified video frame.
The recognition unit is a neural network model, obtained by training with artificial intelligence techniques, that performs image recognition on a video frame to acquire the frame's attribute and element information. The recognition unit may be, for example, a convolutional neural network model from the related art, or any other neural network model capable of image recognition. It can be obtained by selecting a seed model and training it, for example by supervised learning in machine learning; for details, please refer to the related art, which is not repeated here.
Each recognition unit is a thread. In this embodiment, a thread pool is created in advance and controlled by the execution device 103; multiple threads, i.e. multiple recognition units, are created in the pool in advance according to actual requirements or device performance. For example, mobile devices in everyday life generally support creating four to eight recognition units; other electronic devices are of course also possible, and the application is not specifically limited in this respect. The recognition unit supports adding (when the threads pre-created in the pool cannot meet demand, new threads continue to be created), deleting (the thread of a created recognition unit can be deleted from the pool), suspending (the thread of a recognition unit enters a waiting state and temporarily processes no tasks) and waking (the thread of a dormant recognition unit is woken up to process tasks).
An idle recognition unit is a recognition unit to which no video frame is currently allocated for recognition and which is waiting.
After the reading device 101 reads the video frames of a source video into the task pool, the scheduling device 102 searches whether an idle recognition unit exists in the execution device 103, i.e. whether an idle thread exists in the thread pool. When an idle thread exists in the thread pool, an idle recognition unit exists in the execution device 103: the scheduling device 102 retrieves video frames from the task pool and dispatches them to the execution device 103, instructing it to distribute the scheduled frames to the idle recognition unit just found. After the execution device 103 receives the video frames scheduled by the scheduling device 102, it distributes them to the idle recognition unit for recognition and, when recognition completes, obtains the description data of the video frames.
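As a non-limiting sketch of this scheduling flow, the following assumes a bounded queue standing in for the task pool and a fixed-size worker pool standing in for the recognition units (four workers, per the four-to-eight units suggested above); all names and the frame layout are hypothetical:

    import queue
    from concurrent.futures import ThreadPoolExecutor

    task_pool = queue.Queue(maxsize=256)                    # assumed capacity
    recognition_units = ThreadPoolExecutor(max_workers=4)   # the thread pool

    def recognize(code, frame):
        # Stand-in for the neural-network recognition unit; it would return
        # the frame's attributes and elements as description data.
        return {"code": code, "ts": frame["ts"], "attributes": {}, "elements": []}

    def schedule_loop(results):
        # Scheduling device: take identified frames from the task pool and
        # hand each one to a free recognition unit (a free worker thread).
        while True:
            item = task_pool.get()
            if item is None:                # sentinel: no more frames
                break
            code, frame = item
            future = recognition_units.submit(recognize, code, frame)
            future.add_done_callback(lambda f: results.append(f.result()))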
And S203, encapsulating the description data of all the video frames identified by the same identification code into a data file according to the time sequence of all the video frames identified by the same identification code in the corresponding source video, identifying the encapsulated data file by using the same identification code, and storing the encapsulated data file into the data set based on the preset address.
All video frame description data identified by the same identification code comes from the source video identified by that identification code. Thus, when all video frames identified by the same identification code have been recognized, it can be determined that all the video frames read into the video description data generation system 100 by the reading device 101 from that source video have been recognized.
In addition, when a video is played, each frame is presented at its appointed time on the video's time axis. For example, a video with a frame rate of 60 and a duration of 120 minutes contains 432,000 frames in total, and its 60th frame is necessarily presented at the boundary between the first and the second second of the video. In video packaging it must likewise be ensured that the video frames of the packaged video are ordered, so the generated description data of the source video should also be ordered, to make it easy to read during packaging. Therefore, after determining that all video frames identified by the same identification code in the video description data generation system 100 have been recognized, the integrating device 104 encapsulates their description data into a data file according to the time sequence of those frames in the corresponding source video.
For example, suppose the source video "cocoa-li" is identified by identification code A. After all the video frames of "cocoa-li" have been read into the video description data generation system 100 and recognized, the integrating device 104 encapsulates the description data of each video frame identified by code A into a data file according to the presentation order of those frames on the playback time axis of "cocoa-li", thereby obtaining the description data file of the source video.
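A minimal sketch of this integrating step, assuming each description record carries its frame's timestamp under a hypothetical "ts" key:

    def encapsulate(code, frame_descriptions):
        # Order the per-frame description data by its position on the
        # playback time axis, then wrap it as one data file identified by
        # the same identification code.
        ordered = sorted(frame_descriptions, key=lambda d: d["ts"])
        return {"identification_code": code, "frames": ordered}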
When receiving the data output by the execution device 103, the integrating device 104 likewise holds it in a data queue and gradually distributes it to the data sorting and writing unit inside the integrating device 104, where it is sorted and written into individual data texts; these texts are in turn held in a text queue and finally encapsulated into a file.
In order to reduce the overhead of storage resources, all data sets are stored as binary files, so a predefined data format is required: each data file contains two parts, a data header and a data source, and corresponds one-to-one to a video file. Fig. 6 shows the definition of a data file, which can be freely extended on this basis. In addition, to prevent a data file from being corrupted by accidents during writing, and to let the file support breakpoint continuation, a data file buffer area is used for temporary writes; after a data file has been written normally, the file is stored into the formal data set.
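By way of illustration, a sketch of such a binary layout and of breakpoint-safe writing; the header fields and sizes here are assumptions, since the actual format is the one defined in fig. 6:

    import os
    import struct

    # Assumed header: 32-byte identification code, frame count, end flag.
    HEADER_FMT = "<32sIB"

    def write_data_file(path, code, frame_blobs, complete):
        tmp = path + ".tmp"  # the "data file buffer area" for temporary writes
        with open(tmp, "wb") as f:
            f.write(struct.pack(HEADER_FMT, code.encode("ascii"),
                                len(frame_blobs), 1 if complete else 0))
            for blob in frame_blobs:             # serialized per-frame data
                f.write(struct.pack("<I", len(blob)))
                f.write(blob)
        os.replace(tmp, path)  # move into the formal data set only when intact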
The data set in which the description data files of source videos are stored can be preset according to actual requirements, and its storage address is identified by a preset address. After the description data file of a source video is obtained, it is updated into the data set through the preset address, ready for use in video packaging.
By adopting the video description data generation system provided by the application, the reading device reads the video frames of a source video into the task pool, the scheduling device dispatches the video frames in the task pool to idle recognition units in the execution device for recognition and outputs the description data of the video frames, and the integrating device collates the description data of the video frames of the same video into the description data of the corresponding video. The devices of the system cooperate with each other: with a video as input, the description data corresponding to the video is generated automatically for use in video packaging.
In an alternative embodiment, the method further includes step S204: judging whether the encapsulated data file identified by the identification code is the complete description data of the source video and, in response to the judgment result being yes, adding an end identifier to the encapsulated data file identified by the identification code.
When the description data of the video frames identified by the same identification code has been encapsulated into a data file, the integrating device 104 further judges whether the encapsulated data file is the complete description data of the source video, that is, whether the description data of all video frames of the source video exists in the data file (here "all video frames" excludes frames that were skipped as unimportant, for example the second of two consecutive identical frames).
The judgment may be made serially, checking each video frame in the order of the video's time axis, or in parallel by elimination (each judged frame is removed until all video frames of the source video have been traversed). If the judgment finds an unrecognized video frame, it is determined that the source video has not been completely recognized, and the data file is not its complete description data. If no unrecognized frame exists, the source video has been completely recognized, the data file is the complete description data of the source video, and an end identifier is added to the data file to indicate that the source video with the same identification code as the data file has been completely recognized.
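A sketch of the elimination-style completeness check and of adding the end identifier of step S204; the record layouts are hypothetical stand-ins:

    def is_completely_recognized(source_frame_ids, described_frame_ids):
        # Elimination method: strike out every described frame; any frame
        # left over was never recognized, so the file is incomplete.
        return not (set(source_frame_ids) - set(described_frame_ids))

    def add_end_identifier(data_file):
        data_file["end_identifier"] = True   # marks complete recognition
        return data_file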
Referring to fig. 3, another video description data generation system of the present application is shown. It is a further improvement of the system shown in fig. 1: it likewise takes video as input, performs attribute and element recognition on the video frames, and outputs the description data corresponding to the video. As shown in fig. 3, in addition to the reading device 101, the scheduling device 102, the execution device 103 and the integrating device 104, the improved video description data generation system 100 further includes a query device 105 and a judging device 106.
Referring to fig. 4, the corresponding method of generating video description data with the improved video description data generation system 100 shown in fig. 3 is shown; the generation method is described in more detail below with reference to fig. 4 and 5. As shown in fig. 4, the method comprises the following steps:
s401, reading a source video, generating an identification code of the source video, and identifying a video frame of the source video based on the identification code.
Similar to step S201, please refer to the description of step S201, and the detailed description is omitted here.
S402, before the reading device 101 caches the identified video frames into the task pool, querying whether a data file identified by the identification code and describing the source video exists in the data set identified by the preset address; step S403 is performed in response to the query result being no, or step S406 in response to the query result being yes.
The data set stores the data files of description data produced after the video description data generation system 100 recognizes source videos.
The preset address is the storage address set by the video description data generation system 100 for storing the data set.
Based on the description of step S201, when the reading device 101 reads a source video, it generates the corresponding identification code with a fixed algorithm; that is, as long as the source video is the same video, the identification codes generated on repeated reads are always identical.
After the reading device 101 has identified the video frames of a source video with its identification code, the query device 105 queries whether the data set already contains a data file identified by that identification code. If not, no description data of the source video exists in the data set, i.e. the video description data generation system 100 has not yet recognized this source video. Conversely, if it does, description data of the source video already exists in the data set, i.e. the video description data generation system 100 has already recognized this source video, which has been read repeatedly.
S403, checking whether the task pool is saturated; step S404 is executed in response to the check result being no, or step S405 in response to the check result being yes.
The task pool caches the to-be-recognized video frames identified by their identification codes. If the check finds the task pool saturated, too many unfinished recognition tasks currently exist in the pool, the task pressure is high, and the video description data generation system 100 is running under high load; otherwise the task pressure is low, and the tasks to be recognized in the pool are still within the loadable range of the video description data generation system 100.
S404, starting from the first video frame of the source video, caching the identified video frames into the task pool.
Since the source video has not been recognized and no description data of it exists in the data set, it must be recognized from the beginning. The reading device 101 therefore identifies the video frames of the source video with its identification code starting from the first video frame, and caches them into the task pool, waiting for the scheduling device 102 to schedule them to the execution device 103 for recognition.
S405, monitoring the capacity state of the task pool, and executing step S404 when a margin is detected in the task pool.
If the task pool is saturated, the reading device 101 monitors its capacity state. When, at a later moment, some of the to-be-recognized frames in the saturated pool have been scheduled by the scheduling device 102 to the execution device 103 for recognition, a margin appears in the pool; as soon as the margin is detected, the read video frames of the source video identified by the identification code can continue to be cached into the pool.
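Sketched with the same bounded-queue stand-in for the task pool as above, the saturation check and margin monitoring of steps S403 to S405 could look as follows (a blocking put() would achieve the same without explicit polling):

    import queue
    import time

    task_pool = queue.Queue(maxsize=256)    # assumed capacity

    def cache_frame(code, frame, poll=0.05):
        # S403: check saturation; S405: while saturated, monitor the
        # capacity state; S404: cache the frame once a margin appears.
        while task_pool.full():
            time.sleep(poll)
        task_pool.put((code, frame))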
S406, judging whether the queried data file contains an end identifier; step S407 is executed in response to the judgment result being no, or step S410 in response to the judgment result being yes.
Based on the foregoing description of step S204, the integrating device 104 judges each encapsulated data file and, according to the result, decides whether to add an end identifier to it.
Therefore, by judging whether a data file contains the end identifier, the judging device 106 can determine whether the data file is the complete description data of the corresponding source video, and thus whether that source video has been completely recognized: clearly, when the data file contains the end identifier, the source video has been completely recognized; conversely, when it does not, recognition of the source video was interrupted partway through.
S407, checking whether the task pool is saturated; step S408 is performed in response to the result of the check being no, or step S409 is performed in response to the result of the check being yes.
The task pool caches the to-be-recognized video frames identified by their identification codes. If the check finds the task pool saturated, too many unfinished recognition tasks exist in the pool and the video description data generation system 100 is running under high load; otherwise, the tasks to be recognized in the pool are still within the loadable range of the video description data generation system 100.
S408, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame in the source video whose identification code is identical to that of the queried data file, caching the identified video frames into the task pool.
Based on the foregoing description of step S203, the integrating device 104 encapsulates the description data of all video frames identified by the same identification code into a data file according to the time sequence of those frames in the corresponding source video. When the queried data file is not the complete description data of its source video, the video frame at which the breakpoint occurred is determined from the queried data file; then, starting from that frame in the source video whose identification code is identical to that of the queried data file, the identified video frames are cached into the task pool, waiting for the scheduling device to schedule them to the execution device for recognition.
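A sketch of locating the breakpoint, assuming (per the time-ordered encapsulation of S203) that an incomplete data file contains the frames recognized so far, contiguously from the start of the video:

    def breakpoint_index(data_file):
        # Frames are written in time order, so the breakpoint sits right
        # after the last frame written to the incomplete data file.
        return len(data_file["frames"])

    def frames_to_resume(source_frames, data_file):
        start = breakpoint_index(data_file)
        return source_frames[start:]   # re-cache only the unrecognized tail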
S409, monitoring the capacity state of the task pool, and executing step S408 when a margin is detected in the task pool.
If the task pool is saturated, the reading device 101 monitors its capacity state. When, at a later moment, some of the to-be-recognized frames in the saturated pool have been scheduled by the scheduling device 102 to the execution device 103 for recognition, a margin appears in the pool; as soon as the margin is detected, the read video frames of the source video identified by the identification code can continue to be cached into the pool.
S410, discarding the source video whose identification code is identical to that of the queried data file.
Description data of this source video already exists in the data set, i.e. the video description data generation system 100 has already recognized it; the source video has been read repeatedly and is therefore discarded, to avoid repeated, wasted recognition.
S411, searching whether an idle identification unit exists; step S412 is performed in response to the search result being yes, or step S413 is performed in response to the search result being no.
Likewise, before the scheduling device 102 schedules the video frames identified by the identification code in the task pool to the execution device 103 for recognition, it must also search whether an idle recognition unit exists in the execution device 103. If one exists, the execution device 103 of the video description data generation system 100 is running under low load and can still carry more recognition tasks; otherwise, the execution device 103 is running under high load, and all of its recognition units are busy recognizing video frames.
S412, the identified video frames are called from the task pool and distributed to the idle identification unit for identification, and description data of the identified video frames are generated.
Similar to step S202, please refer to the description of step S202, and the detailed description is omitted here.
S413, the resource status of the execution device 103 is monitored, and when the occurrence of the idle identification unit is monitored, step S412 is executed.
If no idle recognition unit exists in the execution device 103, the scheduling device 102 monitors the resource state of the execution device 103. When recognition of a video frame completes and an idle recognition unit appears in the execution device 103, video frames to be recognized can again be retrieved from the task pool and dispatched to the execution device 103, which sends them to the idle recognition unit for recognition.
And S414, encapsulating the description data of all the video frames identified by the same identification code into a data file according to the time sequence of all the video frames identified by the same identification code in the corresponding source video, identifying the encapsulated data file by using the same identification code, and storing the encapsulated data file into the data set based on the preset address.
Similar to step S203, please refer to the description of step S203, and the detailed description is omitted here.
S415, judging whether the packaged data file identified by the identification code is the complete description data of the source video, and adding an end identification to the packaged data file identified by the identification code in response to the judgment result being yes.
Similar to step S204, please refer to the description of step S204; the detailed description is omitted here.
The source video queue pool, the video frame task pool and the thread pool are arranged in the application to control memory overhead and to prevent memory from rising without bound as tasks accumulate, which would cause the device to freeze.
Based on the same inventive concept, an embodiment of the present application provides a video description data generating system. As shown in fig. 1, the video description data generating system includes: a reading device 101, a scheduling device 102, an executing device 103 and an integrating device 104;
the reading device 101 is configured to read a source video, generate an identification code of the source video, identify a video frame of the source video based on the identification code, and cache the identified video frame into a task pool;
the scheduling device 102 is configured to search whether an idle identifying unit exists in the executing device 103, and in response to the search result being yes, retrieve the identified video frame from the task pool and send the identified video frame to the executing device;
the executing device 103 is configured to receive the identified video frame dispatched by the scheduling device 102, distribute the identified video frame to the idle identifying unit for identification, and generate description data of the identified video frame;
The integrating device 104 is configured to package the description data of all video frames identified by the same identifier into a data file according to the time sequence of all video frames identified by the same identifier in the corresponding source video, identify the packaged data file by using the same identifier, and store the packaged data file into the data set based on the preset address.
Optionally, the system further comprises: a query device 105 and a judging device 106; wherein:
the query device 105 is configured to query, before the step of the reading device 101 caching the identified video frames into the task pool, whether a data file identified by the identification code and describing the source video exists in the data set identified by the preset address;
the judging device 106 is configured to judge, in response to the query result of the query device 105 being yes, whether the queried data file contains an end identifier;
the reading device 101 is further configured to perform any of the following steps:
in response to the query result of the query device 105 being no, caching the identified video frames into the task pool starting from the first video frame of the source video;
in response to the judgment result of the judging device 106 being no, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame in the source video whose identification code is identical to that of the queried data file, caching the identified video frames into the task pool;
in response to the judgment result of the judging device 106 being yes, discarding the source video whose identification code is identical to that of the queried data file.
Optionally, the reading device 101 is further configured to check whether the task pool is saturated and, in response to the check result being no, perform any of the following steps:
in response to the query result of the query device 105 being no, caching the identified video frames into the task pool starting from the first video frame of the source video;
in response to the judgment result of the judging device 106 being no, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame of the source video, caching the identified video frames into the task pool.
Optionally, the reading device 101 is further configured to monitor the capacity state of the task pool in response to the check result being that the task pool is saturated and, when a margin is detected in the task pool, perform any of the following steps:
in response to the query result of the query device 105 being no, caching the identified video frames into the task pool starting from the first video frame of the source video;
in response to the judgment result of the judging device 106 being no, determining the video frame corresponding to the breakpoint of the queried data file and, starting from that video frame of the source video, caching the identified video frames into the task pool.
Optionally, the scheduling device 102 is further configured to monitor the resource state of the execution device 103 in response to the result of searching whether an idle recognition unit exists in the execution device 103 being no, and, when an idle recognition unit is detected in the execution device 103, retrieve the identified video frames from the task pool and dispatch them to the execution device.
Optionally, the integrating device 104 is further configured to judge whether the encapsulated data file identified by the identification code is the complete description data of the source video and, in response to the judgment result being yes, add an end identifier to the encapsulated data file identified by the identification code.
For system embodiments, the description is relatively simple as it is substantially similar to method embodiments, and reference is made to the description of method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical and similar parts the embodiments may be referred to one another.
Based on the same inventive concept, another embodiment of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any of the above embodiments of the present application.
Based on the same inventive concept, another embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the steps in the method according to any one of the foregoing embodiments of the present application.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. The appended claims are therefore intended to be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the application. Finally, it should also be noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or terminal device that comprises a list of elements includes not only those elements but possibly also other elements not expressly listed, or elements inherent to such a process, method, article or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or terminal device that comprises the element.
The video description data generation system, method, storage medium and device provided by the application have been described in detail above. Specific examples have been used to illustrate the principles and embodiments of the application, and the description of the above embodiments is only intended to help understand the method and core idea of the application. At the same time, those skilled in the art will, following the ideas of the application, make changes to the specific embodiments and application scope; in summary, the contents of this specification should not be construed as limiting the application.

Claims (12)

1. A video description data generation system, the system comprising: a reading device, a scheduling device, an execution device, and an integrating device; wherein:
the reading device is used for reading a source video, generating an identification code of the source video, labeling the video frames of the source video with the identification code, and caching the labeled video frames into a task pool;
the scheduling device is used for searching whether an idle recognition unit exists in the execution device and, in response to the search result being yes, retrieving a labeled video frame from the task pool and dispatching it to the execution device;
the execution device is used for receiving the labeled video frame dispatched by the scheduling device, distributing it to the idle recognition unit for recognition, and generating description data of the labeled video frame;
the integrating device is used for encapsulating the description data of all video frames labeled with the same identification code into one data file according to the time order of those video frames in the corresponding source video, labeling the encapsulated data file with the same identification code, and storing the encapsulated data file into a data set based on a preset address;
the system further comprises: a query device and a judging device; wherein:
the query device is used for querying, before the reading device caches the labeled video frames into the task pool, whether a data file labeled with the identification code and describing the source video exists in the data set identified by the preset address;
the judging device is used for judging, in response to the query result of the query device being yes, whether the queried data file contains an end identifier;
the reading device is further configured to perform any one of the following steps:
in response to the query result of the query device being no, caching the labeled video frames into the task pool starting from the first video frame of the source video;
in response to the judgment result of the judging device being no, determining the video frame corresponding to the breakpoint of the queried data file, and caching the labeled video frames into the task pool starting from the breakpoint video frame of the source video whose identification code is identical to that of the queried data file;
and in response to the judgment result of the judging device being yes, discarding the source video whose identification code is identical to that of the queried data file.
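
For illustration only, here is a minimal Python sketch of the pipeline recited in claim 1, with the scheduling and execution devices collapsed into a single worker thread; all names (make_source_id, the stubbed decoder, the stubbed recognizer) are hypothetical, and the frame recognition itself is a placeholder, so this is a sketch of the data flow under those assumptions, not the patented implementation.

import hashlib
import json
import queue
import threading

def make_source_id(path):
    # Identification code for a source video; a path hash is one plausible choice.
    return hashlib.sha256(path.encode("utf-8")).hexdigest()[:16]

def read_frames(path):
    # Stand-in for a real decoder: yield (frame_index, frame_payload) pairs.
    for i in range(10):
        yield i, "frame-bytes-%d" % i

def reader(path, pool, start_frame=0):
    # Reading device: label each frame with the identification code and cache
    # it into the task pool; start_frame supports breakpoint resume.
    source_id = make_source_id(path)
    for index, payload in read_frames(path):
        if index >= start_frame:
            pool.put({"id": source_id, "index": index, "frame": payload})
    pool.put(None)  # sentinel: no more frames

def recognizer(pool, results):
    # Scheduling + execution devices collapsed into one worker: take a labeled
    # frame from the task pool and generate its description data (stubbed).
    while True:
        task = pool.get()
        if task is None:
            break
        task["description"] = "objects seen in frame %d" % task["index"]
        results.append(task)

def integrate(results):
    # Integrating device: order descriptions by frame index and encapsulate
    # them into one data file keyed by the shared identification code.
    results.sort(key=lambda t: t["index"])
    return json.dumps({
        "id": results[0]["id"],
        "frames": [{"index": t["index"], "description": t["description"]}
                   for t in results],
    })

pool = queue.Queue(maxsize=4)  # the task pool; maxsize models saturation
results = []
worker = threading.Thread(target=recognizer, args=(pool, results))
worker.start()
reader("demo.mp4", pool)
worker.join()
print(integrate(results))
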
2. The system of claim 1, wherein:
the reading device is further used for checking whether the task pool is saturated and, in response to the check result being no, executing any one of the following steps:
in response to the query result of the query device being no, caching the labeled video frames into the task pool starting from the first video frame of the source video;
and in response to the judgment result of the judging device being no, determining the video frame corresponding to the breakpoint of the queried data file, and caching the labeled video frames into the task pool starting from the breakpoint video frame of the source video.
3. The system of claim 2, wherein:
the reading device is further used for monitoring the capacity state of the task pool in response to the check result being that the task pool is saturated, and executing any one of the following steps when spare capacity is detected in the task pool:
in response to the query result of the query device being no, caching the labeled video frames into the task pool starting from the first video frame of the source video;
and in response to the judgment result of the judging device being no, determining the video frame corresponding to the breakpoint of the queried data file, and caching the labeled video frames into the task pool starting from the breakpoint video frame of the source video.
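
Continuing the sketch above, the saturation behavior of claims 2 and 3 can be pictured as a producer that first checks whether the bounded task pool has room and, when it is saturated, keeps monitoring its capacity state; the polling interval is an assumption, and queue.Queue.put would in fact block safely on its own, so the explicit loop exists only to make the claimed check visible.

import queue
import time

def cache_when_room(pool, task, poll_seconds=0.05):
    # Claim 2: check whether the task pool is saturated; cache immediately
    # when the check result is no.
    # Claim 3: when it is saturated, keep monitoring the capacity state and
    # cache as soon as spare capacity appears.
    while pool.full():
        time.sleep(poll_seconds)
    pool.put(task)  # put() itself also blocks if the pool fills up again
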
4. The system according to any one of claims 1 to 3, wherein:
the scheduling device is further used for monitoring the resource state of the execution device in response to the search result for an idle recognition unit being no, and, when an idle recognition unit is detected in the execution device, retrieving a labeled video frame from the task pool and dispatching it to the execution device.
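
A hedged sketch of the claim-4 scheduling behavior, assuming the execution device's resource state is visible as one busy flag per recognition unit; find_idle_unit, dispatch, and the polling interval are illustrative names, not the claimed mechanism.

import queue
import time
from typing import Optional

def find_idle_unit(units) -> Optional[int]:
    # Resource state of the execution device, modeled as a busy flag per unit.
    for i, busy in enumerate(units):
        if not busy:
            return i
    return None

def dispatch(pool, units, poll_seconds=0.05):
    # Search result 'no': keep monitoring until an idle recognition unit
    # appears, then retrieve a labeled frame from the task pool (blocking if
    # the pool is empty) and hand it to that unit.
    idle = find_idle_unit(units)
    while idle is None:
        time.sleep(poll_seconds)
        idle = find_idle_unit(units)
    units[idle] = True  # mark the unit busy for the duration of recognition
    return idle, pool.get()

In a fuller implementation, the recognition unit would reset its busy flag once the description data for its frame has been generated, making it visible to the next search.
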
5. The system according to any one of claims 1 to 3, wherein:
the integrating device is further used for judging whether the encapsulated data file labeled with the identification code constitutes the complete description data of the source video, and adding an end identifier to that data file in response to the judgment result being yes.
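
The completeness check of claim 5 might look as follows, assuming the JSON data-file layout of the first sketch and an "end" key as the end identifier; total_frames would come from the container metadata of the source video and is an assumption here.

import json

def finalize(data_file, total_frames):
    # Decide whether the encapsulated data file is the complete description
    # of the source video; if so, add an end identifier so a later run can
    # distinguish a finished file from a breakpoint.
    doc = json.loads(data_file)
    if len(doc["frames"]) == total_frames:
        doc["end"] = True  # assumed format of the end identifier
    return json.dumps(doc)

A caller would apply finalize to the data file produced by integrate in the first sketch before storing it into the data set.
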
6. A method of generating video description data, the method comprising:
reading a source video, generating an identification code of the source video, labeling the video frames of the source video with the identification code, and caching the labeled video frames into a task pool;
searching whether an idle recognition unit exists and, in response to the search result being yes, retrieving a labeled video frame from the task pool, sending it to the idle recognition unit for recognition, and generating description data of the labeled video frame;
encapsulating the description data of all video frames labeled with the same identification code into one data file according to the time order of those video frames in the corresponding source video, labeling the encapsulated data file with the same identification code, and storing the encapsulated data file into a data set based on a preset address;
wherein the method further comprises:
before the step of caching the labeled video frames into the task pool, querying whether a data file labeled with the identification code and describing the source video exists in the data set identified by the preset address;
in response to the query result being no, caching the labeled video frames into the task pool starting from the first video frame of the source video; or, in response to the query result being yes, judging whether the queried data file contains an end identifier;
in response to the judgment result being no, determining the video frame corresponding to the breakpoint of the queried data file, and caching the labeled video frames into the task pool starting from the breakpoint video frame of the source video whose identification code is identical to that of the queried data file; or, in response to the judgment result being yes, discarding the source video whose identification code is identical to that of the queried data file.
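
A sketch of the three-way decision of claim 6, assuming the data set is a dictionary keyed by identification code and the data files use the JSON layout and end identifier of the earlier sketches; the function returns the frame index from which to resume caching, or None when the fully described source video should be discarded.

import json

def resume_point(data_set, source_id):
    # Query result 'no': no data file for this identification code yet,
    # so start from the first video frame.
    data_file = data_set.get(source_id)
    if data_file is None:
        return 0
    doc = json.loads(data_file)
    # Judgment result 'yes': the file carries an end identifier, so the
    # source video is fully described and can be discarded.
    if doc.get("end"):
        return None
    # Judgment result 'no': resume from the frame after the breakpoint.
    if not doc["frames"]:
        return 0
    return max(f["index"] for f in doc["frames"]) + 1
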
7. The method of claim 6, wherein, prior to the step of caching the labeled video frames into the task pool, the method further comprises:
checking whether the task pool is saturated and, in response to the check result being no, executing any one of the following steps:
in response to the query result being no, caching the labeled video frames into the task pool starting from the first video frame of the source video;
and in response to the judgment result being no, determining the video frame corresponding to the breakpoint of the queried data file, and caching the labeled video frames into the task pool starting from the breakpoint video frame of the source video.
8. The method of claim 7, wherein the method further comprises:
in response to the task pool being saturated, monitoring the capacity state of the task pool, and executing any one of the following steps when spare capacity is detected in the task pool:
in response to the query result being no, caching the labeled video frames into the task pool starting from the first video frame of the source video;
and in response to the judgment result being no, determining the video frame corresponding to the breakpoint of the queried data file, and caching the labeled video frames into the task pool starting from the breakpoint video frame of the source video.
9. The method according to any one of claims 6 to 8, further comprising:
in response to the result of searching whether an idle recognition unit exists being no, monitoring the resource state and, when an idle recognition unit appears, retrieving a labeled video frame from the task pool and sending it to the idle recognition unit.
10. The method according to any one of claims 6 to 8, further comprising:
judging whether the encapsulated data file labeled with the identification code constitutes the complete description data of the source video, and adding an end identifier to that data file in response to the judgment result being yes.
11. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 6 to 10.
12. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 6 to 10.
CN202011020291.2A 2020-09-24 2020-09-24 Video description data generation system, method, storage medium and equipment Active CN112261314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011020291.2A CN112261314B (en) 2020-09-24 2020-09-24 Video description data generation system, method, storage medium and equipment


Publications (2)

Publication Number Publication Date
CN112261314A CN112261314A (en) 2021-01-22
CN112261314B true CN112261314B (en) 2023-09-15

Family

ID=74233146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011020291.2A Active CN112261314B (en) 2020-09-24 2020-09-24 Video description data generation system, method, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN112261314B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008146503A (en) * 2006-12-12 2008-06-26 Sony Computer Entertainment Inc Distributed processing method, operating system, and multiprocessor system
CN105302645A (en) * 2015-10-29 2016-02-03 无锡天脉聚源传媒科技有限公司 Task distribution method and apparatus
CN109040779A (en) * 2018-07-16 2018-12-18 腾讯科技(深圳)有限公司 Caption content generation method, device, computer equipment and storage medium
CN111314741A (en) * 2020-05-15 2020-06-19 腾讯科技(深圳)有限公司 Video super-resolution processing method and device, electronic equipment and storage medium
CN111464865A (en) * 2020-06-18 2020-07-28 北京美摄网络科技有限公司 Video generation method and device, electronic equipment and computer readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080184245A1 (en) * 2007-01-30 2008-07-31 March Networks Corporation Method and system for task-based video analytics processing
US8896708B2 (en) * 2008-10-31 2014-11-25 Adobe Systems Incorporated Systems and methods for determining, storing, and using metadata for video media content
US20110274178A1 (en) * 2010-05-06 2011-11-10 Canon Kabushiki Kaisha Method and device for parallel decoding of video data units
WO2013181756A1 (en) * 2012-06-08 2013-12-12 Jugnoo Inc. System and method for generating and disseminating digital video
KR20140039920A (en) * 2012-09-25 2014-04-02 삼성전자주식회사 Image data processing method and apparatus, and electronic device including the same
US20140188978A1 (en) * 2012-12-31 2014-07-03 Microsoft Corporation Cloud-based media processing pipeline
CN104980685B (en) * 2014-04-14 2018-09-14 纬创资通股份有限公司 Video service providing method and Video service provide system
US10192583B2 (en) * 2014-10-10 2019-01-29 Samsung Electronics Co., Ltd. Video editing using contextual data and content discovery using clusters


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on data center resource scheduling methods for intelligent video surveillance networks; Gao Yihong; China Master's Theses Full-text Database, Information Science and Technology; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant