CN111385646A - Multimedia video teaching platform based on video feature center recognition - Google Patents
- Publication number
- CN111385646A (publication); application CN201811650071.0A
- Authority
- CN
- China
- Prior art keywords
- video
- feature center
- feature
- matching
- recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Abstract
The invention provides a multimedia video teaching platform based on video feature center recognition, comprising a video input unit, a video feature center recognition and authentication unit and a video output unit. The video input unit inputs a multimedia teaching video. The recognition and authentication unit comprises a video feature center identification extraction module, a video feature center identification matching module and a feature center identification performance evaluation module: the extraction module extracts the features of a video and generates its video feature center identification; the matching module compares two videos' identifications to determine whether their contents are consistent and thereby obtains the video matching the query video; and the performance evaluation module evaluates the performance of the matching module. The invention realizes efficient multimedia video teaching.
Description
Technical Field
The invention relates to the technical field of multimedia teaching, in particular to an efficient multimedia teaching management system.
Background
Existing multimedia video teaching platforms cannot effectively authenticate teaching videos and cannot play them promptly, so the teaching system is inefficient.
Video feature center identification is a new means of managing and protecting video resources: it can uniquely identify a video, much as a fingerprint uniquely identifies a person. A video feature center identification is a compact digital representation of digital video content — a unique identifier formed by analyzing the video and extracting and computing features from it.
Disclosure of Invention
In view of the above problems, the present invention aims to provide an efficient multimedia teaching management system.
The purpose of the invention is realized by adopting the following technical scheme:
A multimedia video teaching platform based on video feature center recognition comprises a video input unit, a video feature center recognition and authentication unit and a video output unit. The video input unit is used for inputting a multimedia teaching video; the video feature center recognition and authentication unit performs video feature center recognition and authentication on the input multimedia teaching video to obtain a matching video of the input video; and the video output unit plays the matching video. The recognition and authentication unit comprises a video feature center identification extraction module, a video feature center identification matching module and a feature center identification performance evaluation module: the extraction module extracts the features of a video and generates its video feature center identification; the matching module compares two videos' identifications to determine whether their contents are consistent and thereby obtains the video matching the query video; and the performance evaluation module evaluates the performance of the matching module.
Further, in the multimedia video teaching platform based on video feature center recognition, the video feature center identification extraction module comprises a video decoding unit, a feature extraction unit and a feature center recognition modeling unit: the video decoding unit decodes the original video sequence to obtain a YUV sequence, the feature extraction unit extracts features of the video from the YUV sequence, and the feature center recognition modeling unit establishes a feature center recognition model from the extracted video features to obtain the video feature center identification.
Further according to the multimedia video teaching platform based on video feature center recognition, the feature extraction unit comprises a feature extraction subunit and a frame rate conversion subunit, the feature extraction subunit is used for extracting features of a video at an original frame rate, and the frame rate conversion subunit is used for converting the video at the original frame rate into a fixed frame rate; the feature extraction subunit is configured to extract features of the video at the original frame rate, and specifically includes:
a. extracting brightness information Y from the YUV sequence to form a new video sequence;
b. Assume each video frame is M × N pixels, so its geometric center is (M/2, N/2); take this geometric center as the coordinate origin O. Let f_k(x, y) be the brightness value at position (x, y) of the k-th video frame relative to O, with f_k(x, y) in the range [0, 255]. From the brightness values, compute the feature center (c_xk, c_yk) of each video frame as the brightness-weighted centroid: c_xk = Σ_{x,y} x·f_k(x, y) / Σ_{x,y} f_k(x, y), and likewise c_yk with y in place of x in the numerator;
c. Based on the feature center, compute the feature center angle β_k = arctan(c_yk / c_xk), where β_k denotes the feature center angle of the k-th video frame. Compute the feature center angles of all video frames of the whole video sequence and use them to construct the one-dimensional feature vector β = [β_1, β_2, …, β_K], where K is the number of video frames in the video sequence.
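Steps a–c above can be sketched in Python. The patent's formula images are not reproduced in the text, so two readings are assumed here: the feature center is the brightness-weighted centroid in origin-centred coordinates, and the angle is the arctangent of the centroid's coordinates.

```python
import numpy as np

def frame_centroid_angle(frame):
    """Feature center and feature center angle of one Y-plane frame.

    `frame` is an M x N array of luminance values in [0, 255] with a
    nonzero sum. Coordinates are shifted so the geometric center
    (M/2, N/2) is the origin O, matching the patent's setup; the
    centroid formula and arctangent are assumed readings of the
    omitted equations.
    """
    M, N = frame.shape
    ys, xs = np.mgrid[0:M, 0:N]
    xs = xs - N / 2.0            # shift: geometric center becomes origin
    ys = ys - M / 2.0
    total = frame.sum()
    cx = (xs * frame).sum() / total   # c_xk: brightness-weighted centroid
    cy = (ys * frame).sum() / total   # c_yk
    beta = np.arctan2(cy, cx)         # feature center angle beta_k
    return (cx, cy), beta

def video_angle_vector(frames):
    """One-dimensional feature vector [beta_1, ..., beta_K]."""
    return np.array([frame_centroid_angle(f)[1] for f in frames])
```

For a uniformly bright frame the centroid sits at the (pixel-grid) center, so the angle carries no content; frames with uneven luminance yield distinctive angles.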
Further according to the multimedia video teaching platform based on video feature center recognition of the present invention, the frame rate conversion subunit is configured to convert a video at the original frame rate into a fixed frame rate, specifically: a. Let Q be the frame rate of the original video sequence and P the fixed frame rate after conversion; the feature center angle θ_f of the f-th frame at frame rate P is converted from the feature center angles β_k and β_{k+1} of two consecutive frames at frame rate Q.
b. Construct the one-dimensional feature vector θ from the feature center angles of all the converted video frames: θ = [θ_1, θ_2, …, θ_M], where M is the number of video frames in the video at frame rate P; the feature vector θ is the extracted video feature.
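A minimal sketch of the frame-rate conversion. The patent's exact conversion formula is not reproduced in the source, so this sketch assumes plain linear interpolation between the two consecutive angles β_k and β_{k+1}, with k = ⌊f·Q/P⌋:

```python
import numpy as np

def convert_frame_rate(beta, Q, P):
    """Resample the angle sequence from frame rate Q to fixed rate P.

    Assumption: the f-th angle at rate P is a linear blend of the two
    consecutive angles beta_k and beta_{k+1} at rate Q, where
    k = floor(f * Q / P) and the blend weight is the fractional part.
    """
    K = len(beta)
    M = int(np.floor(K * P / Q))       # frames in the converted video
    theta = np.empty(M)
    for f in range(M):
        t = f * Q / P                  # position on the original timeline
        k = min(int(np.floor(t)), K - 2)
        a = t - np.floor(t)            # fractional interpolation weight
        theta[f] = (1 - a) * beta[k] + a * beta[k + 1]
    return theta
```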
Further, according to the multimedia video teaching platform based on video feature center recognition of the present invention, the feature center recognition modeling unit is configured to establish a feature center recognition model according to the extracted video features, specifically:
a. Compute the feature center relative angle γ_i as γ_i = θ_{i+2} + θ_{i+1} − θ_i;
b. Compute the feature center relative angles over the whole video sequence and establish the video feature center identification model from them: the video feature center identification is γ = [γ_1, γ_2, …, γ_{M−2}].
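The identification model follows directly from the relative-angle definition γ_i = θ_{i+2} + θ_{i+1} − θ_i, yielding M − 2 components for an M-frame sequence:

```python
import numpy as np

def fingerprint(theta):
    """Video feature center identification gamma from the angle vector theta.

    gamma_i = theta_{i+2} + theta_{i+1} - theta_i, as defined in the text;
    the result has M - 2 components for an M-component input.
    """
    theta = np.asarray(theta, dtype=float)
    return theta[2:] + theta[1:-1] - theta[:-2]
```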
Further according to the multimedia video teaching platform based on video feature center recognition of the present invention, the video feature center identification matching module comprises a first matching unit, a second matching unit and a comprehensive matching unit: the first matching unit calculates a first match value between video feature center identifications, the second matching unit calculates a second match value between video feature center identifications, and the comprehensive matching unit determines the video matching degree from the first and second match values. The first match value is determined using the following equation:
where the left-hand side denotes the first match value between the two video feature center identifications, γ = [γ_1, γ_2, …, γ_{M−2}] denotes the identification of the query video, and ω = [ω_1, ω_2, …, ω_{M−2}] denotes any video feature center identification in the video database;
the second match value is determined using the following equation:
where the left-hand side denotes the second match value between the two video feature center identifications;
the determination of the video matching degree is performed by using a matching factor, and the matching factor is determined by using the following formula:
where the left-hand side denotes the matching factor between the two videos. If the matching factor is smaller than the set threshold, the two videos are considered matched; otherwise they do not match, and the search of the video database continues.
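The matching pipeline can be sketched as follows. The patent's formulas for the two match values and the matching factor are not reproduced in the source, so this sketch substitutes common stand-ins — a length-normalised Euclidean distance, one minus the cosine similarity, and their weighted sum — while keeping the stated rule that a factor below the threshold means a match:

```python
import numpy as np

def match_factor(gamma_q, gamma_db, w1=0.5, w2=0.5):
    """Combine two assumed match values into a matching factor.

    d1: Euclidean distance per component (first match value, assumed);
    d2: 1 - cosine similarity (second match value, assumed);
    factor: weighted sum (assumed). Smaller means more alike.
    """
    gq = np.asarray(gamma_q, dtype=float)
    gd = np.asarray(gamma_db, dtype=float)
    d1 = np.linalg.norm(gq - gd) / len(gq)
    d2 = 1.0 - np.dot(gq, gd) / (np.linalg.norm(gq) * np.linalg.norm(gd))
    return w1 * d1 + w2 * d2

def find_match(gamma_q, database, threshold=0.1):
    """Return the first database entry whose factor falls below the threshold.

    `database` is an iterable of (name, identification) pairs; returns
    None when no entry matches, i.e. the search is exhausted.
    """
    for name, gamma_db in database:
        if match_factor(gamma_q, gamma_db) < threshold:
            return name
    return None
```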
Further according to the multimedia video teaching platform based on video feature center recognition of the present invention, the feature center identification performance evaluation module evaluates the performance of the video feature center identification matching module by means of an evaluation factor, determined using the following formula: where ZC denotes the value of the evaluation factor, T1 the number of retrieved videos whose content is consistent with the query video, T2 the number of videos in the video database whose content is consistent with the query video, T3 the number of retrieved videos whose content is inconsistent with the query video, and T4 the number of videos in the video database whose content is inconsistent with the query video. The larger the evaluation factor, the better the performance of the video feature center identification matching module.
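The evaluation factor's formula is likewise not reproduced in the source. As a stated assumption, the sketch below combines the counts into an F1-style score — recall T1/T2 and precision T1/(T1 + T3) — which grows with better matching performance, as the text requires; T4 goes unused under this assumption:

```python
def evaluation_factor(T1, T2, T3, T4):
    """Assumed F1-style stand-in for the patent's omitted ZC formula.

    T1: retrieved videos consistent with the query content;
    T2: videos in the database consistent with the query content;
    T3: retrieved videos inconsistent with the query content;
    T4: inconsistent videos in the database (unused here, listed for
        fidelity to the text). Larger output means better matching.
    """
    recall = T1 / T2 if T2 else 0.0
    precision = T1 / (T1 + T3) if (T1 + T3) else 0.0
    if recall + precision == 0.0:
        return 0.0
    return 2.0 * recall * precision / (recall + precision)
```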
The invention has the beneficial effects that: and high-efficiency multimedia video teaching is realized.
Detailed Description
The invention is further described with reference to the following examples.
The high-efficiency multimedia video teaching platform based on video feature center identification comprises a video input unit, a video feature center identification and authentication unit and a video output unit, wherein the video input unit is used for inputting a multimedia teaching video, the video feature center identification and authentication unit is used for carrying out video feature center identification and authentication on the input multimedia teaching video to obtain a matching video of the input video, and the video output unit is used for playing the matching video; the video feature center identification authentication unit comprises a video feature center identification extraction module, a video feature center identification matching module and a feature center identification performance evaluation module, wherein the video feature center identification extraction module is used for extracting the features of videos and generating video feature center identification features, and the video feature center identification matching module is used for comparing whether the contents of the two videos are consistent or not according to the video feature center identification features to obtain a video matched with the query video; the characteristic center identification performance evaluation module is used for evaluating the performance of the video characteristic center identification matching module.
Further, the video feature center identification extraction module comprises a video decoding unit, a feature extraction unit and a feature center recognition modeling unit, wherein the video decoding unit decodes the original video sequence to obtain a YUV sequence, the feature extraction unit extracts features of the video from the YUV sequence, and the feature center recognition modeling unit establishes a feature center recognition model from the extracted video features to obtain the video feature center identification.
The feature extraction unit comprises a feature extraction subunit and a frame rate conversion subunit, wherein the feature extraction subunit is used for extracting features of the video at the original frame rate, and the frame rate conversion subunit is used for converting the video at the original frame rate into a fixed frame rate; the feature extraction subunit is configured to extract features of the video at the original frame rate, and specifically includes:
a. extracting brightness information Y from the YUV sequence to form a new video sequence;
b. Assume each video frame is M × N pixels, so its geometric center is (M/2, N/2); take this geometric center as the coordinate origin O. Let f_k(x, y) be the brightness value at position (x, y) of the k-th video frame relative to O, with f_k(x, y) in the range [0, 255]. From the brightness values, compute the feature center (c_xk, c_yk) of each video frame as the brightness-weighted centroid: c_xk = Σ_{x,y} x·f_k(x, y) / Σ_{x,y} f_k(x, y), and likewise c_yk with y in place of x in the numerator;
c. Based on the feature center, compute the feature center angle β_k = arctan(c_yk / c_xk), where β_k denotes the feature center angle of the k-th video frame. Compute the feature center angles of all video frames of the whole video sequence and use them to construct the one-dimensional feature vector β = [β_1, β_2, …, β_K], where K is the number of video frames in the video sequence.
Further, the frame rate conversion subunit is configured to convert a video with an original frame rate into a fixed frame rate, specifically:
a. Let Q be the frame rate of the original video sequence and P the fixed frame rate after conversion; the feature center angle θ_f of the f-th frame at frame rate P is converted from the feature center angles β_k and β_{k+1} of two consecutive frames at frame rate Q.
b. Construct the one-dimensional feature vector θ from the feature center angles of all the converted video frames: θ = [θ_1, θ_2, …, θ_M], where M is the number of video frames in the video at frame rate P; the feature vector θ is the extracted video feature.
Further, the feature center identification modeling unit is configured to establish a feature center identification model according to the extracted video features, specifically:
a. Compute the feature center relative angle γ_i as γ_i = θ_{i+2} + θ_{i+1} − θ_i;
b. Compute the feature center relative angles over the whole video sequence and establish the video feature center identification model from them: the video feature center identification is γ = [γ_1, γ_2, …, γ_{M−2}].
The video feature center identification matching module comprises a first matching unit, a second matching unit and a comprehensive matching unit, wherein the first matching unit is used for calculating a first matching value among video feature center identifications, the second matching unit is used for calculating a second matching value among the video feature center identifications, and the comprehensive matching unit is used for determining the video matching degree according to the first matching value and the second matching value; the first match value is determined using the following equation:
where the left-hand side denotes the first match value between the two video feature center identifications, γ = [γ_1, γ_2, …, γ_{M−2}] denotes the identification of the query video, and ω = [ω_1, ω_2, …, ω_{M−2}] denotes any video feature center identification in the video database;
the second match value is determined using the following equation:
where the left-hand side denotes the second match value between the two video feature center identifications;
the determination of the video matching degree is performed by using a matching factor, and the matching factor is determined by using the following formula:
where the left-hand side denotes the matching factor between the two videos. If the matching factor is smaller than the set threshold, the two videos are considered matched; otherwise they do not match, and the search of the video database continues.
Further, the feature center identification performance evaluation module evaluates the performance of the video feature center identification matching module by means of an evaluation factor, determined using the following formula: where ZC denotes the value of the evaluation factor, T1 the number of retrieved videos whose content is consistent with the query video, T2 the number of videos in the video database whose content is consistent with the query video, T3 the number of retrieved videos whose content is inconsistent with the query video, and T4 the number of videos in the video database whose content is inconsistent with the query video. The larger the evaluation factor, the better the performance of the video feature center identification matching module.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit its scope of protection. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope.
Claims (7)
1. A multimedia video teaching platform based on video feature center recognition is characterized by comprising a video input unit, a video feature center recognition and authentication unit and a video output unit, wherein the video input unit is used for inputting a multimedia teaching video, the video feature center recognition and authentication unit is used for carrying out video feature center recognition and authentication on the input multimedia teaching video to obtain a matching video of the input video, and the video output unit is used for playing the matching video; the video feature center identification authentication unit comprises a video feature center identification extraction module, a video feature center identification matching module and a feature center identification performance evaluation module, wherein the video feature center identification extraction module is used for extracting the features of videos and generating video feature center identification features, and the video feature center identification matching module is used for comparing whether the contents of the two videos are consistent or not according to the video feature center identification features to obtain a video matched with the query video; the characteristic center identification performance evaluation module is used for evaluating the performance of the video characteristic center identification matching module.
2. The multimedia video teaching platform according to claim 1, wherein the video feature center recognition and extraction module comprises a video decoding unit, a feature extraction unit, and a feature center recognition modeling unit, the video decoding unit is configured to decode an original video sequence to obtain a YUV sequence, the feature extraction unit is configured to extract features of a video according to the YUV sequence, and the feature center recognition modeling unit is configured to build a feature center recognition model according to the extracted video features to obtain video feature center recognition.
3. The multimedia video teaching platform based on video feature center recognition as claimed in claim 2, wherein the feature extraction unit comprises a feature extraction subunit and a frame rate conversion subunit, the feature extraction subunit is configured to extract features of the video at the original frame rate, and the frame rate conversion subunit is configured to convert the video at the original frame rate into a fixed frame rate; the feature extraction subunit is configured to extract features of the video at the original frame rate, and specifically includes:
a. extracting brightness information Y from the YUV sequence to form a new video sequence;
b. Assume each video frame is M × N pixels, so its geometric center is (M/2, N/2); take this geometric center as the coordinate origin O. Let f_k(x, y) be the brightness value at position (x, y) of the k-th video frame relative to O, with f_k(x, y) in the range [0, 255]. From the brightness values, compute the feature center (c_xk, c_yk) of each video frame as the brightness-weighted centroid: c_xk = Σ_{x,y} x·f_k(x, y) / Σ_{x,y} f_k(x, y), and likewise c_yk with y in place of x in the numerator;
c. Based on the feature center, compute the feature center angle β_k = arctan(c_yk / c_xk), where β_k denotes the feature center angle of the k-th video frame. Compute the feature center angles of all video frames of the whole video sequence and use them to construct the one-dimensional feature vector β = [β_1, β_2, …, β_K], where K is the number of video frames in the video sequence.
4. The multimedia video teaching platform based on video feature center recognition as claimed in claim 3, wherein the frame rate conversion subunit is configured to convert the video at the original frame rate into a fixed frame rate, specifically: a. Let Q be the frame rate of the original video sequence and P the fixed frame rate after conversion; the feature center angle θ_f of the f-th frame at frame rate P is converted from the feature center angles β_k and β_{k+1} of two consecutive frames at frame rate Q.
b. Construct the one-dimensional feature vector θ from the feature center angles of all the converted video frames: θ = [θ_1, θ_2, …, θ_M], where M is the number of video frames in the video at frame rate P; the feature vector θ is the extracted video feature.
5. The multimedia video teaching platform based on video feature center recognition according to claim 4, wherein the feature center recognition modeling unit is configured to establish a feature center recognition model according to the extracted video features, specifically:
a. Compute the feature center relative angle γ_i as γ_i = θ_{i+2} + θ_{i+1} − θ_i;
b. Compute the feature center relative angles over the whole video sequence and establish the video feature center identification model from them: the video feature center identification is γ = [γ_1, γ_2, …, γ_{M−2}].
6. The multimedia video teaching platform based on video feature center recognition according to claim 5, wherein the video feature center recognition matching module comprises a first matching unit, a second matching unit and a comprehensive matching unit, the first matching unit is used for calculating a first matching value between the video feature center recognition, the second matching unit is used for calculating a second matching value between the video feature center recognition, and the comprehensive matching unit is used for determining a video matching degree according to the first matching value and the second matching value; the first match value is determined using the following equation:
where the left-hand side denotes the first match value between the two video feature center identifications, γ = [γ_1, γ_2, …, γ_{M−2}] denotes the identification of the query video, and ω = [ω_1, ω_2, …, ω_{M−2}] denotes any video feature center identification in the video database;
the second match value is determined using the following equation:
where the left-hand side denotes the second match value between the two video feature center identifications;
the determination of the video matching degree is performed by using a matching factor, and the matching factor is determined by using the following formula:
where the left-hand side denotes the matching factor between the two videos. If the matching factor is smaller than the set threshold, the two videos are considered matched; otherwise they do not match, and the search of the video database continues.
7. The multimedia video teaching platform based on video feature center recognition according to claim 6, wherein the feature center identification performance evaluation module is configured to evaluate the performance of the video feature center identification matching module by means of an evaluation factor, determined using the following formula: where ZC denotes the value of the evaluation factor, T1 the number of retrieved videos whose content is consistent with the query video, T2 the number of videos in the video database whose content is consistent with the query video, T3 the number of retrieved videos whose content is inconsistent with the query video, and T4 the number of videos in the video database whose content is inconsistent with the query video. The larger the evaluation factor, the better the performance of the video feature center identification matching module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811650071.0A CN111385646A (en) | 2018-12-31 | 2018-12-31 | Multimedia video teaching platform based on video feature center recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111385646A (en) | 2020-07-07 |
Family
ID=71216898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811650071.0A Pending CN111385646A (en) | 2018-12-31 | 2018-12-31 | Multimedia video teaching platform based on video feature center recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111385646A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107704570A (en) * | 2017-09-30 | 2018-02-16 | 韦彩霞 | A kind of efficient multimedia teaching management system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200707 |