EP2894850A1 - Video playback system and method - Google Patents


Info

Publication number: EP2894850A1
Authority: EP (European Patent Office)
Prior art keywords: video, text, frame, stream, format
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: EP13834970.9A
Other languages: German (de), French (fr)
Other versions: EP2894850A4 (en), EP2894850B1 (en)
Inventors: Wei Cheng, Wei Zhang, Jie Zhang
Current assignee: ZTE Corp (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: ZTE Corp
Application filed by ZTE Corp; application granted; published as EP2894850A1, EP2894850A4 and EP2894850B1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N 5/445 Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N 5/44504 Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 Burglar, theft or intruder alarms
    • G08B 13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19678 User interface
    • G08B 13/19682 Graphic User Interface [GUI] presenting system data to the user, e.g. information on a screen helping a user interacting with an alarm system
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the matching unit 201 may be actualized by the Central Processing Unit (CPU), the Digital Signal Processor (DSP) or the Field-Programmable Gate Array (FPGA) in the video surveillance center 103;
  • the video frame overlaying format determination unit 202 may be actualized by the CPU, the DSP or the FPGA in the video surveillance center 103;
  • the text frame compensation unit 203 may be actualized by the CPU, the DSP or the FPGA in the video surveillance center 103.
  • Step 301: start playing videos; the video surveillance center receives the video stream and the text stream requested from the streaming media server.
  • Step 302: buffer the video stream.
  • The text stream obtained by the video surveillance center always lags behind the video stream, so the video stream needs to be buffered.
  • Step 303: buffer the text stream. Whenever a video frame arrives, a text frame with a timestamp equal to that of the arrived video frame must be matched from the text stream queue, so the text stream also needs to be buffered.
  • Step 304: determine whether buffering is complete. The determination is based on whether the number of buffered video frames is larger than n, where n is adjusted according to the parameters of the intelligent analysis system in the system. If the number of buffered video frames is larger than n, buffering is complete and the flow proceeds to step 305.
  • Step 305: extract the video frame, and match the video frame with a text frame.
  • Step 306: determine whether the matching is successful; if yes, proceed to step 313, otherwise proceed to step 307.
  • Step 307: determine the overlaying format of the unmatched video frames.
  • Step 308: enter the motion compensation format.
  • Step 309: determine the motion compensation format. If In-2 and In-1 exist while In+1 does not exist, it is the external expansion compensation format, and the flow proceeds to step 310; if both In-1 and In+1 exist, it is the interpolation format, and the flow proceeds to step 311.
  • Step 310: perform external expansion compensation, and calculate the compensation information Sd: Sd = S(In-1) - S(In-2).
  • S represents the location where text frame I is to be labeled in video frame f, including the location information of a plurality of points required to determine the location for labeling text frame I.
  • Step 311: perform interpolation compensation, and calculate the compensation information Sd: Sd = (S(In+1) - S(In-1)) / 2.
  • Step 313: perform video overlaying processing, label the information of the text frame onto the corresponding video frame, and play the video. After step 313 is completed, return to step 305 and start a new round of video overlaying.
  • The embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to magnetic disk storage and optical storage) containing computer-usable program code.
  • The present invention is described with reference to the flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present invention.
  • Computer program instructions may be used to implement each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams.
  • These computer program instructions may be provided to a general-purpose computer, a dedicated computer, an embedded processor or the processor of another programmable data processing device to produce a machine, so that the instructions executed by the computer or the processor of the other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular way, so that the instructions stored in the computer-readable memory produce a manufactured article including instruction means, and the instruction means implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are carried out on the computer or other programmable device to produce computer-implemented processing; thereby, the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
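The playback loop in steps 301 through 313 can be summarized in a minimal Python sketch. This is an illustration, not the patented implementation: timestamps are integers, each text frame is reduced to a single numeric label location, compensated text frames are reused for later compensation, and the function and variable names are invented for the sketch.

```python
def overlay_stream(video_ts, text):
    """Sketch of steps 301-313: match each buffered video frame to a text
    frame by timestamp; if the text frame is missing, interpolate from the
    neighbors (step 311) or extrapolate from the previous two (step 310).

    video_ts: ordered list of video-frame timestamps
    text:     {timestamp: label location} for the received text stream
    Returns {timestamp: label location or None (non-text overlaying)}.
    """
    out = {}
    for i, t in enumerate(video_ts):
        if t in text:
            out[t] = text[t]  # matched: direct overlay (step 313)
            continue
        prev2 = out.get(video_ts[i - 2]) if i >= 2 else None
        prev1 = out.get(video_ts[i - 1]) if i >= 1 else None
        nxt = text.get(video_ts[i + 1]) if i + 1 < len(video_ts) else None
        if prev1 is not None and nxt is not None:
            out[t] = prev1 + (nxt - prev1) / 2.0   # interpolation (step 311)
        elif prev2 is not None and prev1 is not None:
            out[t] = prev1 + (prev1 - prev2)        # external expansion (step 310)
        else:
            out[t] = None                            # non-text overlaying format
    return out
```

With frames 0..4 and text frames at 0, 1 and 3, frame 2 is interpolated between its neighbors and frame 4 is extrapolated from frames 2 and 3.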


Abstract

The embodiments of the present invention disclose a system and method for playing videos, and the system includes: a streaming media server, configured to send a video stream to a real-time video intelligent analysis system and video surveillance center, and receive and buffer a text stream corresponding to the video stream sent by the real-time video intelligent analysis system, and send the text stream to the video surveillance center; the real-time video intelligent analysis system, configured to perform real-time analysis on the video stream output by the streaming media server, and output the text stream corresponding to the video stream to the streaming media server; the video surveillance center, configured to request for and receive the video stream and the text stream output by the streaming media server, compensate for a lost text frame in the text stream according to the video stream, and output mixed videos after overlaying the video stream and the text stream after the compensation.

Description

    Technical Field
  • The present invention relates to video playback field, and particularly, to a system and method for playing videos.
  • Background of the Related Art
  • In a video surveillance system, basic functions such as real-time video browsing and real-time video recording can be provided. However, if labeling and corresponding alarms are to be implemented for a target video, real-time analysis must be performed on the video.
  • Moreover, some specific application scenarios, such as human face tracking and recognition, require very high accuracy and stability in video tracking. Here, accuracy means that the labeling location in video tracking is accurate, while stability means that the labeling information is consecutive in video image displaying.
  • In the related art, there is no solution for analyzing, tracking and labeling a specific video accurately.
  • Summary of the Invention
  • The embodiments of the present invention provide a system and a method for playing videos, which at least solves the problem that the object being tracked cannot be labeled accurately and stably after the video surveillance system analyzes the real-time videos.
  • The embodiments of present invention provide a system for playing videos, comprising:
    • a streaming media server, configured to buffer and send a video stream to a real-time video intelligent analysis system and a video surveillance center, and receive and buffer a text stream corresponding to the video stream sent by the real-time video intelligent analysis system, and send the text stream to the video surveillance center;
    • the real-time video intelligent analysis system, configured to perform real-time analysis on the video stream output by the streaming media server, and output the text stream corresponding to the video stream to the streaming media server;
    • the video surveillance center, configured to request for and receive the video stream and the text stream output by the streaming media server, compensate for a lost text frame in the text stream according to the video stream, and output mixed videos after overlaying the video stream and the text stream after compensation.
  • Preferably, the video surveillance center comprises:
    • a matching unit, configured to match the video stream received from the streaming media server with the text stream corresponding to the video stream;
    • a video frame overlaying format determination unit, configured to determine a video frame overlaying format of the video frame according to a matched result of the matching unit; and
    • a text frame compensation unit, configured to compensate for the lost text frame according to a result of determining the overlaying format.
  • Preferably, the matching unit is further configured to, when a quantity of the buffered video frames received by the video surveillance center reaches a preset threshold, match the video frame in the video stream with the text frame having a timestamp equal to the timestamp of the video frame in the text stream; wherein,
    if there are corresponding text frames matching with all of three consecutive video frames, the matching is successful, otherwise, the matching is failed.
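The matching criterion above can be sketched as a small Python helper. The boolean-list return value, the threshold parameter and the names are assumptions of this sketch; the rule itself follows the claim: wait until the buffer is full, then a frame matches successfully only when the triple around it all have equal-timestamp text frames.

```python
def matching_succeeds(video_ts, text_ts, threshold):
    """Per the claim: once at least `threshold` video frames are buffered,
    frame f_n matches successfully only if f_{n-1}, f_n and f_{n+1} all
    have text frames with equal timestamps. Returns one boolean per
    interior frame, or None while buffering is incomplete."""
    if len(video_ts) < threshold:
        return None  # buffering not complete yet
    text = set(text_ts)
    return [all(t in text for t in video_ts[i - 1:i + 2])
            for i in range(1, len(video_ts) - 1)]
```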
  • Preferably, the video frame overlaying format determination unit is further configured to, when the video frame in the video stream fails to match with the text frame having equal timestamp to that of the video frame in the text stream, determine that the video frame overlaying format is a non-text overlaying format or a motion compensation format according to a corresponding relationship between the video frame and the text frame; wherein,
    if none of the three consecutive video frames have corresponding text frames, the video frame overlaying format is the non-text overlaying format, otherwise, the video frame overlaying format is the motion compensation format.
  • Preferably, when the video frame overlaying format is the motion compensation format, the text frame compensation unit is further configured to: when the first two frames in the three consecutive video frames have corresponding text frames, compensate for the text frame corresponding to the third video frame; and when the first frame and the third frame in the three consecutive video frames have corresponding text frames, compensate for the text frame corresponding to the second video frame in the middle.
  • Preferably, the system comprises a plurality of video surveillance centers.
  • The embodiments of the present invention provide a method for playing videos, comprising:
    • a streaming media server sending an original video stream to a real-time video intelligent analysis system;
    • the real-time video intelligent analysis system performing real-time analysis on the video stream output by the streaming media server, and outputting a text stream corresponding to the video stream to the streaming media server;
    • the streaming media server storing the original video stream and the text stream respectively, and when receiving a request for mixing videos sent by a video surveillance center, the streaming media server sending the video stream and the text stream respectively to the video surveillance center;
    • the video surveillance center receiving the video stream and the text stream output by the streaming media server, compensating for a lost text frame in the text stream according to the video stream, and outputting a mixed video after overlaying the video stream and the text stream after compensation.
  • Preferably, the step of the video surveillance center compensating for a lost text frame in the text stream according to the video stream comprises:
    • matching the video stream received from the streaming media server with the text stream corresponding to the video stream;
    • determining a video frame overlaying format according to a matched result; and
    • compensating for the lost text frame according to a result of determining the overlaying format.
  • Preferably, the step of matching the video stream received from the streaming media server with the text stream corresponding to the video stream comprises:
    • when a quantity of the video frames received by the video surveillance center reaches a preset threshold, matching the video frame in the video stream with the text frame in the text stream having equal timestamp to that of the video frame; wherein,
    • if all of three consecutive video frames match successfully with the text frames, the matching is successful, otherwise, the matching is failed.
  • Preferably, the step of determining a video frame overlaying format according to a matched result comprises:
    • when the video frame in the video stream matches successfully with the text frame in the text stream having equal timestamp to that of the video frame, directly overlaying the videos and outputting mixed videos;
    • when the video frame in the video stream fails to match with the text frame in the text stream having equal timestamp to that of the video frame, determining that the video frame overlaying format is a non-text overlaying format or a motion compensation format according to a corresponding relationship between the video frame and the text frame; wherein,
    • if none of the three consecutive video frames have corresponding text frames, the video frame overlaying format is the non-text overlaying format, otherwise, the video frame overlaying format is the motion compensation format.
  • Preferably, when the video frame overlaying format is the non-text overlaying format, determining that there is no lost text frame;
    when the video frame overlaying format is the motion compensation format, the step of compensating for a lost text frame according to a result of determining the overlaying format comprises: when a first two frames in the three consecutive video frames have corresponding text frames, compensating for the text frame corresponding to the third video frame; when the first frame and the third frame in the three consecutive video frames have corresponding text frames, compensating for the text frame corresponding to the second video frame in the middle.
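The two compensation cases in this claim can be sketched as one Python function. Representing the three consecutive labels as a tuple with None for a lost text frame, and the label location as a single number, are assumptions of the sketch; the arithmetic follows the external expansion and interpolation formulas given in the description.

```python
def compensate_missing(labels):
    """labels: locations for three consecutive frames, None where the text
    frame is lost. First two present -> extrapolate the third (external
    expansion); first and third present -> interpolate the middle."""
    a, b, c = labels
    if a is not None and b is not None and c is None:
        return 2 * b - a            # external expansion: b + (b - a)
    if a is not None and b is None and c is not None:
        return a + (c - a) / 2.0    # interpolation: midpoint of neighbors
    return None                     # no compensation rule applies
```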
  • According to the system for playing videos provided by the embodiments of the present invention, the text stream labeling the video stream is acquired by analyzing the video stream in real time, the complete text stream is acquired by compensating for the lost text frame in the text stream, and the video stream and the text stream are overlaid and the mixed video is output at last, thereby the problem that the object being tracked cannot be labeled accurately and stably after the real-time video is analyzed in the video surveillance system in related art is solved.
  • Brief Description of Drawings
    • FIG. 1 is a structural diagram of a system for playing videos provided by an embodiment of the present invention;
    • FIG. 2 is a structural diagram of a video surveillance center in the system for playing videos provided by an embodiment of the present invention;
    • FIG. 3 is a schematic diagram of a motion compensation mode provided by an embodiment of the present invention; and
    • FIG. 4 is a flowchart diagram of a method for playing videos provided by an embodiment of the present invention.
    Preferred Embodiments of the Present Invention
  • The embodiments of the present invention provide a system and a method for playing videos, which can analyze videos in the video surveillance system accurately in real time and label the object being tracked accurately and stably.
  • The method and the system for playing videos provided by the embodiments of the present invention are described in the following in combination with the accompanying drawings.
  • Referring to FIG. 1, the system for playing videos provided by the embodiment of the present invention mainly comprises:
    • a streaming media server 101, configured to send a video stream to a real-time video intelligent analysis system and a video surveillance center; and receive and buffer a text stream corresponding to the video stream sent by the real-time video intelligent analysis system, and send the text stream to the video surveillance center;
    • the real-time video intelligent analysis system 102, configured to perform real-time analysis on the video stream output by the streaming media server, and output the text stream corresponding to the video stream to the streaming media server; wherein, the text stream includes location information of the label in the video stream;
    • the video surveillance center 103, configured to receive the video stream and the text stream output by the streaming media server, compensate for a lost text frame in the text stream according to the video stream, and output mixed videos after overlaying the video stream and the text stream after the compensation.
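The dataflow among the three components can be illustrated with a toy Python simulation. Everything here is an assumption of the sketch (the function names, the dict-based text frames, and the deterministic "every third text frame is lost" rule); it only shows why the text stream arriving at the surveillance center can have gaps that compensation must fill.

```python
def analysis_system(frame_ts):
    """Stand-in for the real-time intelligent analysis system: returns a
    text frame (timestamp + label location) or None when analysis misses
    a frame. The loss pattern is invented for the illustration."""
    if frame_ts % 3 == 2:  # simulated loss: every third text frame is dropped
        return None
    return {"ts": frame_ts, "loc": (frame_ts, frame_ts)}

def streaming_media_server(video_ts):
    """Buffers the video stream and the corresponding (gappy) text stream,
    then serves both to a requesting video surveillance center."""
    text = [tf for t in video_ts if (tf := analysis_system(t)) is not None]
    return list(video_ts), text

video, text = streaming_media_server(range(6))
```

The surveillance center then receives six video frames but only four text frames, and must compensate for the two lost ones before overlaying.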
  • Preferably, referring to FIG. 2, the video surveillance center 103 comprises:
    • a matching unit 201, configured to match the video stream received from the streaming media server with the text stream corresponding to the video stream;
    • a video frame overlaying format determination unit 202, configured to determine a video frame overlaying format according to the matching result;
    • a text frame compensation unit 203, configured to compensate for a lost text frame according to the determination result of the overlaying format.
  • Preferably, the matching unit 201 is configured to, when the quantity of the buffered video frames received by the video surveillance center 103 reaches a preset threshold, match the video frame in the video stream with the text frame in the text stream having equal timestamp to that of the video frame; wherein, if three consecutive video frames fn-1, fn, and fn+1 all match successfully with the corresponding text frames In-1, In and In+1, the matching is successful, otherwise, the matching is failed.
  • Wherein, f represents video frame, and I represents text frame.
  • Preferably, the video frame overlaying format determination unit 202 is configured to, when the video frame in the video stream fails to match with the text frame in the text stream having equal timestamp to that of the video frame, determine that the video frame overlaying format is a non-text overlaying format or a motion compensation format according to a corresponding relationship between the video frame and the text frame; wherein,
    if none of the three consecutive video frames fn-1, fn, and fn+1 have corresponding text frames In-1, In and In+1, the video frame overlaying format is the non-text overlaying format; otherwise, referring to FIG. 3, the video frame overlaying format is the motion compensation format.
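The format decision described above reduces to a three-way case split on which text frames of the triple are present, which can be sketched as:

```python
def overlaying_format(text_present):
    """text_present: booleans for I_{n-1}, I_n, I_{n+1}. All present ->
    direct overlay; none present -> non-text overlaying format; any other
    mixture -> motion compensation format. Return strings are invented
    labels for this sketch."""
    if all(text_present):
        return "direct"
    if not any(text_present):
        return "non-text"
    return "motion-compensation"
```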
  • Preferably, when the video overlaying format is motion compensation format, the text frame compensation unit 203 is also configured to: when the first two frames in the three consecutive video frames have corresponding text frames, perform external expansion compensation on the text frame corresponding to the third video frame, and acquire a compensated text frame.
  • Wherein, the external expansion compensation is:
    • If In-2 and In-1 exist while In+1 does not exist, external expansion compensation is performed on In, and the compensation information Sd is: Sd = S(In-1) - S(In-2);
    • the compensated text frame is then: S(In) = S(In-1) + Sd;
    • wherein, S represents the location where the text frame I is to be labeled in the video frame f, including the location information of a plurality of points required to determine the location where the text frame is labeled.
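As a worked example of the external expansion formula, S can be modeled as a single (x, y) point; the patent allows a plurality of points, and the single-point representation is an assumption made for brevity here.

```python
def external_expansion(s_in_minus_2, s_in_minus_1):
    """Sd = S(In-1) - S(In-2); compensated S(In) = S(In-1) + Sd.
    Each label location is modeled as an (x, y) pair for this sketch."""
    sd = (s_in_minus_1[0] - s_in_minus_2[0],
          s_in_minus_1[1] - s_in_minus_2[1])
    return (s_in_minus_1[0] + sd[0], s_in_minus_1[1] + sd[1])
```

So a label that moved from (0, 0) to (2, 3) is extrapolated to (4, 6), continuing its motion.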
  • When the first frame and the third frame of the three consecutive video frames have corresponding text frames, perform interpolation compensation on the text frame corresponding to the second video frame in the middle, and acquire the compensated text frame;
    the interpolation compensation is:
    • when In-1 and In+1 exist, interpolation compensation is performed on In, and the compensation information Sd is: Sd = (S(In+1) - S(In-1)) / 2;
    • the compensated text frame is then: S(In) = S(In-1) + Sd.
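Treating S as a list of (x, y) label points, the two compensation formulas can be sketched as below (a minimal illustration; the point-list representation and function names are assumptions, not from the patent):

```python
def extrapolate(s_prev2, s_prev1):
    """External expansion: S(In) = S(In-1) + Sd, with Sd = S(In-1) - S(In-2).

    Each argument is a list of (x, y) points describing where the text
    frame is labeled; the arithmetic is applied point-wise, which gives
    S(In) = 2 * S(In-1) - S(In-2).
    """
    return [(2 * x1 - x2, 2 * y1 - y2)
            for (x2, y2), (x1, y1) in zip(s_prev2, s_prev1)]

def interpolate(s_prev1, s_next1):
    """Interpolation: S(In) = S(In-1) + Sd, with Sd = (S(In+1) - S(In-1)) / 2,
    i.e. the point-wise midpoint of the neighboring label locations."""
    return [((xp + xn) / 2, (yp + yn) / 2)
            for (xp, yp), (xn, yn) in zip(s_prev1, s_next1)]
```

For a label moving 5 pixels to the right each frame, extrapolate([(0, 0)], [(5, 0)]) yields [(10, 0)], and interpolate([(5, 0)], [(15, 0)]) yields [(10.0, 0.0)].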
  • Preferably, the video surveillance center may process a plurality of pairs of video streams and corresponding text streams simultaneously; for instance, it may process 16 such pairs at the same time, performing compensation and output for each pair in parallel.
  • Preferably, there may be a plurality of video surveillance centers according to the demand of the user, so that subscribers can conveniently process a large number of videos.
  • The method for playing videos provided by an embodiment of the present invention mainly comprises the following steps of:
    • a streaming media server sending an original video stream to a real-time video intelligent analysis system;
    • the real-time video intelligent analysis system performing real-time analysis on the video stream output by the streaming media server, and outputting a text stream corresponding to the video stream to the streaming media server;
    • the streaming media server storing the original video stream and the text stream respectively, and, when receiving a request for mixing videos sent by a video surveillance center, sending the video stream and the text stream respectively to that video surveillance center;
    • the video surveillance center receiving the video stream and the text stream output by the streaming media server, compensating for lost text frames in the text stream according to the video stream, and outputting the mixed videos after overlaying the video stream and the compensated text stream.
  • In practical applications, the matching unit 201 may be implemented by the Central Processing Unit (CPU), the Digital Signal Processor (DSP) or the Field-Programmable Gate Array (FPGA) in the video surveillance center 103;
    the video frame overlaying format determination unit 202 may be implemented by the CPU, the DSP or the FPGA in the video surveillance center 103;
    the text frame compensation unit 203 may be implemented by the CPU, the DSP or the FPGA in the video surveillance center 103.
  • Referring to FIG. 4, the detailed flow of the method for playing videos provided by an embodiment of the present invention comprises the following steps.
  • In step 301, start to play videos, and the video surveillance center receives a video stream and a text stream requested from the streaming media server.
  • In step 302, buffer the video stream. As analyzing the videos always takes time in practical systems, the text stream obtained by the video surveillance center generally lags behind the video stream; therefore the video stream needs to be buffered.
  • In step 303, buffer the text stream. Whenever a video frame arrives, a text frame having a timestamp equal to that of the arrived video frame needs to be matched from the text stream queue; therefore, the text stream also needs to be buffered.
  • In step 304, determine whether the buffering is completed. The determination is made based on whether the number of buffered video frames is larger than n, where n needs to be adjusted according to the parameters of the real-time video intelligent analysis system.
  • If the number of the buffered video frames is larger than n, the buffering is completed and it proceeds to step 305.
  • If the number of the buffered video frames is not larger than n, the buffering is not completed and it proceeds to step 302.
  • In step 305, extract the video frame, and match the video frame with the text frame.
  • In step 306, determine whether the matching is successful, if yes, proceed to step 313, otherwise, proceed to step 307.
  • In step 307, determine the overlaying format of the unmatched video frames.
  • If none of the video frames fn-1, fn and fn+1 has a corresponding text frame In-1, In or In+1, it indicates the non-text overlaying format; the overlaying operation is not performed, and the flow returns to step 305.
  • Otherwise, proceed to step 308.
  • In step 308, enter the motion compensation format.
  • In step 309, determine the motion compensation format.
  • If In-2 and In-1 exist while In+1 does not exist, it is the external expansion compensation format, and the flow proceeds to step 310; if both In-1 and In+1 exist, it is the interpolation format, and the flow proceeds to step 311.
  • In step 310, perform external expansion compensation, and calculate the compensation information Sd: Sd = S(In-1) - S(In-2).
  • Where S represents the location where the text frame I is to be labeled in the video frame f, including location information of a plurality of points required to determine the location for labeling the text frame I.
  • In step 311, perform interpolation compensation, and calculate the compensation information Sd: Sd = (S(In+1) - S(In-1)) / 2.
  • In step 312, acquire the compensated text frame S(In), which is: S(In) = S(In-1) + Sd.
  • In step 313, perform video overlaying processing, label the information of the text frame into the corresponding video frame, and play the video.
  • After step 313 is completed, return back to step 305 and start a new round of video overlaying.
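The loop of steps 305-313 can be condensed into the following sketch (hypothetical names and data shapes, not from the patent; text frames are kept as a timestamp-to-label-points dictionary, and compensated frames are cached so that later frames can extrapolate from them):

```python
def overlay_stream(video_ts, text):
    """Pair each video timestamp with its (possibly compensated) label points.

    video_ts: ordered video-frame timestamps; text: dict mapping a
    timestamp to S, the list of (x, y) label points of its text frame.
    Returns a list of (timestamp, S or None) pairs; None means the
    non-text overlaying format (no overlaying is performed).
    """
    out = []
    for n, ts in enumerate(video_ts):
        if ts in text:                                   # step 306: matched
            out.append((ts, text[ts]))
            continue
        prev2 = video_ts[n - 2] if n >= 2 else None
        prev1 = video_ts[n - 1] if n >= 1 else None
        nxt = video_ts[n + 1] if n + 1 < len(video_ts) else None
        if prev2 in text and prev1 in text and nxt not in text:
            # step 310: external expansion, Sd = S(In-1) - S(In-2)
            s = [(2 * x1 - x2, 2 * y1 - y2)
                 for (x2, y2), (x1, y1) in zip(text[prev2], text[prev1])]
        elif prev1 in text and nxt in text:
            # step 311: interpolation, Sd = (S(In+1) - S(In-1)) / 2
            s = [((xp + xn) / 2, (yp + yn) / 2)
                 for (xp, yp), (xn, yn) in zip(text[prev1], text[nxt])]
        else:
            out.append((ts, None))                       # non-text format
            continue
        text[ts] = s                                     # cache compensation
        out.append((ts, s))                              # step 313: overlay
    return out
```

With text frames at timestamps 0 and 40 but not 80, the frame at 80 is extrapolated; with text frames at 40 and 120 around a missing 80, it is interpolated as the midpoint of its neighbors.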
  • In summary, according to the system for playing videos provided by the embodiments of the present invention, the text stream labeling the video stream is acquired by analyzing the video stream in real time, the complete text stream is acquired by compensating for the lost text frames, and the video stream and the text stream are overlaid and the mixed video is output at last; thus, after the video surveillance system analyzes the real-time video, it can label the object being tracked accurately and stably.
  • Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may adopt the form of complete hardware embodiments, complete software embodiments, or embodiments combining software with hardware. Moreover, the present invention may adopt the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to magnetic disk storage, optical storage and so on) containing computer-usable program codes.
  • The present invention is described with reference to the flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present invention. It should be understood that computer program instructions may be used to implement each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks of the flowcharts and/or block diagrams. These computer program instructions may be provided to a general-purpose computer, a dedicated computer, an embedded processor or the processor of another programmable data processing device to generate a machine, so that the instructions executed by the computer or the processor of the other programmable data processing device generate an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be stored in a computer readable memory which can guide a computer or other programmable data processing device to work in a certain way, so that the instructions stored in the computer readable memory generate a manufactured product including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are carried out on the computer or other programmable device to generate computer-implemented processing; thereby, the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • Obviously, those skilled in the art may make various changes and transformations to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and transformations of the present invention belong to the scope of the claims of the present invention and equivalent technologies thereof, the present invention also intends to comprise these changes and transformations.
  • Industrial Applicability
  • According to the system for playing videos provided by the embodiments of the present invention, the text stream labeling the video stream is acquired by analyzing the video stream in real time, the complete text stream is acquired by compensating the lost text frame in the text stream, and the video stream and the text stream are overlaid and the mixed video is output at last, thereby the problem that the object being tracked cannot be labeled accurately and stably after the real-time video is analyzed in the video surveillance system in related art is solved.

Claims (11)

  1. A system for playing videos, comprising:
    a streaming media server, configured to buffer and send a video stream to a real-time video intelligent analysis system and a video surveillance center, and receive and buffer a text stream corresponding to the video stream sent by the real-time video intelligent analysis system, and send the text stream to the video surveillance center;
    the real-time video intelligent analysis system, configured to perform real-time analysis on the video stream output by the streaming media server, and output the text stream corresponding to the video stream to the streaming media server;
    the video surveillance center, configured to request for and receive the video stream and the text stream output by the streaming media server, compensate for a lost text frame in the text stream according to the video stream, and output mixed videos after overlaying the video stream and the text stream after compensation.
  2. The system for playing videos according to claim 1, wherein, the video surveillance center comprises:
    a matching unit, configured to match the video stream received from the streaming media server with the text stream corresponding to the video stream;
    a video frame overlaying format determination unit, configured to determine a video frame overlaying format of the video frame according to a matched result of the matching unit; and
    a text frame compensation unit, configured to compensate for the lost text frame according to a result of determining the overlaying format.
  3. The system for playing videos according to claim 2, wherein,
    the matching unit is further configured to, when a quantity of the buffered video frames received by the video surveillance center reaches a preset threshold, match the video frame in the video stream with the text frame having a timestamp equal to the timestamp of the video frame in the text stream; wherein,
    if there are corresponding text frames matching with all of three consecutive video frames, the matching is successful; otherwise, the matching fails.
  4. The system for playing videos according to claim 2, wherein,
    the video frame overlaying format determination unit is further configured to, when the video frame in the video stream fails to match with the text frame having equal timestamp to that of the video frame in the text stream, determine that the video frame overlaying format is a non-text overlaying format or a motion compensation format according to a corresponding relationship between the video frame and the text frame; wherein,
    if none of the three consecutive video frames have corresponding text frames, the video frame overlaying format is the non-text overlaying format, otherwise, the video frame overlaying format is the motion compensation format.
  5. The system for playing videos according to claim 4, wherein,
    when the video frame overlaying format is the motion compensation format, the text frame compensation unit is further configured to, when a first two frames in the three consecutive video frames have corresponding text frames, compensate for the text frame corresponding to the third video frame; when the first frame and the third frame in the three consecutive video frames exist, compensate for the text frame corresponding to the second video frame in the middle.
  6. The system for playing videos according to claim 1, wherein, the system comprises a plurality of video surveillance centers.
  7. A method for playing videos, comprising:
    a streaming media server sending an original video stream to a real-time video intelligent analysis system;
    the real-time video intelligent analysis system performing real-time analysis on the video stream output by the streaming media server, and outputting a text stream corresponding to the video stream to the streaming media server;
    the streaming media server storing the original video stream and the text stream respectively, and when receiving a request for mixing videos sent by a video surveillance center, the streaming media server sending the video stream and the text stream respectively to the video surveillance center;
    the video surveillance center receiving the video stream and the text stream output by the streaming media server, compensating for a lost text frame in the text stream according to the video stream, and outputting a mixed video after overlaying the video stream and the text stream after compensation.
  8. The method according to claim 7, wherein, the step of the video surveillance center compensating for a lost text frame in the text stream according to the video stream comprises:
    matching the video stream received from the streaming media server with the text stream corresponding to the video stream;
    determining a video frame overlaying format according to a matched result; and
    compensating for the lost text frame according to a result of determining the overlaying format.
  9. The method according to claim 8, wherein, the step of matching the video stream received from the streaming media server with the text stream corresponding to the video stream comprises:
    when a quantity of the video frames received by the video surveillance center reaches a preset threshold, matching the video frame in the video stream with the text frame in the text stream having equal timestamp to that of the video frame; wherein,
    if all of three consecutive video frames match successfully with the text frames, the matching is successful; otherwise, the matching fails.
  10. The method according to claim 8, wherein, the step of determining a video frame overlaying format according to a matched result comprises:
    when the video frame in the video stream matches successfully with the text frame in the text stream having equal timestamp to that of the video frame, directly overlaying the videos and outputting mixed videos;
    when the video frame in the video stream fails to match with the text frame in the text stream having equal timestamp to that of the video frame, determining that the video frame overlaying format is a non-text overlaying format or a motion compensation format according to a corresponding relationship between the video frame and the text frame; wherein,
    if none of the three consecutive video frames have corresponding text frames, the video frame overlaying format is the non-text overlaying format, otherwise, the video frame overlaying format is the motion compensation format.
  11. The method according to claim 10, wherein,
    when the video frame overlaying format is the non-text overlaying format, determining that there is no lost text frame;
    when the video frame overlaying format is the motion compensation format, the step of compensating for a lost text frame according to a result of determining the overlaying format comprises: when a first two frames in the three consecutive video frames have corresponding text frames, compensating for the text frame corresponding to the third video frame; when the first frame and the third frame in the three consecutive video frames have corresponding text frames, compensating for the text frame corresponding to the second video frame in the middle.
EP13834970.9A 2012-09-05 2013-08-09 Video playback system and method Active EP2894850B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210324651.7A CN103685975B (en) 2012-09-05 2012-09-05 A kind of audio/video player system and method
PCT/CN2013/081190 WO2014036877A1 (en) 2012-09-05 2013-08-09 Video playback system and method

Publications (3)

Publication Number Publication Date
EP2894850A1 true EP2894850A1 (en) 2015-07-15
EP2894850A4 EP2894850A4 (en) 2015-09-02
EP2894850B1 EP2894850B1 (en) 2018-05-30

Family

ID=50236515

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13834970.9A Active EP2894850B1 (en) 2012-09-05 2013-08-09 Video playback system and method

Country Status (4)

Country Link
US (1) US9426403B2 (en)
EP (1) EP2894850B1 (en)
CN (1) CN103685975B (en)
WO (1) WO2014036877A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323501A (en) * 2014-07-28 2016-02-10 中兴通讯股份有限公司 Concentrated video moving object marking method, playing method and apparatus thereof
CN111064984B (en) * 2018-10-16 2022-02-08 杭州海康威视数字技术股份有限公司 Intelligent information superposition display method and device for video frame and hard disk video recorder
CN114598893B (en) * 2020-11-19 2024-04-30 京东方科技集团股份有限公司 Text video realization method and system, electronic equipment and storage medium
TWI786694B (en) * 2021-06-23 2022-12-11 中強光電股份有限公司 Data streaming method and data streaming system
CN115550608B (en) * 2022-09-19 2024-09-06 国网智能科技股份有限公司 Multi-user high-concurrency AI video real-time fusion display control method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7679649B2 (en) * 2002-04-19 2010-03-16 Ralston John D Methods for deploying video monitoring applications and services across heterogenous networks
WO2004044710A2 (en) * 2002-11-11 2004-05-27 Supracomm, Inc. Multicast videoconferencing
CN100452871C (en) * 2004-10-12 2009-01-14 国际商业机器公司 Video analysis, archiving and alerting methods and apparatus for a video surveillance system
US20060171453A1 (en) * 2005-01-04 2006-08-03 Rohlfing Thomas R Video surveillance system
CN101043507A (en) * 2006-03-22 2007-09-26 英业达股份有限公司 Method and system for processing monitor
US20090244285A1 (en) * 2008-04-01 2009-10-01 Honeywell International, Inc. System and method for providing creation of on-side text for video surveillance
CN102118619B (en) * 2009-12-31 2012-08-29 华为技术有限公司 Video signal compensating method, device and system
CN102204248B (en) * 2011-05-18 2013-08-14 华为技术有限公司 Video data processing method, video image displaying method and device thereof
CN102510478A (en) 2011-10-28 2012-06-20 唐玉勇 Intelligent distribution control system and method used for 'Safe City' project

Also Published As

Publication number Publication date
EP2894850A4 (en) 2015-09-02
US9426403B2 (en) 2016-08-23
US20150229868A1 (en) 2015-08-13
EP2894850B1 (en) 2018-05-30
WO2014036877A1 (en) 2014-03-13
CN103685975B (en) 2017-08-25
CN103685975A (en) 2014-03-26

Similar Documents

Publication Publication Date Title
EP2894850B1 (en) Video playback system and method
US9118886B2 (en) Annotating general objects in video
US20220375225A1 (en) Video Segmentation Method and Apparatus, Device, and Medium
CN106815254B (en) Data processing method and device
CN109144858B (en) Fluency detection method and device, computing equipment and storage medium
EP3049953A1 (en) Multiple data source aggregation for efficient synchronous multi-device media consumption
CN111401228B (en) Video target labeling method and device and electronic equipment
US11037301B2 (en) Target object detection method, readable storage medium, and electronic device
CN107623862A (en) multimedia information push control method, device and server
CN105828179A (en) Video positioning method and device
CN112511818B (en) Video playing quality detection method and device
CN114245229B (en) Short video production method, device, equipment and storage medium
CN112258214A (en) Video delivery method and device and server
JP2024502516A (en) Data annotation methods, apparatus, systems, devices and storage media
US20210051379A1 (en) Selective playback of audio at normal speed during trick play operations
US9934449B2 (en) Methods and systems for detecting topic transitions in a multimedia content
CN108710918B (en) Fusion method and device for multi-mode information of live video
CN114222083A (en) Method and device for multi-channel audio, video and radar mixed synchronous playback
CN118276481A (en) Intelligent driving debugging method, device, system, electronic equipment and storage medium
CN111970560B (en) Video acquisition method and device, electronic equipment and storage medium
CN106878773B (en) Electronic device, video processing method and apparatus, and storage medium
US8306992B2 (en) System for determining content topicality, and method and program thereof
CN112738629B (en) Video display method and device, electronic equipment and storage medium
CN112601129B (en) Video interaction system, method and receiving terminal
CN114760444A (en) Video image processing and determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150305

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RA4 Supplementary search report drawn up and despatched (corrected)

Effective date: 20150730

RIC1 Information provided on ipc code assigned before grant

Ipc: G08B 13/196 20060101ALI20150724BHEP

Ipc: H04N 7/18 20060101AFI20150724BHEP

17Q First examination report despatched

Effective date: 20150903

DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20180123

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1004837

Country of ref document: AT

Kind code of ref document: T

Effective date: 20180615

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602013038348

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20180530

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180830

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180830

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180831

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1004837

Country of ref document: AT

Kind code of ref document: T

Effective date: 20180530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602013038348

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20180830

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180831

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180809

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180831

26N No opposition filed

Effective date: 20190301

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20180831

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180831

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180830

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180809

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20130809

Ref country code: MK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180930

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240612

Year of fee payment: 12