WO2010122489A1 - Displaying video sequences - Google Patents


Info

Publication number
WO2010122489A1
Authority
WO
Grant status
Application
Patent type
Prior art keywords
video sequence
event
device
start
detected
Prior art date
Application number
PCT/IB2010/051717
Other languages
French (fr)
Inventor
Huanhuan Zhang
Xin Chen
Yunqiang Liu
Bei Wang
Jingwei Tan
Jun Shi
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/45 — Details of television systems; receiver circuitry for displaying additional information; picture in picture
    • H04N 21/4316 — Generation of visual interfaces for content selection or interaction; rendering for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N 21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/812 — Monomedia components of content involving advertisement data

Abstract

The present invention aims to provide a method, a system and an apparatus for displaying a first video sequence together with a second video sequence. While the second video sequence is displayed on a device (110), a first unit (101) detects the start of an event within the second video sequence, and the device (110) then displays the first video sequence together with the second video sequence. By applying the method, a first video sequence with varied content can easily be provided to viewers during the display of the second video sequence, so that they can enjoy two video sequences at the same time.

Description

DISPLAYING VIDEO SEQUENCES

FIELD OF THE INVENTION

The present invention relates to the display of video sequences.

BACKGROUND OF THE INVENTION

Watching TV has become an important activity for relaxation and amusement. By watching TV, people can enjoy themselves and obtain a great deal of new information.

People have various requirements when watching TV, but the content of TV programs is fixed. Consequently, although viewers can watch several channels simultaneously through the picture-in-picture or nine-picture modes provided by the TV, their requirements sometimes cannot be fulfilled. For example, when viewers want to enjoy personal content or content from other sources, they have to manually switch the TV from receiving and displaying TV programs to receiving and displaying video from that other source.

SUMMARY OF THE INVENTION

The present invention aims to provide a method, a system and an apparatus for displaying a first video sequence together with a second video sequence.

According to an embodiment of the present invention, a second video sequence is displayed on a device, and a method of displaying a first video sequence on the device is proposed. The method comprises the step of detecting the start of an event within said second video sequence; and the step of displaying said first video sequence on said device together with said second video sequence when the start of said event has been detected.

According to another embodiment of the present invention, a second video sequence is displayed on a device, and a method of controlling the display of a first video sequence is proposed. The method comprises the step of detecting the start of an event within said second video sequence; and the step of sending first data including said first video sequence to said device when the start of said event has been detected.

According to another embodiment of the present invention, a second video sequence is displayed on a device, and an apparatus for controlling the display of a first video sequence is proposed. The apparatus comprises a first unit for detecting the start of an event within said second video sequence; and a second unit for sending first data including said first video sequence to said device when the start of said event has been detected.

According to another embodiment of the present invention, a system for displaying a first video sequence is proposed. The system comprises a device for displaying a second video sequence; and an apparatus for controlling the display of a first video sequence, the apparatus comprising a first unit for detecting the start of an event within said second video sequence; and a second unit for sending first data including said first video sequence to said device when the start of said event has been detected.

By applying the method, system and apparatus of the present invention, a first video sequence with varied content can easily be provided to viewers during the display of the second video sequence, so that they can enjoy two video sequences at the same time. The first video sequence could be, for example, a physical exercise instruction program: by applying the method, system and apparatus of the present invention, viewers can do physical exercises by following the first video sequence while watching the second video sequence at the same time.

DESCRIPTION OF THE DRAWINGS

The above and other objects and features of the present invention will become more apparent from the following detailed description considered in connection with the accompanying drawings, in which:

FIG.1 is a schematic block diagram of an apparatus 100 according to an embodiment of the present invention;

FIG.2 is a flowchart showing the function of the apparatus 100 in FIG.1;

FIG.3 is a schematic block diagram of another apparatus 300 according to another embodiment of the present invention;

FIG.4 is a flowchart showing another function of the apparatus 100 or 300;

FIG.5 is a schematic block diagram of a system 500 according to another embodiment of the present invention.

The same reference numerals are used to denote similar parts throughout the Figures.

DETAILED DESCRIPTION

Referring to FIG.1, FIG.1 shows a schematic block diagram of an apparatus 100 for controlling the display of a first video sequence according to an embodiment of the present invention. It should be understood that FIG.1 shows many optional units besides the two indispensable units: a first unit 101 and a second unit 102. In FIG.1 and FIG.3 (FIG.3 will be described below), the solid-line boxes represent indispensable units and the dashed-line boxes represent optional units.

FIG.2 is a flowchart showing the function of the apparatus 100 of FIG.1, which will be described in detail below. It is to be understood that FIG.2 also shows many optional steps besides the two steps S202 and S206 that are indispensable for the apparatus 100. Similarly, in FIG.2, the solid-line boxes represent indispensable steps and the dashed-line boxes represent optional steps.

Referring to FIG.1 and FIG.2, the device 110 displays a second video sequence. The second video sequence may be broadcast, multicast or unicast type data originating from a broadcast television station via various telecommunication channels such as cable, satellite or a global computer network. The device 110 may be a TV receiver, such as a traditional TV, or a computer. In the case that the second video sequence originates from a broadcast television station, the second video sequence refers to a TV program decoded from the multi-channel broadcast stream received by the device 110, i.e. the TV receiver.

In step S202, the first unit 101 detects the start of an event within the second video sequence. It is to be understood that the method of detecting the start and/or the end of the event depends on the type of event. What constitutes an event may vary with the preferences of different persons. Advantageously, the event refers to content that does not require much attention from the viewer, such as commercials, video teaching courses, or the closing songs of TV series. Without loss of generality, in the following, the event refers to a commercial (advertisement) by way of example.

There are many commercial detection methods. A simple approach to detecting a TV commercial is to check the scene change rate of the video: if the scene change rate is high and exceeds a threshold, the start of a commercial has been detected by the first unit 101. Detection of the scene change rate is similar to the detection of frame similarity disclosed in U.S. Patent 6,100,941. Alternatively, the first unit 101 may inspect the metadata of the second video sequence to judge whether a commercial has started; the second video sequence usually includes metadata characterizing the content of the programs and/or commercials.
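As an illustration only (the patent prescribes no specific algorithm), the scene-change-rate cue might be sketched as follows. The grayscale frame representation, the per-pixel difference measure and both thresholds are assumptions:

```python
import numpy as np

def scene_change_rate(frames, diff_threshold=30.0):
    """Fraction of consecutive frame pairs whose mean absolute pixel
    difference exceeds diff_threshold (a crude proxy for a scene cut)."""
    if len(frames) < 2:
        return 0.0
    cuts = sum(
        np.mean(np.abs(cur.astype(float) - prev.astype(float))) > diff_threshold
        for prev, cur in zip(frames, frames[1:])
    )
    return cuts / (len(frames) - 1)

def commercial_start_detected(frames, rate_threshold=0.5):
    # The first unit would flag a commercial start when the rate is high.
    return scene_change_rate(frames) > rate_threshold
```

A steady program segment yields a rate near 0, while rapidly cut commercial footage pushes the rate toward 1.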

Alternatively, the first unit 101 may analyse the text and/or audio parameters of the second video sequence. U.S. Patent Application Serial No. 09/370,931, filed on August 9, 1999, assigned to U.S. Philips Corporation, discloses a method and apparatus for detecting text locations in a video sequence.

For example, the text of a commercial is usually located in the centre of the image, so the first unit 101 can judge whether the start of a commercial has been detected from the text region of the image.

The audio parameters derived from the second video sequence may include energy, band energy ratio, pause rate, speech rate, Fourier transform coefficients, Mel spectrum frequency coefficients, etc. For example, the pause rate of a commercial is lower than that of normal programs, while the speech rate of a commercial is markedly faster. The first unit 101 may use a combination of various audio parameters to judge whether a commercial has started.

Alternatively, the first unit 101 may use any combination of metadata, audio parameters, text parameters and scene change rate of the second video sequence to judge whether the start of a commercial has been detected.
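Purely as a sketch of how such a combination might work (the patent specifies no fusion rule; the cue names, equal default weights and the 0.5 threshold are assumptions), each detector's output can be normalized to [0, 1] and merged into a single weighted score:

```python
def commercial_score(cues, weights=None):
    """Weighted average of normalized cue strengths in [0, 1].

    cues: e.g. {"scene_rate": 0.9, "speech_rate": 0.8, "metadata": 1.0}
    """
    if not cues:
        raise ValueError("at least one cue is required")
    weights = weights or {name: 1.0 for name in cues}  # equal weights by default
    total = sum(weights[name] for name in cues)
    return sum(weights[name] * value for name, value in cues.items()) / total

def commercial_detected(cues, threshold=0.5):
    return commercial_score(cues) >= threshold
```

Weighting lets a reliable cue (say, explicit metadata) dominate weaker statistical cues without discarding them.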

Returning to FIG.2: in step S206, when the start of an event has been detected by the first unit 101, the second unit 102 sends first data including the first video sequence to the device 110. Herein, the first data may simply be the first video sequence, or the first video sequence combined with other content, such as the second video sequence described later.

Then, in step S207, the device 110 displays the first video sequence together with the second video sequence. Advantageously, the first video sequence and the second video sequence are displayed in the form of picture-in-picture, in the form of two separate pictures, or in the form of two partially-overlapping pictures.

Advantageously, in an embodiment, the apparatus 100 further comprises a third unit 103 for combining the second video sequence and the first video sequence into a combined video sequence in step S205 when the start of the event has been detected. In such a situation, the first data sent by the second unit 102 in step S206 is the combined video sequence.
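One hypothetical way the third unit 103 could combine the two sequences frame by frame into a picture-in-picture layout (the frame format, scale factor and margin are assumptions, and nearest-neighbour downscaling is chosen only to keep the sketch dependency-light):

```python
import numpy as np

def compose_pip(main_frame, inset_frame, scale=4, margin=8):
    """Return main_frame with a downscaled inset_frame overlaid in the
    bottom-right corner (one form of picture-in-picture)."""
    out = main_frame.copy()
    small = inset_frame[::scale, ::scale]  # nearest-neighbour downscale
    h, w = small.shape[:2]
    H, W = out.shape[:2]
    out[H - margin - h:H - margin, W - margin - w:W - margin] = small
    return out
```

Applied to every frame, with the first video sequence as `main_frame` and the second video sequence as the inset (or vice versa), this would yield the combined video sequence of step S205.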

It is to be understood that the third unit 103 is only an optional unit, i.e. step S205 is an optional step. In an embodiment, the first video sequence and the second video sequence can be displayed together on the device 110 without being combined. This can be implemented in many ways; for example, the device 110 may have two screens and correspondingly two display controllers: one controls the first video sequence displayed on one screen, and the other controls the second video sequence displayed on the other screen.

The first video sequence may be stored in a memory 104. Alternatively, in step S201A, the receiver 105 may receive the first video sequence from an external memory, from the internet, or from a television broadcast stream. Alternatively, in step S201B, the first video sequence may be generated by application software stored in the memory 104. It is to be understood that step S201A or S201B may alternatively be performed after step S202 or S203; the function of step S203 will be elaborated later.

The content of the first video sequence can vary. For example, the first video sequence may be a music video, a series of web pages or a physical exercise instruction program. In particular, when the second video sequence originates from broadcast television, i.e. the device 110 of FIG.1 is a TV, the viewer often sits motionless in front of the TV for a long time and therefore needs physical exercise to stay healthy.

When the exercise instruction program, i.e. the first video sequence, is displayed on the TV, the whole screen could be switched to the exercise instruction program while the original TV content, i.e. the second video sequence, is downscaled and displayed in a corner of the screen, which is one form of picture-in-picture. When the commercials are over, the viewer can choose to return to displaying the second video sequence in full screen.

Alternatively, in the case that the first video sequence is an exercise instruction program, the apparatus 100 in FIG.1 may further comprise a camera 106 for capturing the movements of the person doing the exercises, i.e. the viewer, and an analyzer 107 for analyzing that person's postures and movements, evaluating the accomplishment of the movements and giving suggestions in real time, which are sent to the TV by the second unit 102 and then displayed on the TV. The block diagram of such an apparatus 300 is shown in FIG.3.

Specifically, when the exercise instruction program is displayed on the TV 110, the camera 106 captures the movements of the person doing the exercises, and the analyzer 107 then analyzes that person's postures and/or movements. The analyzer 107 compares these postures and/or movements with the standard postures and/or movements of the instruction program to measure the accomplishments. Three or more linked key points representing the body joints can be used to represent the postures and movements; alternatively, 2D or 3D models can be used. The postures and movements and the comparison result may be displayed on the screen of the TV. Additionally, suggestions for improvement may be given to help the person doing the exercises.
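A minimal sketch of the key-point comparison, assuming 2D joint coordinates and a pixel-distance tolerance (both assumptions; the patent prescribes no specific comparison metric, and the feedback strings are illustrative):

```python
import math

def pose_error(observed, reference):
    """Mean Euclidean distance between matching 2D joint key points."""
    if len(observed) != len(reference):
        raise ValueError("poses must have the same number of key points")
    return sum(math.dist(o, r) for o, r in zip(observed, reference)) / len(observed)

def movement_feedback(observed, reference, tolerance=10.0):
    # The analyzer 107 could turn the error into a real-time suggestion.
    if pose_error(observed, reference) <= tolerance:
        return "well done"
    return "adjust your posture"
```

A real analyzer would track key points over time and score whole movements, but the per-frame distance above captures the core comparison the text describes.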

Advantageously, the apparatus 100 or 300 further comprises a first generator (not shown in FIG.1 or FIG.3). Referring again to FIG.2, the first generator generates second data in step S203 when the start of the event has been detected by the first unit 101 in step S202. Then, in step S204, the second data is sent to the device 110 by the second unit 102. The second data is used for generating a first user interface on the device 110. The first user interface allows a user input to select at least one of the following items: 1) whether the first video sequence may be displayed; 2) in which form the first video sequence is to be displayed; 3) when to stop the display of the first video sequence.

If the user input indicates not to display the first video sequence, the device 110 continues displaying the second video sequence. If the user input indicates to display the first video sequence, the first video sequence will be displayed on the device 110 together with the second video sequence in the form indicated by the user input. If the user input does not indicate in which form to display the two sequences together, the form is determined by a default configuration, which may be any one of picture-in-picture, two separate pictures, or two partially-overlapping pictures. The user can also set the time at which the display of the first video sequence should stop, for example after 5 minutes.

FIG.4 shows the flowchart of another optional function of the apparatus 100 in FIG.1 or the apparatus 300 in FIG.3. In step S401, the first unit 101 detects the end of the event within the second video sequence; then, in step S404, the second unit 102 stops sending the first data including the first video sequence, such as the aforesaid combined video sequence, to the device 110 when the end of the event has been detected. The device 110 reverts to displaying the second video sequence only. In this situation, the user need not set the time at which the display of the first video sequence should stop.

In the case that the event is a commercial, similarly to the detection of its start, many methods can be applied to detect its end. A simple approach to detecting the end of a TV commercial is to check the scene change rate of the video: if the scene change rate is low and falls below a threshold, the first unit 101 judges that the end of a commercial has been detected. Alternatively, the first unit 101 may inspect the metadata of the second video sequence to judge whether a commercial has ended, or may analyse the text and/or audio parameters of the second video sequence. The first unit 101 may also use any combination of metadata, audio parameters, text parameters and scene change rate to judge whether the end of a commercial has been detected.
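Taken together, steps S202/S206 and S401/S404 amount to a small state machine: start events switch the first-data stream on, end events switch it off. A sketch under the assumption that the first unit emits simple 'start'/'end' labels:

```python
def display_controller(events):
    """Yield, after each event, whether first data is being sent to the
    device: 'start' switches sending on (S206), 'end' switches it off (S404)."""
    sending = False
    for event in events:
        if event == "start":
            sending = True
        elif event == "end":
            sending = False
        yield sending
```

Any other event label leaves the state unchanged, so the device keeps displaying whatever it was displaying between detections.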

Advantageously, the apparatus 100 or 300 further comprises a second generator (not shown in FIG.1 or FIG.3). Referring again to FIG.4, after the end of the event has been detected by the first unit 101 in step S401, the second generator generates third data in step S402. In step S403, the third data is sent to the device 110 by the second unit 102. The third data is used for generating a second user interface on the device 110, via which the user can choose whether to continue or stop the display of the first video sequence. If the user indicates via the second user interface to stop the display, the second unit 102 will stop sending the first data including the first video sequence to the device 110, and the device 110 reverts to displaying the second video sequence only. If the user indicates to continue, the second unit 102 will keep sending the first data to the device 110.

When the apparatus 100 or 300 described above and the device 110 are taken as a whole, they form a system 500 for displaying a first video sequence, as shown in FIG.5.

Firstly, the device 110 displays a second video sequence. The apparatus 100 or 300 detects the start of an event within the second video sequence. The device 110 displays the first video sequence together with the second video sequence when the start of the event has been detected. Advantageously, the first video sequence and the second video sequence are displayed in the form of picture-in-picture, in the form of two separate pictures, or in the form of two partially-overlapping pictures.

Advantageously, the apparatus 100 or 300 combines the second video sequence and the first video sequence into a combined video sequence when the start of the event has been detected. Advantageously, the device 110 then displays the combined video sequence in the form of picture-in-picture, or in the form of two separate pictures, or in the form of two partially-overlapping pictures.

The functions of the apparatus 100 or 300 and the system 500 are fully described in the above paragraphs. It should be understood that the apparatus 100 or 300 and the system 500 are described in terms of their functions only. The apparatus 100 or 300, or each unit therein, such as the first unit 101 and/or the second unit 102 and/or the third unit 103, can be implemented by software, hardware or a combination of software and hardware. For example, they could be implemented by a processor linked to a memory storing the instruction code implementing the functions of the first unit 101 and/or the second unit 102 and/or the third unit 103.

In addition, the first generator and the second generator can be one and the same generator, which can also be implemented by software or hardware or a combination of software and hardware.

The apparatus 100 or 300 may be separate from the device 110. Alternatively, it may be integrated into the device 110. For example, the device 110 is a TV including additionally the function of the apparatus 100 or 300.

It should be noted that the above-described embodiments are for the purpose of illustration only and are not to be construed as limiting the invention. All such modifications which do not depart from the spirit of the invention fall within the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim or in the description. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. In the apparatus claims enumerating several units, several of these units can be embodied by one and the same item of hardware or software. The usage of the words first, second and third, et cetera, does not indicate any ordering. These words are to be interpreted as names.

Claims

What is claimed is:
1. A method of displaying a first video sequence on a device (110), wherein a second video sequence is displayed on said device (110), said method comprising the steps of:
- detecting (S202) the start of an event within said second video sequence;
- displaying (S207) said first video sequence on said device together with said second video sequence when the start of said event has been detected.
2. A method according to claim 1, wherein prior to said step of displaying (S207), said method further comprises the step of:
- combining (S205) said second video sequence and said first video sequence into a combined video sequence when the start of said event has been detected.
3. A method of controlling the display of a first video sequence on a device (110), wherein a second video sequence is displayed on said device (110), said method comprising the steps of:
- detecting (S202) the start of an event within said second video sequence;
- sending (S206), to said device (110), first data including said first video sequence when the start of said event has been detected.
4. A method according to claim 3, wherein prior to said step of sending (S206), said method further comprises the step of:
- combining (S205) said first video sequence and second video sequence into a combined video sequence when the start of said event has been detected, wherein said first data is the combined video sequence.
5. A method according to claim 4, wherein said combined video sequence can be displayed in the form of picture-in-picture, or in the form of two separate pictures, or in the form of two partially-overlapping pictures.
6. A method according to claim 3, further comprising one of the following steps:
- receiving (S201A) said first video sequence from a memory, or from the internet, or from a Television Broadcast stream;
- generating (S201B) said first video sequence by means of application software.
7. A method according to claim 3, wherein prior to said step of sending (S206), said method further comprises:
- generating (S203) second data for generating a first user interface on the device (110) when the start of said event has been detected;
- sending (S204) said second data to said device (110), wherein said first user interface allows a user input to select at least one of the following items: whether said first video sequence can be displayed; in which form to display the first video sequence; when to stop the display of the first video sequence.
8. A method according to claim 3, further comprising the steps of:
- detecting (S401) the end of said event within said second video sequence;
- stopping (S404) the sending of said first data to said device (110) when the end of said event has been detected.
9. A method according to claim 3, further comprising the steps of:
- detecting (S401) the end of said event within said second video sequence;
- generating (S402) third data for generating a second user interface on the device (110), said second user interface allowing a user to select whether to continue or stop the display of said first video sequence when the end of said event has been detected;
- sending (S403) said third data to said device (110).
10. An apparatus (100) for controlling the display of a first video sequence, wherein a second video sequence is displayed on a device (110), said apparatus (100) comprising: a first unit (101) for detecting the start of an event within said second video sequence; a second unit (102) for sending first data including said first video sequence to said device (110) when the start of said event has been detected.
11. An apparatus (100) according to claim 10, said apparatus further comprising: a third unit (103) for combining said first video sequence and second video sequence into a combined video sequence when the start of said event has been detected, wherein said first data sent by said second unit (102) is the combined video sequence.
12. An apparatus (100) according to claim 11, wherein said combined video sequence can be displayed in the form of picture-in-picture, or in the form of two separate pictures, or in the form of two partially-overlapping pictures.
13. An apparatus (100) according to claim 10, further comprising one of the following: a memory (104) for storing said first video sequence or for storing application software for generating said first video sequence; a receiver (105) for receiving said first video sequence from an external memory, or from the internet, or from a Television Broadcast stream.
14. A system (500) for displaying a first video sequence, comprising: a device (110) for displaying a second video sequence; an apparatus (100) as claimed in any one of claims 10 to 13.
PCT/IB2010/051717 2009-04-23 2010-04-20 Displaying video sequences WO2010122489A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN200910139216 2009-04-23
CN200910139216.5 2009-04-23

Publications (1)

Publication Number Publication Date
WO2010122489A1 (en) 2010-10-28

Family

Family ID: 42199410

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2010/051717 WO2010122489A1 (en) 2009-04-23 2010-04-20 Displaying video sequences

Country Status (1)

Country Link
WO (1) WO2010122489A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070115391A1 (en) * 2005-11-22 2007-05-24 Gateway Inc. Automatic launch of picture-in-picture during commercials
US20080297669A1 (en) * 2007-05-31 2008-12-04 Zalewski Gary M System and method for Taking Control of a System During a Commercial Break



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10717823

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct app. not ent. europ. phase

Ref document number: 10717823

Country of ref document: EP

Kind code of ref document: A1