WO2015099407A1 - Client device and method for displaying contents in cloud environment - Google Patents

Client device and method for displaying contents in cloud environment

Info

Publication number
WO2015099407A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
object
pixels
video
displaying
Prior art date
Application number
PCT/KR2014/012720
Other languages
French (fr)
Inventor
Qu Ho Hwang
Original Assignee
Alticast Corporation
Priority date
Filing date
Publication date
Priority to KR1020130162254A priority Critical patent/KR20150074455A/en
Priority to KR10-2013-0162254 priority
Application filed by Alticast Corporation filed Critical Alticast Corporation
Publication of WO2015099407A1 publication Critical patent/WO2015099407A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network, synchronizing decoder's clock; Client middleware
    • H04N21/4302Content synchronization processes, e.g. decoder synchronization
    • H04N21/4307Synchronizing display of multiple content streams, e.g. synchronisation of audio and video output or enabling or disabling interactive icons for a given period of time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4858End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/02Improving the quality of display appearance
    • G09G2320/0261Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0492Change of orientation of the displayed image, e.g. upside-down, mirrored
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/14Solving problems related to the presentation of information to be displayed
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2356/00Detection of the display position w.r.t. other display screens
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/04Exchange of auxiliary data, i.e. other than image data, between monitor and graphics controller

Abstract

Provided is a client device for displaying content in a cloud environment, the client device including a display configured to display video data on pixels of a predetermined area; an attribute information creator configured to create video attribute information including photographing data according to a camera direction and a photographing location with respect to the video data to be displayed on the display; a display driver configured to set sequential frames to be displayed along a temporal flow according to a motion of an object with respect to the video data based on the video attribute information, to make the set sequential frames correspond to the pixels, respectively, along a camera movement direction or an object movement direction in a background space, and to sequentially display the set sequential frames; and a video storage configured to store at least one of the video data and the video attribute information.

Description

CLIENT DEVICE AND METHOD FOR DISPLAYING CONTENTS IN CLOUD ENVIRONMENT

The present invention relates to a client device and method for displaying content in a cloud environment, and more particularly, to a client device and method for displaying content that may change a display location of a video according to a movement of an object within the video.

With the development of information technology, technologies for providing content in a cloud environment using a variety of client devices have recently emerged. Efforts are being made to apply multi-screen technology, which enables media consumption on various devices, and to provide ultra high definition (UHD) video beyond HD video.

In general, compared to a typical television (TV) or monitor, a client device having a large display may have a much larger screen in terms of screen size, but a similar resolution in terms of the video displayable on the screen.

In the related art, the resolution of a client device having a large display, and thus the resolution of the video, may be increased by decreasing the size of each luminous pixel and arranging a relatively large number of luminous pixels on a display screen of the same size.

Alternatively, the resolution may be increased by configuring a plurality of low resolution displays in a form of a matrix.

However, when a plurality of displays is configured in the form of a matrix, a video output card is required that divides a single video output over a plurality of channels in order to match the syncs of the multiple videos to be displayed on the respective displays. In addition, when the computer equipped with the video output card is not mounted adjacent to the multi-screen, it is difficult to accurately match the syncs of the multiple videos.

Also, the client device having the large display may only divide and output a single video output using a plurality of pixels or a plurality of channels on a large display screen. Accordingly, it is difficult to configure a display screen that enables a user to perceive a three-dimensional (3D) effect or animation.

Also, although a video may be divided over a plurality of pixels or a plurality of channels on a large screen, the client device having the large display can only present it as a single video with a rectangular aspect ratio, which greatly limits the display forms available to a user.

An aspect of the present invention provides a device and method for displaying content in a cloud environment that may configure a display screen that vividly represents the story of the content, by controlling a video display location to be moved according to a motion of an object within video content distributed in a cloud environment, or by controlling two objects within the video to face each other at a predetermined angle when the objects face each other.

An embodiment of the present invention provides a client device for displaying content in a cloud environment, the client device including a display configured to display video data on pixels of a predetermined area; an attribute information creator configured to create video attribute information including photographing data according to a camera direction and a photographing location with respect to the video data to be displayed on the display; a display driver configured to set sequential frames to be displayed along a temporal flow according to a motion of an object with respect to the video data based on the video attribute information, to make the set sequential frames correspond to the pixels, respectively, along a camera movement direction or an object movement direction in a background space, and to sequentially display the set sequential frames; and a video storage configured to store at least one of the video data and the video attribute information.

Here, the display driver may be further configured to set sequential frames F1, F2, F3, ... , Fn-1, and Fn to be displayed along the temporal flow according to the motion of the object with respect to the video data, and to make the set sequential frames F1, F2, F3, ... , Fn-1, and Fn correspond to pixels S1, S2, S3, ... , Sn-1, and Sn of the predetermined area, respectively, along the camera movement direction or the object movement direction in the background space. The display driver may be further configured to display the sequential frame F1 on the pixel S1 in a time t1, to display the sequential frame F2 on the pixel S2 in a time t2, to display the sequential frame F3 on the pixel S3 in a time t3, and continuously using the same method, to display the sequential frame Fn-1 on the pixel Sn-1 in a time tn-1 and to display the sequential frame Fn on the pixel Sn in a time tn.

Also, the display driver may be further configured to set the pixels S1 and S2, and continuously the pixels Sn-1 and Sn to have a partially overlapping area, and to display the sequential frames F1 and F2 to have a partially overlapping area on the pixels S1 and S2 and continuously, to display the sequential frames Fn-1 and Fn to have a partially overlapping area on the pixels Sn-1 and Sn.

The attribute information creator may be configured to create the video attribute information including at least one of story data according to the motion of the object and caption data according to a story development, with respect to the video data.

Also, the attribute information creator may be configured to create the video attribute information at a time of creating the video data and insert the created video attribute information into a predetermined interval of the video data, or to receive information associated with the video data and create the video attribute information, and to separately store the created video attribute information in the video storage.

Another embodiment of the present invention also provides a client device for displaying content in a cloud environment, the client device including a display configured to display video data on pixels of a predetermined area; a tilt driver configured to drive a tilting in response to a predetermined tilting control signal with respect to the pixels of the predetermined area; an attribute information creator configured to create video attribute information including at least one of story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to the video data to be displayed on the display; a display driver configured to, based on the video attribute information, set pixels for displaying each of mutually conversing objects when the different objects converse with each other or to set pixels for displaying each of facing objects when the different objects face each other, and to alternately display a speaking object on the set pixels or to transfer, to a tilt driver, a tilting control signal for controlling the pixels for displaying the facing objects to face each other at a predetermined angle; and a video storage configured to store at least one of the video data and the video attribute information.

In this instance, the display driver may be further configured to set pixels for displaying an object on one side and pixels for displaying an object on another side based on a horizontal symmetry line, a vertical symmetry line, or a diagonal symmetry line, with respect to two objects that converse with each other according to the story data or with respect to a plurality of objects in a multilateral conversation, based on the video data.

Also, the display driver may be configured to display pixels set on the left of the display when an object on the left based on the vertical symmetry line speaks according to the story data based on the video data, to display pixels set on the right of the display when an object on the right based on the vertical symmetry line speaks, and to simultaneously display the pixels set on the left and the pixels set on the right when the object on the left and the object on the right simultaneously speak.

The display driver may be further configured to display pixels set on an upper side of the display when an object on an upper side based on the horizontal symmetry line speaks according to the story data based on the video data, to display pixels on a lower side of the display when an object on a lower side based on the horizontal symmetry line speaks, and to simultaneously display the pixels set on the upper side and the pixels set on the lower side when the object on the upper side and the object on the lower side simultaneously speak.

Also, the display driver may be further configured to set displays for displaying an object on the left and displays for displaying an object on the right based on the vertical symmetry line with respect to two objects that converse with each other according to the story data based on the video data. The display driver may be further configured to transfer, to the tilt driver, a tilting control signal for controlling the displays for displaying the object on the left to be tilted toward a right direction at a predetermined angle and a tilting control signal for controlling the displays for displaying the object on the right to be tilted toward a left direction at the predetermined angle.

The display driver may be further configured to set displays for displaying an object on an upper side and displays for displaying an object on a lower side with respect to different two objects that face each other in a vertical direction at a predetermined angle, based on the horizontal symmetry line according to the story data based on the video data. The display driver may be further configured to transfer, to the tilt driver, a tilting control signal for controlling the displays for displaying the object on the upper side to be tilted toward a downward direction at a predetermined angle and a tilting control signal for controlling the displays for displaying the object on the lower side to be tilted toward an upward direction at the predetermined angle.

Still another embodiment of the present invention also provides a method of displaying content on a client device in a cloud environment, the method including (a) creating video attribute information including at least one of story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to video data to be displayed on pixels of a predetermined area through the client device; (b) setting sequential frames to be displayed along a temporal flow according to a motion of an object with respect to the video data, based on the created video attribute information through the client device; (c) making the set sequential frames correspond to the pixels, respectively, along a camera movement direction or an object movement direction in a background space, through the client device; and (d) sequentially displaying the set sequential frames on the corresponding pixels along the temporal flow, through the client device.

Still another embodiment of the present invention also provides a method of displaying content on a client device in a cloud environment, the method including (a) creating video attribute information including at least one of story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to video data to be displayed on pixels of a predetermined area through the client device; (b) setting pixels for displaying each object when different two objects converse with each other or when a plurality of objects converse with one another according to the story data based on the video attribute information, through the client device; and (c) displaying set pixels corresponding to a speaking object through the client device.

According to embodiments of the present invention, it is possible to configure a display screen that vividly represents the story of content by controlling a video display location to be moved according to a motion of an object within a video, or by controlling two objects within the video to face each other at a predetermined angle when the objects face each other.

FIG. 1 is a block diagram illustrating a configuration of a content display system in a cloud environment according to an embodiment of the present invention.

FIG. 2 is a block diagram illustrating a configuration of a client device for displaying content in a cloud environment according to an embodiment of the present invention.

FIG. 3 is a block diagram illustrating a configuration of a client device for displaying content in a cloud environment according to another embodiment of the present invention.

FIG. 4 is a flowchart illustrating a method of displaying content on a client device in a cloud environment according to an embodiment of the present invention.

FIG. 5 is a flowchart illustrating a method of displaying content on a client device in a cloud environment according to another embodiment of the present invention.

FIG. 6 illustrates an example of making set sequential frames correspond to at least one pixel, respectively, according to an embodiment of the present invention.

FIG. 7 illustrates an example of sequentially displaying each of set sequential frames on a corresponding pixel at a corresponding time according to an embodiment of the present invention.

FIG. 8 illustrates an example of a connection configuration of a tilt driver when a single pixel is a television (TV) screen or a liquid crystal display (LCD) screen according to an embodiment of the present invention.

FIG. 9 illustrates an example of setting pixels to be tilted in a horizontal direction according to an embodiment of the present invention.

FIG. 10 illustrates an example of alternately displaying corresponding pixels when different objects on the left and the right converse with each other according to an embodiment of the present invention.

FIG. 11 illustrates an example of alternately displaying corresponding pixels when different objects on an upper side and a lower side converse with each other according to an embodiment of the present invention.

FIG. 12 illustrates an example of tilting a screen so that different objects face each other based on a symmetry line according to an embodiment of the present invention.

FIG. 13 illustrates an example of tilting screen devices for displaying facing objects to face each other when displays are configured as the screen devices, respectively, according to an embodiment of the present invention.

Various alterations and modifications may be made to the present invention and various embodiments may also be applied to the present invention and thus, the present invention will be described with reference to the accompanying drawings. However, the present invention is not provided to be limiting thereof and should be understood to include all the changes, equivalents, and replacements included in the technical spirit and technical scope of the invention.

Hereinafter, a video display method and device according to embodiments of the present invention will be described with reference to the accompanying drawings. Like reference numerals refer to like elements throughout although they are illustrated in different drawings. Also, a repeated description related thereto is omitted here.

FIG. 1 is a block diagram illustrating a configuration of a content display system in a cloud environment according to an embodiment of the present invention.

Referring to FIG. 1, the content display system in the cloud environment according to an embodiment of the present invention includes a video management server 300 and a client device 100 connected thereto over a network.

The video management server 300 collects and stores contents provided from a broadcasting provider or a user.

Content provided from the broadcasting provider may include a live program provided in real time or a program provided on demand.

Also, content according to an embodiment of the present invention may include video data.

The video management server 300 provides content such as video data to the client device 100 over the network, in response to a request of the client device 100.

The client device 100 may be a display device that creates video attribute information from video data, sets sequential frames along a temporal flow according to a motion of an object with respect to the video data based on the created video attribute information, makes the set sequential frames correspond to at least one pixel, respectively, along a camera movement direction or an object movement direction in a background space, and sequentially displays the set sequential frames on the at least one pixel along the temporal flow.

Depending on embodiments, the client device 100 may adjust a video frame so that video data to be displayed is tilted on each screen in response to a tilting control signal. The client device 100 may set sequential frames to be displayed along a temporal flow according to a motion of an object with respect to video data based on video attribute information, may make the set sequential frames correspond to at least one pixel, respectively, along a camera movement direction or an object movement direction in a background space, and may thereby sequentially display the set sequential frames on each corresponding pixel along the temporal flow. The client device 100 may also set pixels for displaying each of facing objects when different objects face each other based on story data according to a motion of each object, and may create a tilting control signal for controlling a screen so that the set pixels face each other at a predetermined angle.

The client device 100 includes a general broadcast receiving device, such as a set-top box or a TV, and also includes any type of device capable of making sequential frames correspond to at least one pixel and thereby displaying the sequential frames.
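
The disclosure does not specify the protocol between the client device 100 and the video management server 300. The following minimal sketch, in Python, assumes a hypothetical HTTP interface and a hypothetical JSON format for the attribute information, and only illustrates the request-and-receive step described above:

    import requests  # third-party HTTP client

    SERVER_URL = "http://video-management-server.example"  # hypothetical server address

    def fetch_content(content_id: str):
        """Request video data and its attribute information from the server.

        The endpoint paths below are assumptions for illustration only;
        the disclosure does not define a concrete network interface.
        """
        video = requests.get(f"{SERVER_URL}/contents/{content_id}/video", timeout=10)
        video.raise_for_status()
        attrs = requests.get(f"{SERVER_URL}/contents/{content_id}/attributes", timeout=10)
        attrs.raise_for_status()
        return video.content, attrs.json()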

FIG. 2 is a block diagram illustrating a configuration of a client device for displaying content in a cloud environment according to an embodiment of the present invention.

Referring to FIG. 2, the client device 100 according to an embodiment of the present invention includes a display 110, an attribute information creator 120, a display driver 130, and a video storage 140.

In the display 110, a plurality of pixels of a predetermined area for displaying content such as video data is configured as a single display screen. The display 110 displays the video data on the pixels of the predetermined area.

For example, the display 110 may be configured with at least one pixel area of a predetermined size recognizable by a user or at least one liquid crystal display (LCD) screen of a predetermined size, or may be a single display screen configured by gathering at least a predetermined number of TV receivers or LCD monitors.

The attribute information creator 120 creates video attribute information including story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to video data to be displayed on the display 110.

According to an aspect of the present invention, at a time of creating video data, the attribute information creator 120 may create video attribute information including story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, and may insert the created video attribute information into a predetermined interval of the video data.

Also, the attribute information creator 120 may receive data or information from a user, separate from video data, and may create video attribute information including story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to the video data.

In this case, the video storage 140 may store the received data or information as separate video attribute information.

When simultaneously displaying video data and video attribute information on a screen, the client device 100 according to an embodiment of the present invention may synchronize the video attribute information to be suitable for each frame of the video data and thereby display the video data and the video attribute information.
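
The disclosure leaves the concrete format of the video attribute information open. The sketch below, assuming Python dataclasses and illustrative field names not taken from the disclosure, models story data, caption data, and photographing data per frame interval, and shows how such records could be looked up to synchronize them with each frame of the video data:

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class PhotographingData:
        camera_direction: float          # camera heading in degrees (illustrative unit)
        location: Tuple[float, float]    # photographing location, e.g. (x, y) coordinates

    @dataclass
    class VideoAttributeInfo:
        start_frame: int                          # first frame of the interval described
        end_frame: int                            # last frame of the interval described
        story_data: Optional[str] = None          # motion/story description of the objects
        caption_data: Optional[str] = None        # caption text for this story interval
        photographing: Optional[PhotographingData] = None

    def attributes_for_frame(infos: List[VideoAttributeInfo], frame_no: int) -> List[VideoAttributeInfo]:
        """Return the attribute records synchronized with a given video frame."""
        return [info for info in infos if info.start_frame <= frame_no <= info.end_frame]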

The display driver 130 sets sequential frames along a temporal flow according to a motion of an object with respect to video data based on video attribute information, makes the set sequential frames correspond to at least one pixel, respectively, along a camera movement direction or an object movement direction in a background space, and may sequentially display the set sequential frames on each of the at least one pixel along the temporal flow.

Also, the display driver 130 sets sequential frames F1, F2, F3, ... , Fn-1, and Fn to be displayed along the temporal flow according to the motion of the object with respect to the video data, makes the set sequential frames F1, F2, F3, ... , Fn-1, and Fn correspond to pixels S1, S2, S3, ... , Sn-1, and Sn of the predetermined area, respectively, along the camera movement direction or the object movement direction in the background space, and displays the sequential frame F1 on the pixel S1 in a time t1, displays the sequential frame F2 on the pixel S2 in a time t2, displays the sequential frame F3 on the pixel S3 in a time t3, and continuously using the same method, displays the sequential frame Fn-1 on the pixel Sn-1 in a time tn-1 and displays the sequential frame Fn on the pixel Sn in a time tn.

Also, when setting the pixels S1, S2, and S3, and continuously the pixels Sn-1 and Sn, the display driver 130 may set the pixels to have a partially overlapping area.

In this case, the display driver 130 controls the sequential frames F1 and F2 and continuously the sequential frames Fn-1 and Fn to have partially overlapping areas and thereby be displayed on the pixels S1 and S2 and continuously the pixels Sn-1 and Sn, respectively.
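
As a non-authoritative sketch of the behavior just described, the following Python routine shows frame Fi on pixel area Si at time ti in turn; the rendering callable and the representation of a pixel area are assumptions, since the disclosure does not fix a display interface:

    import time

    def display_sequential_frames(frames, pixel_areas, frame_interval, draw):
        """Display frame Fi on pixel area Si at time ti.

        frames         -- sequential frames F1..Fn selected according to the object's motion
        pixel_areas    -- pixel areas S1..Sn laid out along the camera movement direction
                          or the object movement direction in the background space;
                          adjacent areas may be defined so that they partially overlap
        frame_interval -- seconds between time ti and time ti+1
        draw           -- callable(frame, area) performing the actual rendering
                          (display-hardware specific, assumed here)
        """
        for frame, area in zip(frames, pixel_areas):
            draw(frame, area)           # show Fi on Si at ti
            time.sleep(frame_interval)  # advance to ti+1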

The video storage 140 stores video data to be displayed on at least one pixel or stores video attribute information about the video data.

FIG. 3 is a block diagram illustrating a configuration of a client device for displaying content in a cloud environment according to another embodiment of the present invention.

Referring to FIG. 3, a client device 200 according to another embodiment of the present invention includes the display 110, the attribute information creator 120, a tilt driver 210, a display driver 220, and the video storage 140.

As described above with FIG. 2, the display 110 configures a single large screen using a plurality of pixels of a predetermined area for displaying video data. For example, the display 110 may be an ultra high definition (UHD) TV or a motion tracking head mounted display (HMD).

As described above with FIG. 2, the attribute information creator 120 creates video attribute information including story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to video data to be displayed on the display 110.

The tilt driver 210 adjusts a video frame so that video data to be displayed on the display 110 may be tilted on each screen in response to a tilting control signal of the display driver 220.

For example, the tilt driver 210 may be installed on the side of a plurality of pixels or a plurality of screens, or may be installed at the rear of the plurality of pixels or the plurality of screens.

FIG. 8 illustrates an example of a connection configuration of a tilt driver when a single pixel is a TV screen or an LCD screen. Referring to FIG. 8, when a single pixel is configured as a screen device such as a TV screen or an LCD screen, the tilt driver 210 may be configured to be connected to each of at least one screen device.

The display driver 220 sets sequential frames to be displayed along a temporal flow according to a motion of an object with respect to video data based on video attribute information, makes the set sequential frames correspond to at least one pixel, respectively, along a camera movement direction or an object movement direction in a background space, and sequentially displays the set sequential frames on each corresponding pixel along the temporal flow. When different objects face each other based on story data according to a motion of each object, the display driver 220 sets pixels for displaying each of the facing objects and transfers, to the tilt driver 210, a tilting control signal for controlling a screen so that the set pixels face each other at a predetermined angle.

The video storage 140 stores video data to be displayed on the display 110 or stores video attribute information about the video data.

According to an aspect of the present invention, the display driver 220 may set displays in a continuous direction for displaying an object on one side based on a symmetry line and displays in a continuous direction for displaying an object on another side based on the symmetry line, with respect to different two objects when two facing objects converse with each other in a predetermined direction based on a horizontal symmetry line, a vertical symmetry line, or a diagonal symmetry line, according to story data, or with respect to a plurality of objects when a plurality of objects converses with one another.

Also, according to another aspect of the present invention, the display driver 220 may set displays in a continuous vertical direction for displaying an object on the left based on a vertical symmetry line and displays in a continuous vertical direction for displaying an object on the right based on the vertical symmetry line, with respect to different two objects that face each other in a horizontal direction at a predetermined angle based on the vertical symmetry line according to story data.

Also, according to still another aspect of the present invention, the display driver 220 may transfer, to the tilt driver 210, a tilting control signal for controlling displays in a continuous vertical direction for displaying an object on the left based on a vertical symmetry line to be tilted toward a right direction at a predetermined angle and a tilting control signal for controlling displays in a continuous vertical direction for displaying an object on the right based on the vertical symmetry line to be tilted toward a left direction at the predetermined angle, so that the different two objects face each other in a horizontal direction at the predetermined angle based on the vertical symmetry line according to story data.

Also, according to still another aspect of the present invention, the display driver 220 may set displays in a continuous horizontal direction for displaying an object on an upper side based on a horizontal symmetry line and displays in a continuous horizontal direction for displaying an object on a lower side based on the horizontal symmetry line, with respect to the different two objects that face each other in a vertical direction at a predetermined angle based on the horizontal symmetry line according to story data.

Depending on embodiments, the display driver 220 may also transfer, to the tilt driver 210, a tilting control signal for controlling displays in a continuous horizontal direction for displaying an object on an upper side based on a horizontal symmetry line to be tilted toward a downward direction at a predetermined angle and a tilting control signal for controlling displays in a continuous horizontal direction for displaying an object on a lower side based on the horizontal symmetry line to be tilted toward an upward direction at the predetermined angle, so that the different two objects face each other in a vertical direction at the predetermined angle based on the horizontal symmetry line according to story data.
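
A hedged sketch of how the display driver 220 could encode such tilting control signals for the tilt driver 210 is given below; the TiltCommand structure, the default angle, and the sign convention are assumptions for illustration, as the disclosure only states that facing displays are tilted toward each other at a predetermined angle:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TiltCommand:
        display_ids: List[int]   # displays (or pixel groups) the command applies to
        axis: str                # "vertical" or "horizontal" symmetry line
        angle: float             # predetermined tilt angle in degrees; sign gives direction

    def tilt_commands_for_facing_objects(left_displays: List[int],
                                         right_displays: List[int],
                                         angle: float = 15.0) -> List[TiltCommand]:
        """Tilt the left-hand displays toward the right and the right-hand displays
        toward the left, so that two objects facing each other across a vertical
        symmetry line appear to face each other at the given angle.
        The default angle is an arbitrary illustrative value."""
        return [
            TiltCommand(left_displays, "vertical", +angle),   # left side turns to the right
            TiltCommand(right_displays, "vertical", -angle),  # right side turns to the left
        ]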

FIG. 4 is a flowchart illustrating a method of displaying content on a client device in a cloud environment according to an embodiment of the present invention.

Referring to FIG. 4, in operation S310, the attribute information creator 120 of the client device 100 of FIG. 2 creates video attribute information including story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to video data to be displayed on the display 110.

In this case, operation S310 may include an operation of storing, using the attribute information creator 120, video attribute information created with respect to the video data in the video storage 140.

Also, operation S310 may include an operation of creating, using the attribute information creator 120 at a time of creating video data, video attribute information including story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, and inserting the created video attribute information into a predetermined interval of the video data.

Also, operation S310 may include an operation of receiving, using the attribute information creator 120, data or information from a user, separate from video data, creating video attribute information including story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to the video data, and storing the received data or information in the video storage 140 as separate video attribute information.

In operation S320, the display driver 130 sets sequential frames to be displayed along a temporal flow according to a motion of an object with respect to the video data based on the video attribute information, which is illustrated in FIG. 6.

That is, in operation S320, the display driver 130 may set the sequential frames F1, F2, F3, ... , Fn-1, and Fn to be displayed along the temporal flow according to a motion of an object with respect to the video data.

In operation S330, the display driver 130 makes the set sequential frames correspond to at least one pixel, for example, pixels S1, S2, S3, ... , Sn-1, and Sn, respectively, for displaying the set sequential frames along a camera movement direction or an object movement direction in a background space.

That is, FIG. 6 illustrates an example of making, using the display driver 130, the set sequential frames correspond to at least one pixel, respectively. Referring to FIG. 6, in operation S330, the display driver 130 may make the set sequential frames F1, F2, F3, ... , Fn-1, and Fn correspond to at least one pixel, for example, the pixels S1, S2, S3, ... , Sn-1, and Sn, respectively, along the camera movement direction or the object movement direction in the background space.

In operation S340, the display driver 130 sequentially displays the set sequential frames on each corresponding pixel along the temporal flow.

That is, as illustrated in FIG. 7, the display driver 130 displays the sequential frame F1 on the pixel S1 in the time t1, displays the sequential frame F2 on the pixel S2 in the time t2, displays the sequential frame F3 on the pixel S3 in the time t3, and continuously using the same method, displays the sequential frame Fn-1 on the pixel Sn-1 in the time tn-1 and displays the sequential frame Fn on the pixel Sn in the time tn.

FIG. 7 illustrates an example of sequentially displaying each of set sequential frames on a corresponding pixel at a corresponding time according to an embodiment of the present invention.

FIG. 7 illustrates an example in which, when displaying the sequential frame F1 on the pixel S1 in a time t1, displaying the sequential frame F2 on the pixel S2 in the time t2, and continuously using the same method, displaying the sequential frame Fn-1 on the pixel Sn-1 in the time tn-1 and displaying the sequential frame Fn on the pixel Sn in the time tn, the client device displays the sequential frames F1 and F2, and continuously the sequential frames Fn-1 and Fn, on the respective pixels so that they do not overlap. However, this is only an example; when setting the pixels S1, S2, and S3, and continuously the pixels Sn-1 and Sn, the pixels may be set to have partially overlapping areas.

In this case, the client device may display the sequential frames F1 and F2 and continuously the sequential frames Fn-1 and Fn to have partially overlapping areas on the pixels S1 and S2 and continuously the pixels Sn-1 and Sn, respectively.

FIG. 5 is a flowchart illustrating a method of displaying content on a client device in a cloud environment according to another embodiment of the present invention.

Referring to FIG. 5, in operation S410, the attribute information creator 120 of the client device 200 according to another embodiment of the present invention creates video attribute information including story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to video data to be displayed on the display 110.

When it is determined that different objects converse with each other according to the story data based on the video attribute information in operation S420, the display driver 220 sets pixels for displaying each of the different objects in operation S430.

That is, FIG. 9 illustrates an example of setting pixels to be displayed according to story data using the display driver 220, and as illustrated in an upper diagram of FIG. 9, operation S430 may include an operation of setting pixels in a continuous vertical direction for displaying an object on the left based on a vertical symmetry line and pixels in a continuous vertical direction for displaying an object on the right based on the vertical symmetry line, with respect to different two objects that face and thereby converse with each other in a horizontal direction based on the vertical symmetry line.

Also, depending on embodiments, as illustrated in a lower diagram of FIG. 9, operation S430 may include an operation of setting, using the display driver 220, pixels in a continuous horizontal direction for displaying an object on an upper side based on a horizontal symmetry line and pixels in a continuous horizontal direction for displaying an object on a lower side based on the horizontal symmetry line, with respect to different two objects that face and thereby converse with each other in a vertical direction based on the horizontal symmetry line according to story data.

In operation S440, the display driver 220 displays a speaking object on the set pixels when the different two objects converse with each other according to story data based on the video attribute information.

That is, FIG. 10 illustrates an example of alternately displaying corresponding pixels when different objects on the left and the right converse with each other according to an embodiment of the present invention. As illustrated in FIG. 10, when different objects on the left and the right converse with each other according to story data based on video attribute information through the display driver 220, operation S440 may include an operation of displaying pixels set on the left when an object on the left based on a vertical symmetry line speaks, displaying pixels set on the right when an object on the right based on the vertical symmetry line speaks, and simultaneously displaying the pixels set on the left and the pixels set on the right based on a symmetry line including a diagonal line when the objects on the left and the right simultaneously speak.

According to embodiments of the present invention, when objects on the left and the right based on a symmetry line, including a diagonal line, of a screen converse with each other, a video effect in conversation may be enhanced by displaying only the object on the left on the screen when the object on the left speaks and by displaying only the object on the right on the screen when the object on the right speaks.

Also, FIG. 11 illustrates an example of alternately displaying corresponding pixels when different objects on an upper side and a lower side converse with each other according to an embodiment of the present invention. Depending on embodiments, as illustrated in FIG. 11, when different objects on an upper side and a lower side converse with each other according to story data based on video attribute information through the display driver 220, operation S440 may include an operation of displaying pixels set on the upper side when an object on the upper side based on a horizontal symmetry line speaks, displaying pixels set on the lower side when an object on the lower side based on the horizontal symmetry line speaks, and simultaneously displaying the pixels set on the upper side and the pixels set on the lower side based on a symmetry line including a diagonal line when the objects on the upper side and the lower side simultaneously speak.
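
The alternation described for operations S430 and S440 reduces to a simple selection rule, sketched below; the region objects and speaker flags are illustrative stand-ins for whatever the story data actually provides:

    def regions_to_display(left_speaking: bool, right_speaking: bool,
                           left_region, right_region):
        """Select which set pixel regions to display for a two-party conversation.

        Only the speaking object's region is displayed; both regions are
        displayed when both objects speak at the same time. The same rule
        applies to upper/lower regions when a horizontal symmetry line is used.
        """
        regions = []
        if left_speaking:
            regions.append(left_region)
        if right_speaking:
            regions.append(right_region)
        return regions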

When it is determined that the different objects face each other according to story data based on video attribute information in operation S420, the display driver 220 sets pixels for displaying each of the facing objects in operation S450, which is illustrated in FIG. 9.

That is, as illustrated in FIG. 9, in operation S450, the display driver 220 may set pixels in a continuous vertical direction for displaying an object on the left based on a vertical symmetry line and pixels in a continuous vertical direction for displaying an object on the right based on the vertical symmetry line, with respect to the different two objects that face each other in a horizontal direction at a predetermined angle based on the vertical symmetry line according to story data.

Also, depending on embodiments, as illustrated in FIG. 9, in operation S450, the display driver 220 may set pixels in a continuous horizontal direction for displaying an object on an upper side based on a horizontal symmetry line and pixels in a continuous horizontal direction for displaying an object on a lower side based on the horizontal symmetry line, with respect to the different two objects that face each other in a vertical direction at a predetermined angle based on the horizontal symmetry line according to story data.

In operation S460, the display driver 220 transfers, to the tilt driver 210, a tilting control signal for controlling the set pixels to face each other at the predetermined angle based on a symmetry line including a diagonal line.

That is, operation S460 may include an operation of transferring, to the tilt driver 210, a tilting control signal for controlling pixels in a continuous vertical direction for displaying an object on the left based on a vertical symmetry line to be tilted toward a right direction at a predetermined angle and a tilting control signal for controlling pixels in a continuous vertical direction for displaying an object on the right based on the vertical symmetry line to be tilted toward a left direction at the predetermined angle, so that the different two objects face each other in a horizontal direction at the predetermined angle based on the vertical symmetry line according to story data.

Also, depending on embodiments, operation S460 may also include an operation of transferring, using the display driver 220 to the tilt driver 210, a tilting control signal for controlling pixels in a continuous horizontal direction for displaying an object on an upper side based on a horizontal symmetry line to be tilted toward a downward direction at a predetermined angle and a tilting control signal for controlling pixels in a continuous horizontal direction for displaying an object on a lower side based on the horizontal symmetry line to be tilted toward an upward direction at the predetermined angle, so that the different two objects face each other in a vertical direction at the predetermined angle based on the horizontal symmetry line according to story data.

As illustrated in FIG. 12, in operation S470, the tilt driver 210 performs tilting processing in response to the tilting control signal so that pixels for displaying different objects on both sides based on a symmetry line, including a diagonal line, of a screen face each other at a predetermined angle based on that symmetry line.

That is, FIG. 12 illustrates an example of tilting a screen so that different objects face each other based on a symmetry line according to an embodiment of the present invention. As illustrated in FIG. 12, in operation S470, the tilt driver 210 controls the screen to be tilted so that pixels in a continuous vertical direction for displaying an object on the left based on a vertical symmetry line are turned toward a right direction at a predetermined angle, and pixels in a continuous vertical direction for displaying an object on the right based on the vertical symmetry line are turned toward a left direction at the predetermined angle, so that the different objects face each other in a horizontal direction at the predetermined angle based on the symmetry line, for example, the vertical symmetry line, according to story data.

According to embodiments of the present invention, by tilting the screen so that the pixels on the left displaying one of two objects based on a vertical symmetry line are turned toward a right direction at a predetermined angle and the pixels on the right are turned toward a left direction at the predetermined angle, to match that the two objects face each other in a horizontal direction at the predetermined angle based on the vertical symmetry line according to story data, it is possible to configure a display screen that provides a user with a 3D effect.

Also, depending on embodiments, as illustrated in FIG. 12, operation S470 may include an operation of controlling, using the tilt driver 210, displays in a continuous horizontal direction for displaying an object on an upper side based on a horizontal symmetry line to be tilted toward a downward direction at a predetermined angle and controlling displays in a continuous horizontal direction for displaying an object on a lower side based on the horizontal symmetry line to be tilted toward an upward direction at the predetermined angle, so that the different two objects may face each other in a vertical direction at the predetermined angle based on the horizontal symmetry line according to story data.

According to embodiments of the present invention, displays on an upper side for displaying different two objects may be tilted toward a downward direction at a predetermined angle and displays on a lower side may be tilted toward an upward direction at the predetermined angle based on a horizontal symmetry line, to match that the different two objects face each other in a vertical direction at a predetermined angle based on a horizontal symmetry line according to story data.

Meanwhile, FIG. 13 illustrates an example of tilting screen devices for displaying facing objects to face each other when displays are configured as the screen devices, respectively, according to an embodiment of the present invention. As illustrated in FIG. 13, when the display 110 is configured as a single display screen by gathering at least a predetermined number of screen devices, such as a TV receiver or an LCD monitor, operation S470 may also include an operation of controlling, using the tilt driver 210, a screen device on the left for displaying different two objects based on a vertical symmetry line to be tilted toward a right direction at a predetermined angle and controlling a screen device on the right to be tilted toward a left direction at the predetermined angle to match that the different two objects based on the vertical symmetry line face each other in a horizontal direction at the predetermined angle.

Also, depending on embodiments, as illustrated in FIG. 13, operation S470 may include an operation of controlling, using the tilt driver 210, a screen device in a continuous horizontal direction for displaying an object on an upper side based on a horizontal symmetry line to be tilted toward a downward direction at a predetermined angle and controlling a screen device in a continuous horizontal direction for displaying an object on a lower side based on the horizontal symmetry line to be tilted toward an upward direction at the predetermined angle, so that the different two objects face each other in a vertical direction at the predetermined angle based on the horizontal symmetry line in response to a tilting control signal.

Meanwhile, although embodiments of the present invention are described based on a conversation between users positioned on both sides based on a symmetry line, including a diagonal line, of a screen, the present invention is not limited thereto and thus, the same principle may be applied to a conversation among a plurality of users. That is, in the case of a conversation among a plurality of users, locations of users on an actual background space may be mapped on a screen. In this case, pixels for displaying each user may be configured to be appropriately tilted.
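
For a multilateral conversation, one possible reading of the mapping mentioned above is to turn each participant's pixel group toward the centroid of all mapped positions; the geometry below is purely illustrative and not prescribed by the disclosure:

    import math
    from typing import List, Tuple

    def tilt_directions(positions: List[Tuple[float, float]]) -> List[float]:
        """For each mapped user position on the screen, return the direction
        (in degrees) toward the centroid of all positions. The tilt driver can
        then turn that user's pixel group in this direction at the
        predetermined angle."""
        cx = sum(x for x, _ in positions) / len(positions)
        cy = sum(y for _, y in positions) / len(positions)
        return [math.degrees(math.atan2(cy - y, cx - x)) for x, y in positions]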

As described above, according to embodiments of the present invention, there may be provided a video display method and device that may configure a display screen capable of providing a vivid 3D effect by moving a video display location according to a motion of an object within a video and displaying videos of two objects to face each other at a predetermined angle when the two objects face each other within the video.

Although a few exemplary embodiments of the present invention have been shown and described, the present invention is not limited to the described exemplary embodiments. Instead, it would be appreciated by those skilled in the art that changes may be made to these exemplary embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

[Industrial Applicability]

The present invention may be applicable to a video display method and device that may configure a display screen capable of providing a vivid 3D effect by moving a video display location in response to a motion of an object within a video and displaying videos of two objects to face each other at a predetermined angle when the two objects face each other within the video.

Claims (13)

  1. A client device for displaying content in a cloud environment, the client device comprising:
    a display configured to display video data on pixels of a predetermined area;
    an attribute information creator configured to create video attribute information comprising photographing data according to a camera direction and a photographing location with respect to the video data to be displayed on the display;
    a display driver configured to set sequential frames to be displayed along a temporal flow according to a motion of an object with respect to the video data based on the video attribute information, to make the set sequential frames correspond to the pixels, respectively, along a camera movement direction or an object movement direction in a background space, and to sequentially display the set sequential frames; and
    a video storage configured to store at least one of the video data and the video attribute information.
  2. The client device of claim 1, wherein the display driver is further configured to set sequential frames F1, F2, F3, ... , Fn-1, and Fn to be displayed along the temporal flow according to the motion of the object with respect to the video data, and to make the set sequential frames F1, F2, F3, ... , Fn-1, and Fn correspond to pixels S1, S2, S3, ... , Sn-1, and Sn of the predetermined area, respectively, along the camera movement direction or the object movement direction in the background space, and
    is further configured to display the sequential frame F1 on the pixel S1 in a time t1, to display the sequential frame F2 on the pixel S2 in a time t2, to display the sequential frame F3 on the pixel S3 in a time t3, and continuously using the same method, to display the sequential frame Fn-1 on the pixel Sn-1 in a time tn-1 and to display the sequential frame Fn on the pixel Sn in a time tn.
  3. The client device of claim 2, wherein the display driver is further configured to set the pixels S1 and S2, and continuously the pixels Sn-1 and Sn to have a partially overlapping area, and to display the sequential frames F1 and F2 to have a partially overlapping area on the pixels S1 and S2 and continuously, to display the sequential frames Fn-1 and Fn to have a partially overlapping area on the pixels Sn-1 and Sn.
  4. The client device of claim 1, wherein the attribute information creator is configured to create the video attribute information comprising at least one of story data according to the motion of the object and caption data according to a story development, with respect to the video data.
  5. The client device of claim 1, wherein the attribute information creator is configured to create the video attribute information at a time of creating the video data and insert the created video attribute information into a predetermined interval of the video data, or to receive information associated with the video data and create the video attribute information, and to separately store the created video attribute information in the video storage.
  6. A client device for displaying content in a cloud environment, the client device comprising:
    a display configured to display video data on pixels of a predetermined area;
    a tilt driver configured to drive a tilting in response to a predetermined tilting control signal with respect to the pixels of the predetermined area;
    an attribute information creator configured to create video attribute information comprising at least one of story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to the video data to be displayed on the display;
    a display driver configured to, based on the video attribute information, set pixels for displaying each of mutually conversing objects when the different objects converse with each other or to set pixels for displaying each of facing objects when the different objects face each other, and to alternately display a speaking object on the set pixels or to transfer, to the tilt driver, a tilting control signal for controlling the pixels for displaying the facing objects to face each other at a predetermined angle; and
    a video storage configured to store at least one of the video data and the video attribute information.
  7. The client device of claim 6, wherein the display driver is further configured to set pixels for displaying an object on one side and pixels for displaying an object on another side based on a horizontal symmetry line, a vertical symmetry line, or a diagonal symmetry line, with respect to two objects that converse with each other according to the story data or with respect to a plurality of objects in a multilateral conversation, based on the video data.
  8. The client device of claim 7, wherein the display driver is configured to display pixels set on the left of the display when an object on the left based on the vertical symmetry line speaks according to the story data based on the video data, to display pixels set on the right of the display when an object on the right based on the vertical symmetry line speaks, and to simultaneously display the pixels set on the left and the pixels set on the right when the object on the left and the object on the right simultaneously speak.
  9. The client device of claim 7, wherein the display driver is further configured to display pixels set on an upper side of the display when an object on an upper side based on the horizontal symmetry line speaks according to the story data based on the video data, to display pixels on a lower side of the display when an object on a lower side based on the horizontal symmetry line speaks, and to simultaneously display the pixels set on the upper side and the pixels set on the lower side when the object on the upper side and the object on the lower side simultaneously speak.
  10. The client device of claim 7, wherein the display driver is further configured to set displays for displaying an object on the left and displays for displaying an object on the right based on the vertical symmetry line with respect to two objects that converse with each other according to the story data based on the video data, and
    is further configured to transfer, to the tilt driver, a tilting control signal for controlling the displays for displaying the object on the left to be tilted toward a right direction at a predetermined angle and a tilting control signal for controlling the displays for displaying the object on the right to be tilted toward a left direction at the predetermined angle.
  11. The client device of claim 7, wherein the display driver is further configured to set displays for displaying an object on an upper side and displays for displaying an object on a lower side with respect to two different objects that face each other in a vertical direction at a predetermined angle, based on the horizontal symmetry line according to the story data based on the video data, and
    is further configured to transfer, to the tilt driver, a tilting control signal for controlling the displays for displaying the object on the upper side to be tilted toward a downward direction at a predetermined angle and a tilting control signal for controlling the displays for displaying the object on the lower side to be tilted toward an upward direction at the predetermined angle.
  12. A method of displaying content on a client device in a cloud environment, the method comprising:
    (a) creating video attribute information comprising at least one of story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to video data to be displayed on pixels of a predetermined area through the client device;
    (b) setting sequential frames to be displayed along a temporal flow according to a motion of an object with respect to the video data, based on the created video attribute information through the client device;
    (c) making the set sequential frames correspond to the pixels, respectively, along a camera movement direction or an object movement direction in a background space, through the client device; and
    (d) sequentially displaying the set sequential frames on the corresponding pixels along the temporal flow, through the client device.
  13. A method of displaying content on a client device in a cloud environment, the method comprising:
    (a) creating video attribute information comprising at least one of story data according to a motion of each object, caption data according to a story development, and photographing data according to a camera direction and a photographing location, with respect to video data to be displayed on pixels of a predetermined area through the client device;
    (b) setting pixels for displaying each object when two different objects converse with each other or when a plurality of objects converse with one another according to the story data based on the video attribute information, through the client device; and
    (c) displaying set pixels corresponding to a speaking object through the client device.
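
Read only as a hedged sketch of the conversing-object and tilting behavior recited in claims 6 to 11 above, the Python fragment below chooses which side of a vertically split screen to drive when the left object, the right object, or both objects speak, and computes symmetric tilt angles for two facing display regions. The Side, DialogueEvent, select_sides, tilt_signals, and run names, the 15-degree default, and the sign convention for the tilt angle are all hypothetical and are not taken from the disclosure.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Callable, Dict, Iterable, Set

    class Side(Enum):
        LEFT = "left"
        RIGHT = "right"

    @dataclass
    class DialogueEvent:
        """One story-data entry: which object(s) speak during this interval."""
        speakers: Set[Side]

    def select_sides(event: DialogueEvent) -> Set[Side]:
        """Drive the left pixels when the left object speaks, the right pixels when
        the right object speaks, and both sets when both objects speak at once."""
        return set(event.speakers)

    def tilt_signals(angle_deg: float) -> Dict[Side, float]:
        """Tilt the left region toward the right and the right region toward the
        left by the same predetermined angle (the sign convention is arbitrary)."""
        return {Side.LEFT: +angle_deg, Side.RIGHT: -angle_deg}

    def run(events: Iterable[DialogueEvent],
            show: Callable[[Side], None],
            tilt: Callable[[Side, float], None],
            facing_angle_deg: float = 15.0) -> None:
        """Apply the facing tilt once, then light up the speaking side for each event."""
        for side, angle in tilt_signals(facing_angle_deg).items():
            tilt(side, angle)
        for event in events:
            for side in select_sides(event):
                show(side)  # display the pixels assigned to this side's object

A horizontal or diagonal symmetry line would follow the same pattern with an upper/lower or corner-based enumeration in place of Side.
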
PCT/KR2014/012720 2013-12-24 2014-12-23 Client device and method for displaying contents in cloud environment WO2015099407A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020130162254A KR20150074455A (en) 2013-12-24 2013-12-24 Apparatus and method for displaying a video frame
KR10-2013-0162254 2013-12-24

Publications (1)

Publication Number Publication Date
WO2015099407A1 (en) 2015-07-02

Family

ID=53479182

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/012720 WO2015099407A1 (en) 2013-12-24 2014-12-23 Client device and method for displaying contents in cloud environment

Country Status (2)

Country Link
KR (1) KR20150074455A (en)
WO (1) WO2015099407A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040257444A1 (en) * 2003-06-18 2004-12-23 Matsushita Electric Industrial Co., Ltd. Video surveillance system, surveillance video composition apparatus, and video surveillance server
US20060282855A1 (en) * 2005-05-05 2006-12-14 Digital Display Innovations, Llc Multiple remote display system
KR20120121627A (en) * 2011-04-27 2012-11-06 경북대학교 산학협력단 Object detection and tracking apparatus and method thereof, and intelligent surveillance vision system using the same
KR20130010657A (en) * 2011-07-19 2013-01-29 김영대 A video wall system using network synchronized rendering to display multiple sources
US20130329129A1 (en) * 2009-08-17 2013-12-12 Adobe Systems Incorporated Systems and Methods for Moving Objects in Video by Generating and Using Keyframes

Also Published As

Publication number Publication date
KR20150074455A (en) 2015-07-02

Similar Documents

Publication Publication Date Title
WO2014014155A1 A head mounted display and method of outputting a content using the same
EP2346021B1 (en) Video frame formatting supporting mixed two and three dimensional video data communication
US7250978B2 (en) Multi-vision system and method of controlling the same
WO2010053246A2 (en) Apparatus and method for synchronizing stereoscopic image, and apparatus and method for providing stereoscopic image based on the same
US8564642B2 (en) Communication method, communication system, transmission method, transmission apparatus, receiving method and receiving apparatus
WO2012044130A2 (en) 3d display device using barrier and driving method thereof
DeFanti et al. The OptIPortal, a scalable visualization, storage, and computing interface device for the OptiPuter
KR20050028816A (en) Using packet transfer for driving lcd panel driver electronics
WO2010150936A1 (en) Stereoscopic image reproduction device and method for providing 3d user interface
WO2013094841A1 (en) Device for displaying multi-view 3d image using dynamic visual field expansion applicable to multiple observers and method for same
US20100177161A1 (en) Multiplexed stereoscopic video transmission
WO2007117485A3 (en) Screen sharing method and apparatus
WO2014014185A1 (en) Method of controlling display of display device by mobile terminal and mobile terminal for the same
WO2012093849A2 (en) 3d display device and method
WO2007123960A3 (en) System and method for enhancing eye gaze in a telepresence system
US7821550B2 (en) Remote image-pickup system, camera device, and card substrate
US20110050850A1 (en) Video combining device, video display apparatus, and video combining method
WO2011149266A2 (en) Stereoscopic display apparatus and method of driving the same cross-reference to related patent applications
WO2009088265A2 (en) Video processing system, video processing method, and video transfer method
WO2014010942A1 (en) Multi-projection system
WO2010150973A1 (en) Shutter glasses, method for adjusting optical characteristics thereof, and 3d display system adapted for the same
WO2010143820A2 (en) Device and method for providing a three-dimensional pip image
WO2010150976A2 (en) Receiving system and method of providing 3d image
WO2014010940A1 (en) Image correction system and method for multi-projection
CN102082944B (en) Meeting venue comprising telepresence control method, apparatus and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14875137

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14/10/2016)

122 Ep: pct application non-entry in european phase

Ref document number: 14875137

Country of ref document: EP

Kind code of ref document: A1