KR102002037B1 - Method for watching multi-view video, method for providing multi-view video and user device - Google Patents

Method for watching multi-view video, method for providing multi-view video and user device Download PDF

Info

Publication number
KR102002037B1
KR102002037B1 (application KR1020150153603A)
Authority
KR
South Korea
Prior art keywords
image
track
output
frame
time
Prior art date
Application number
KR1020150153603A
Other languages
Korean (ko)
Other versions
KR20170051913A (en)
Inventor
문준희
Original Assignee
주식회사 케이티
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 케이티 filed Critical 주식회사 케이티
Priority to KR1020150153603A priority Critical patent/KR102002037B1/en
Publication of KR20170051913A publication Critical patent/KR20170051913A/en
Application granted granted Critical
Publication of KR102002037B1 publication Critical patent/KR102002037B1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/349Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/349Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • H04N13/351Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking for displaying simultaneously

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A multi-view image viewing method using a multi-track image, performed by a user terminal, includes: receiving, from a streaming server, a multi-track image including a plurality of track images corresponding to the images of the respective viewpoints; outputting a first track image corresponding to a first viewpoint of the multi-track image; receiving, at a first output time during the output of the first track image, a request to output the image of a second viewpoint; and outputting a second track image corresponding to the second viewpoint of the multi-track image from the first output time.

Description

TECHNICAL FIELD [0001] The present invention relates to a multi-view video viewing method, a multi-view video providing method, and a user device.

The present invention relates to a multi-view image viewing method and a multi-view image providing method using a multi-track image.

A time slice service is a service that photographs a subject from various viewpoints using a plurality of cameras and edits the views of the subject at a specific moment into a single image.

The conventional time slice service has a problem in that, although the viewer sees the scene from various viewpoints, the image delivered to the viewer is an edited image that allows no freedom of viewpoint selection.

In Korean Patent Publication No. 2005-0121345, video data captured by a plurality of video cameras is edited and divided into a main-screen track portion constituting a main screen and sub-screen track portions constituting a plurality of sub-screens, and the data of the main-screen track portion and one or more sub-screen track portions are integrated into one or more pieces of data under time synchronization and switched according to the viewer's operation.

An object of the present invention is, when a request to output the image of a specific viewpoint is received from among a plurality of track images corresponding to the images of the respective viewpoints, to output the track image corresponding to that viewpoint, and, while that track image is being output, to output the track image corresponding to another viewpoint upon request. It is to be understood, however, that the technical scope of the present invention is not limited to the above-described technical problem, and other technical problems may exist.

According to a first aspect of the present invention, a multi-view image viewing method using a multi-track image performed by a user terminal includes: receiving, from a streaming server, a multi-track image including a plurality of track images corresponding to the images of the respective viewpoints; receiving a request to output the image of a first viewpoint of the multi-track image; outputting a first track image corresponding to the first viewpoint of the multi-track image; receiving, at a first output time during the output of the first track image, a request to output the image of a second viewpoint; and outputting a second track image corresponding to the second viewpoint from the first output time.

According to a second aspect of the present invention, a multi-view image providing method using a multi-track image performed in a streaming server includes: receiving the images of a plurality of viewpoints from a plurality of cameras photographing from different viewpoints; generating a first multi-track image including a plurality of track images corresponding to the respective images; transmitting the first multi-track image to a user terminal; receiving a request for an integrated image of each track from the user terminal; generating a second multi-track image including the integrated image; and transmitting the second multi-track image to the user terminal.

A user terminal providing a multi-view image using a multi-track image according to a third aspect of the present invention includes a transceiver for receiving, from a streaming server, a multi-track image including a plurality of track images corresponding to the images of the respective viewpoints, and an output unit for outputting the track image of the multi-track image corresponding to an input viewpoint, wherein the multi-track image includes a header area containing viewpoint information of each track image and time information of each frame included in each track image, and a frame area containing the plurality of track images.

The above-described means for solving the problem are merely exemplary and should not be construed as limiting the present invention. In addition to the exemplary embodiments described above, there may be additional embodiments described in the drawings and the detailed description of the invention.

According to any one of the above-described means, when a request to output the image of a specific viewpoint is received from among a plurality of track images corresponding to the images of the respective viewpoints, the track image corresponding to that viewpoint can be output, and when a request to output the image of another viewpoint is received while that track image is being output, the track image corresponding to the other viewpoint can be output.

FIG. 1 is a configuration diagram of a multi-view image providing system according to an embodiment of the present invention.
FIG. 2 is a view showing the images of a plurality of viewpoints received from a plurality of cameras according to an embodiment of the present invention.
FIG. 3 is a block diagram of the user terminal shown in FIG. 1 according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating the structure of a multi-track image according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating interface information mapped to each viewpoint according to an embodiment of the present invention.
FIG. 6 is a diagram for explaining a method of viewing a multi-view image in a user terminal according to an embodiment of the present invention.
FIG. 7 is a view for explaining a method of providing an integrated image of each track according to an embodiment of the present invention.
FIG. 8 is a flowchart illustrating a multi-view image viewing method using a multi-track image performed in a user terminal according to an embodiment of the present invention.
FIG. 9 is a flowchart illustrating a multi-view image providing method using a multi-track image performed in a streaming server according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can readily carry them out. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. For clarity of illustration, parts not related to the description are omitted, and like parts are denoted by like reference characters throughout the specification.

Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another part in between. Also, when a part is referred to as "comprising" an element, this means that it can further include other elements rather than excluding them, unless specifically stated otherwise.

In this specification, the term "part" includes a unit realized by hardware, a unit realized by software, and a unit realized by using both. Further, one unit may be implemented using two or more pieces of hardware, and two or more units may be implemented by one piece of hardware.

In this specification, some of the operations or functions described as being performed by the terminal or the device may be performed in the server connected to the terminal or the device instead. Similarly, some of the operations or functions described as being performed by the server may also be performed on a terminal or device connected to the server.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings.

FIG. 1 is a configuration diagram of a multi-view image providing system according to an embodiment of the present invention.

Referring to FIG. 1, the multi-view image providing system may include a user terminal 100 and a streaming server 110. However, since the multi-view image providing system of FIG. 1 is only one embodiment of the present invention, the present invention is not limited to FIG. 1, and the multi-view image providing system may be configured differently.

The streaming server 110 may be a server for providing multi-view images using multi-track images.

Specifically, the streaming server 110 can receive images of different viewpoints from a plurality of cameras. Here, the plurality of cameras may be cameras that photograph the same subject from various angles.

Referring to FIG. 2, the images of a plurality of viewpoints received from the plurality of cameras will first be described.

FIG. 2 is a view showing the images of a plurality of viewpoints received from a plurality of cameras according to an embodiment of the present invention.

Referring to FIG. 2, a plurality of cameras 200 can photograph the same subject from different viewpoints for a preset time.

For example, the first camera 201 can photograph the subject from a first viewpoint and transmit the photographed image 203 to the streaming server 110.

Likewise, the second camera 202 can photograph the subject from a second viewpoint and transmit the photographed image 204 to the streaming server 110.

Referring again to FIG. 1, the streaming server 110 generates a first multi-track image including a plurality of track images corresponding to the images of the plurality of viewpoints, and transmits the generated first multi-track image to the user terminal 100. Here, each of the plurality of track images, or each frame of each track image, may include position information of the camera (information on the point from which the image was photographed).

For example, the streaming server 110 maps the image photographed from the first viewpoint to a first track image, the image photographed from the second viewpoint to a second track image, and so on, up to the image photographed from the N-th viewpoint mapped to an N-th track image. In addition, the streaming server 110 may map the audio of the images to a single audio stream.
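The server-side mapping described above can be sketched as follows. This is a minimal illustration with hypothetical names and data; the patent does not specify a concrete data format.

```python
# Minimal sketch of the mapping described above: the image captured from
# viewpoint i becomes track image i, and the audio of the images is kept
# as a single shared stream. All names and values are hypothetical.

def build_multi_track(viewpoint_images, audio):
    """Map the image of each viewpoint to a numbered track image."""
    tracks = {i + 1: image for i, image in enumerate(viewpoint_images)}
    return {"tracks": tracks, "audio": audio}

bundle = build_multi_track(["view_1.h264", "view_2.h264", "view_3.h264"],
                           "main.aac")
print(bundle["tracks"][2])  # the second viewpoint maps to track 2
```

The point of the design is that every viewpoint stays addressable as its own track while sharing one audio stream, which is what later allows the terminal to switch tracks without interrupting playback.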

The user terminal 100 can receive, from the streaming server 110, a multi-track image including a plurality of track images corresponding to the images of the respective viewpoints. Here, the multi-track image may include a header area containing viewpoint information of each track image and time information of each frame included in each track image, and a frame area containing the plurality of track images.

The user terminal 100 may provide an interface through which the user can select the viewpoint of the image. The interface may be, for example, an interface for moving the viewpoint by touching and rotating the screen (hereinafter referred to as a slide touch input), or may include a separate button for moving the viewpoint.

For example, the user terminal 100 may set the switch to the track image of the first viewpoint as a slide touch input in the forward direction, and the switch to the track image of the second viewpoint as a slide touch input in the backward direction.

When the user terminal 100 receives a selection of one of the viewpoints from the user, it can output the track image of the multi-track image corresponding to the selected viewpoint. For example, the user terminal 100 may output the first track image corresponding to the first viewpoint when receiving a request to output the image of the first viewpoint.

The user terminal 100 may be a mobile terminal capable of wireless communication, and according to various embodiments of the present invention it may be any of various types of devices. For example, the user terminal 100 may be a portable terminal that can access a remote server via a network. Examples of such portable terminals, handheld wireless communication devices with guaranteed portability and mobility, include PCS (Personal Communication System), GSM (Global System for Mobile communications), PDC (Personal Digital Cellular), PHS (Personal Handyphone System), PDA (Personal Digital Assistant), IMT (International Mobile Telecommunication)-2000, CDMA (Code Division Multiple Access)-2000, W-CDMA (Wideband CDMA), and WiBro (Wireless Broadband Internet) terminals, smartphones, and tablet PCs. However, the user terminal 100 is not limited to the devices shown in FIG. 1 or listed above.

Generally, the components of the multi-view image providing system of FIG. 1 are connected through a network 120. A network refers to a connection structure in which nodes such as terminals and servers can exchange information; examples include Wi-Fi, Bluetooth, the Internet, LAN (Local Area Network), wireless LAN, WAN (Wide Area Network), PAN (Personal Area Network), 3G, 4G, 5G, and LTE networks.

Hereinafter, the operation of each component of the multi-view image providing system of FIG. 1 will be described in more detail.

FIG. 3 is a block diagram of the user terminal 100 shown in FIG. 1 according to an embodiment of the present invention.

Referring to FIG. 3, the user terminal 100 may include a transceiver unit 300, an interface unit 310, an output unit 320, and a memory 330. However, the user terminal 100 shown in FIG. 3 is only one embodiment of the present invention, and various modifications are possible based on the components shown in FIG. 3.

The transmission/reception unit 300 may receive, from the streaming server 110, a first multi-track image including a plurality of track images corresponding to the images of the respective viewpoints.

The structure of the first multi-track image will be described with reference to FIG.

FIG. 4 is a diagram illustrating the structure of a multi-track image according to an embodiment of the present invention.

Referring to FIG. 4, the first multi-track image may include a header area 400 containing viewpoint information of each track image and time information of each frame included in each track image, and a frame area 401 containing a plurality of track images.

For example, the header area 400 of the first multi-track image may include file type information, the number of track images, viewpoint information of each track image (e.g., origin information of the viewpoint of each track), total frame count information, and time information of each track and each frame. Here, the file type information may include information identifying the file as a multi-track image containing a plurality of track images.

The frame area 401 includes a plurality of sub-frame areas 402, and each sub-frame area 402 may contain the frames of the same time (time interval) from each track image. In addition, each sub-frame area 402 stores time information and may contain the frames of each track image and the audio data corresponding to the same time (time interval).

For example, the first sub-frame area 402-1 may contain N frames 403, one extracted from each of the first through N-th track images. Here, the N frames 403 are frames corresponding to the same first time (time interval). In addition, the first sub-frame area 402-1 may contain first audio data corresponding to the first time (time interval).
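The container layout described above can be sketched with the following illustrative structures. The field names and example values are hypothetical; the patent defines the areas conceptually, not as a concrete byte layout.

```python
# Sketch of the multi-track container: a header area with per-track
# viewpoint information, and a frame area made of sub-frame areas, each
# grouping the same-time frame of every track plus that interval's audio.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubFrameArea:
    """One sub-frame area: every track's frame for one time interval."""
    time: float            # shared time (time interval) of this area
    frames: List[str]      # one frame per track; index i -> track i+1
    audio: str = ""        # audio data for the same interval

@dataclass
class MultiTrackImage:
    """Header area plus frame area of the first multi-track image."""
    file_type: str         # identifies the file as a multi-track image
    num_tracks: int
    view_info: List[str]   # viewpoint information of each track
    total_frames: int
    frame_area: List[SubFrameArea] = field(default_factory=list)

# A 2-track image with two sub-frame areas, following the description above.
mti = MultiTrackImage(
    file_type="multi-track", num_tracks=2, view_info=["front", "side"],
    total_frames=4,
    frame_area=[SubFrameArea(0.0, ["f1_t1", "f1_t2"], "a1"),
                SubFrameArea(1.0, ["f2_t1", "f2_t2"], "a2")])
print(len(mti.frame_area[0].frames))  # each area holds one frame per track
```

Grouping frames by time rather than by track is the design choice that makes viewpoint switching cheap: all candidate frames for a given instant are co-located.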

Referring again to FIG. 3, the interface unit 310 may receive a selection of one of the viewpoints. For example, the interface unit 310 may receive a request to output the image of a specific viewpoint through a touch-direction input by the user.

Referring to FIG. 5, the interface information mapped to each viewpoint will be described briefly.

FIG. 5 is a diagram illustrating interface information mapped to each viewpoint according to an embodiment of the present invention.

Referring to FIG. 5, the interface unit 310 may set the switch to the track image of the first viewpoint as a slide touch input in the forward direction, the switch to the track image of the second viewpoint as a slide touch input in the backward direction, the switch to the track image of the third viewpoint as a slide touch input in the up direction, and the switch to the track image of the fourth viewpoint as a slide touch input in the down direction.
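The direction-to-viewpoint mapping set by the interface unit amounts to a small lookup table. The sketch below is hypothetical; the patent does not specify how the gesture handling is implemented.

```python
# Hypothetical lookup from slide-touch direction to the target track,
# following the four directions described above for the interface unit.
DIRECTION_TO_TRACK = {
    "forward": 1,   # switch to the track image of the first viewpoint
    "backward": 2,  # second viewpoint
    "up": 3,        # third viewpoint
    "down": 4,      # fourth viewpoint
}

def track_for_slide(direction):
    """Return the track to switch to for a given slide-touch direction."""
    return DIRECTION_TO_TRACK[direction]

print(track_for_slide("up"))  # an upward slide selects the third viewpoint
```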

For example, the frame area 500 of the first multi-track image is composed of three sub-frame areas 501 to 503, and each of the sub-frame areas 501 to 503 contains one frame from each of four track images.

The user terminal 100 may map the frames 504, 508, and 512 of the first track image corresponding to the first viewpoint, included in the sub-frame areas 501 to 503, to the slide touch input 516 in the forward direction.

Also, the user terminal 100 may map the frames 505, 509, and 513 of the second track image corresponding to the second viewpoint, included in the sub-frame areas 501 to 503, to the slide touch input 517 in the backward direction.

In addition, the user terminal 100 may map the frames 506, 510, and 514 of the third track image corresponding to the third viewpoint, included in the sub-frame areas 501 to 503, to the slide touch input 518 in the up direction.

In addition, the user terminal 100 may map the frames 507, 511, and 515 of the fourth track image corresponding to the fourth viewpoint, included in the sub-frame areas 501 to 503, to the slide touch input 519 in the down direction.

The interface unit 310 may output the frame corresponding to the viewpoint selected by the user input, based on the interface information mapped to each frame.

Referring again to FIG. 3, the output unit 320 may output the track image of the first multi-track image corresponding to the viewpoint selected by the user.

The output unit 320 may output the first track image corresponding to the first viewpoint when a request to output the image of the first viewpoint is received.

The output unit 320 may output the frame of the first track image included in each sub-frame area based on the time information of each frame included in the header area of the first multi-track image.

When a request to output the image of a second viewpoint is received at a first output time while the first track image is being output, the output unit 320 may output the second track image corresponding to the second viewpoint of the first multi-track image from the first output time.

Based on the time information of each frame included in the header area of the first multi-track image, the output unit 320 may output, from among the frames of the second track image included in each sub-frame area, the frames after the first output time.
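Continuing playback from the first output time after a viewpoint switch can be sketched as follows, modeling each sub-frame area as a hypothetical (time, frames) pair.

```python
# Sketch of the switch described above: given the sub-frame areas as
# (time, [frame_of_track_1, frame_of_track_2, ...]) pairs, return the
# frames of the newly selected track from the output time onward, so the
# new viewpoint picks up exactly where the old one left off.

def frames_after_switch(sub_frame_areas, new_track, output_time):
    return [frames[new_track - 1]
            for time, frames in sub_frame_areas
            if time >= output_time]

areas = [(0.0, ["a1", "b1"]), (1.0, ["a2", "b2"]), (2.0, ["a3", "b3"])]
# The viewer watches track 1, then requests track 2 at output time 1.0:
print(frames_after_switch(areas, 2, 1.0))  # ['b2', 'b3']
```

Because every sub-frame area already holds one frame per track for the same instant, the switch needs no seeking or re-buffering of a separate stream.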

If a request to output an integrated image of each track is input at a second output time during the output of the second track image, the transmission/reception unit 300 may transmit a request for the integrated image to the streaming server 110. Here, the request for the integrated image may include the currently output track and the current output time.

Thereafter, the output unit 320 may receive a second multi-track image including the integrated image from the streaming server 110 and output the integrated image of the second multi-track image. Here, the second multi-track image includes a plurality of track images, and the second track image (corresponding to the currently output track transmitted from the transmission/reception unit 300 to the streaming server 110) includes an integrated image containing the frames of the respective track images corresponding to the second output time.

The memory 330 stores data input to and output from the components within the user terminal 100, and data exchanged between the user terminal 100 and components outside the user terminal 100. For example, the memory 330 may store a multi-track image including a plurality of track images received from the streaming server 110. Examples of the memory 330 include a hard disk drive, ROM (Read Only Memory), RAM (Random Access Memory), flash memory, and a memory card in or attached to the user terminal 100.

Those skilled in the art will appreciate that the transceiver 300, the interface unit 310, the output unit 320, and the memory 330 may be separately implemented, or one or more of them may be integrated.

FIG. 6 is a diagram for explaining a method of viewing a multi-view image in a user terminal according to an embodiment of the present invention.

Referring to FIG. 6, each of the track images 600 contains an image photographed from a different viewpoint. For example, the first track image 601 may contain the image photographed from the first viewpoint, and the second track image 602 may contain the image photographed from the second viewpoint.

For example, when the user terminal 100 is requested to output the image of the second viewpoint of a multi-track image including the plurality of track images 600, it may output the frames 603 included in the second track image 602 corresponding to the second viewpoint.

When a request to output the image of the first viewpoint is received at a first output time 605 while the second track image is being output, the user terminal 100 may output the frames 604 of the first track image from the first output time 605.

FIG. 7 is a view for explaining a method of providing an integrated image of each track according to an embodiment of the present invention.

Referring to FIG. 7, when the user terminal 100 is requested to output the integrated image of each track at a second output time 708 during the output of the third track image 700, it may request the integrated image from the streaming server 110.

When the streaming server 110 receives the request for the integrated image from the user terminal 100, it can extract, from the tracks other than the currently output track (the third track), the frames corresponding to the second output time 708.

Specifically, the streaming server 110 extracts the frame 704 of the first track image corresponding to the second output time 708 from the first track 701, the frame 705 of the second track image corresponding to the second output time 708 from the second track 702, and the frame 706 of the fourth track image corresponding to the second output time 708 from the fourth track 703.

The streaming server 110 may insert the plurality of extracted frames after the frame corresponding to the second output time 708 of the currently output third track image.

The streaming server 110 generates a multi-track image including the third track image 700, into which the integrated image 707 containing the plurality of extracted frames has been inserted, and the track images 701, 702, and 703 other than the third track image, and transmits the generated multi-track image to the user terminal 100.

The user terminal 100 may output, from the second output time 708, the integrated image 707 of the multi-track image received from the streaming server 110.
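The integrated-image flow of FIG. 7 can be sketched as follows, again modeling each sub-frame area as a hypothetical (time, frames) pair; the track numbers and frame labels are illustrative.

```python
# Sketch of FIG. 7: at the requested output time, collect the frame of
# every track other than the currently output one, and splice those frames
# into the current track right after its own frame at that time. The result
# shows the same instant from every viewpoint, time-slice style.

def build_integrated_track(sub_frame_areas, current_track, output_time):
    out = []
    for time, frames in sub_frame_areas:
        out.append(frames[current_track - 1])
        if time == output_time:
            # insert the same instant as seen from all other viewpoints
            out.extend(f for i, f in enumerate(frames)
                       if i != current_track - 1)
    return out

areas = [(0.0, ["a1", "b1", "c1"]), (1.0, ["a2", "b2", "c2"])]
# Watching track 3, the viewer requests the integrated image at time 1.0:
print(build_integrated_track(areas, 3, 1.0))  # ['c1', 'c2', 'a2', 'b2']
```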

FIG. 8 is a flowchart illustrating a multi-view image viewing method using a multi-track image performed in a user terminal according to an exemplary embodiment of the present invention.

The multi-view image viewing method according to the embodiment shown in FIG. 8 includes steps processed in a time-series manner in the user terminal 100 and the streaming server 110 according to the embodiments shown in FIGS. 1 to 7. Accordingly, the contents described with respect to the user terminal 100 and the streaming server 110 in FIGS. 1 to 7 also apply to the multi-view image viewing method according to the embodiment shown in FIG. 8.

Referring to FIG. 8, in step S801, the user terminal 100 may receive, from the streaming server 110, a multi-track image including a plurality of track images corresponding to the images of the respective viewpoints. Here, the multi-track image may include a header area containing viewpoint information of each track image and time information of each frame included in each track image, and a frame area containing the plurality of track images. The frame area includes a plurality of sub-frame areas, and each sub-frame area may contain the frames of the same time from each track image.

In step S803, the user terminal 100 may receive from the user a request to output the image of a first viewpoint of the multi-track image.

In step S805, the user terminal 100 may output a first track image corresponding to the first viewpoint of the multi-track image.

In step S807, the user terminal 100 may receive a request to output the image of a second viewpoint at a first output time during the output of the first track image.

In step S809, the user terminal 100 may output a second track image corresponding to the second viewpoint of the multi-track image from the first output time.

Although not shown in FIG. 8, in step S805, the user terminal 100 may output the frames of the first track image included in each sub-frame area based on the time information of each frame included in the header area.

Although not shown in FIG. 8, in step S809, based on the time information of each frame included in the header area, the user terminal 100 may output, from among the frames of the second track image included in each sub-frame area, the frames after the first output time.

Although not shown in FIG. 8, after step S809, the user terminal 100 may receive an output request for the integrated image of each track at the second output time during the output of the second track image.

Although not shown in FIG. 8, after step S809, the user terminal 100 may transmit a request to output the integrated image to the streaming server 110.

Although not shown in FIG. 8, after step S809, the user terminal 100 may receive a multi-track image from the streaming server 110. Here, the multi-track image may include a plurality of track images, and the second track image may include an integrated image containing the frames of the respective track images corresponding to the second output time.

Although not shown in FIG. 8, after step S809, the user terminal 100 may output an integrated image of the multi-track image.

In the above description, steps S801 to S809 may be further divided into further steps or combined into fewer steps, according to an embodiment of the present invention. Also, some of the steps may be omitted as necessary, and the order between the steps may be changed.

FIG. 9 is a flowchart illustrating a multi-view image providing method using a multi-track image performed in a streaming server according to an exemplary embodiment of the present invention.

The multi-view image providing method according to the embodiment shown in FIG. 9 includes steps processed in a time-series manner in the user terminal 100 and the streaming server 110 according to the embodiments shown in FIGS. 1 to 8. Therefore, the contents described with respect to the user terminal 100 and the streaming server 110 in FIGS. 1 to 8 also apply to the multi-view image providing method according to the embodiment shown in FIG. 9.

Referring to FIG. 9, in step S901, the streaming server 110 can receive the images of a plurality of viewpoints from a plurality of cameras photographing from different viewpoints.

In step S903, the streaming server 110 may generate a first multi-track image including a plurality of track images corresponding to the images of the plurality of viewpoints. Here, the first multi-track image may include a header area containing viewpoint information of each track image and time information of each frame included in each track image, and a frame area containing the plurality of track images. The frame area includes a plurality of sub-frame areas, and each sub-frame area may contain the frames of the same time from each track image.

In step S905, the streaming server 110 may transmit the first multi-track image generated in step S903 to the user terminal 100.

In step S907, the streaming server 110 may receive a request for an integrated image of each track from the user terminal 100. Here, the request for the integrated image may include the currently output track and the current output time.

In step S909, the streaming server 110 may generate a second multi-track image including an integrated image.

In step S911, the streaming server 110 may transmit the second multi-track image generated in step S909 to the user terminal 100.

Although not shown in FIG. 9, in step S909, the streaming server 110 may extract, from the tracks other than the currently output track, the frames corresponding to the current output time.

Although not shown in FIG. 9, in step S909, the streaming server 110 may insert the plurality of extracted frames after the frame corresponding to the current output time of the currently output track.

In the above description, steps S901 to S911 may be further divided into additional steps, or combined in fewer steps, according to an embodiment of the present invention. Also, some of the steps may be omitted as necessary, and the order between the steps may be changed.

One embodiment of the present invention may also be embodied in the form of a recording medium including computer-executable instructions, such as program modules, that are executed by a computer. Computer-readable media can be any available media that can be accessed by a computer, and include both volatile and nonvolatile media, and removable and non-removable media. In addition, computer-readable media may include both computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.

The foregoing description of the present invention is for illustrative purposes only, and those of ordinary skill in the art will understand that various changes and modifications may be made without departing from the spirit or essential characteristics of the present invention. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. For example, each component described as a single entity may be implemented in a distributed manner, and components described as distributed may also be implemented in a combined form.

It is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

100: user terminal
110: streaming server

Claims (16)

A multi-view image viewing method using a multi-track image, performed by a user terminal, the method comprising:
Receiving a multi-track image including a plurality of track images corresponding to images at respective viewpoints from a streaming server;
Receiving an output request of an image at a first viewpoint of the multi-track image; And
Outputting a first track image corresponding to the first viewpoint among the plurality of track images;
Receiving an output request of an image at a second viewpoint at a first output time during the output of the first track image; And
Outputting, from the first output time, a second track image of the multi-track image corresponding to the second viewpoint
Receiving an output request for an integrated image at a second output time during the output of the second track image;
Transmitting an output request for the integrated image to the streaming server and receiving the integrated image from the streaming server;
Further comprising the step of outputting the integrated image,
Wherein the integrated image is a video generated by sequentially arranging the frame corresponding to the second output time of the second track image and frames corresponding to the second output time of the tracks other than the second track image, the multi-view image viewing method.
The method according to claim 1,
Wherein the multi-track image includes a header area including view information of each track image and time information of each frame included in each track image, and a frame area including the plurality of track images, the multi-view image viewing method.
3. The method of claim 2,
Wherein the frame region includes a plurality of sub-frame regions, and each sub-frame region includes frames at the same time in each track image.
The method of claim 3,
Wherein the outputting of the first track image comprises:
Outputting the frame of the first track image included in each of the sub-frame areas based on the time information of each frame included in the header area.
5. The method of claim 4,
Wherein the outputting of the second track image from the first output time comprises:
Outputting, based on the time information of each frame included in the header area, the frames of the second track image after the first output time among the frames of the second track image included in each sub-frame area, the multi-view image viewing method.
delete
delete
delete
A multi-view image providing method using a multi-track image, performed by a streaming server, the method comprising:
Receiving images of a plurality of viewpoints from a plurality of cameras that photograph from different viewpoints;
Generating a first multi-track image including a plurality of track images corresponding to the images of the plurality of viewpoints;
Transmitting the first multi-track image to a user terminal;
Receiving a request for an integrated image of each track from the user terminal;
Generating a second multi-track image including the integrated image; And
Transmitting the second multi-track image to the user terminal
Wherein the request for the integrated video includes information on a current output track and a current output time,
Wherein the integrated image is a video generated by sequentially arranging the frame corresponding to the current output time of the currently output track and frames corresponding to the current output time of the tracks other than the currently output track, the multi-view image providing method.
10. The method of claim 9,
Wherein the first multi-track image includes a header area including view information of each track image and time information of each frame included in each track image, and a frame area including the plurality of track images, the multi-view image providing method.
11. The method of claim 10,
Wherein the frame region includes a plurality of sub-frame regions, and each sub-frame region includes frames at the same time in each track image.
delete
The method of claim 9,
Wherein the generating of the second multi-track image including the integrated image comprises:
Extracting a plurality of frames corresponding to the current output time from the tracks other than the currently output track; And
Inserting the extracted plurality of frames after the frame corresponding to the current output time of the currently output track
, the multi-view image providing method.
A user terminal for providing a multi-view image using a multi-track image, the user terminal comprising:
A transmission / reception unit for receiving a multi-track image including a plurality of track images corresponding to images at each viewpoint from a streaming server;
An interface unit receiving a selection of one of the viewpoints; And
An output unit for outputting a track image of the multi-track image corresponding to the input viewpoint
Wherein the multi-track image includes a header area including view information of each track image and time information of each frame included in each track image, and a frame area including the plurality of track images,
Wherein, when an output request for the integrated image is received during the output of the track image corresponding to the input viewpoint, the transmission/reception unit transmits the output request for the integrated image to the streaming server and receives the integrated image from the streaming server,
Wherein the output unit outputs the received combined image,
Wherein the integrated image is a video generated by sequentially arranging the frame of the output track image corresponding to the output request time of the integrated image and frames of the track images other than the output track image corresponding to the output request time, the user terminal.
15. The user terminal of claim 14,
Wherein the frame region includes a plurality of sub-frame regions, and each sub-frame region includes a frame at the same time in each track image.
16. The user terminal of claim 15,
Wherein the output unit outputs a frame of the track image included in each sub-frame area based on time information of each frame included in the header area.
KR1020150153603A 2015-11-03 2015-11-03 Method for watching multi-view video, method for providing multi-view video and user device KR102002037B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150153603A KR102002037B1 (en) 2015-11-03 2015-11-03 Method for watching multi-view video, method for providing multi-view video and user device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150153603A KR102002037B1 (en) 2015-11-03 2015-11-03 Method for watching multi-view video, method for providing multi-view video and user device

Publications (2)

Publication Number Publication Date
KR20170051913A KR20170051913A (en) 2017-05-12
KR102002037B1 true KR102002037B1 (en) 2019-07-19

Family

ID=58740783

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150153603A KR102002037B1 (en) 2015-11-03 2015-11-03 Method for watching multi-view video, method for providing multi-view video and user device

Country Status (1)

Country Link
KR (1) KR102002037B1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102362513B1 (en) 2017-12-04 2022-02-14 주식회사 케이티 Server and method for generating time slice video, and user device
KR101950852B1 (en) * 2017-12-06 2019-02-21 서울과학기술대학교 산학협력단 The apparatus and method for using multi-view image acquisition camers
KR102313309B1 (en) * 2019-09-23 2021-10-15 김성현 Personalized live broadcasting system
KR20210110097A (en) * 2020-02-28 2021-09-07 삼성전자주식회사 The method for streaming a video and the electronic device supporting same

Citations (1)

Publication number Priority date Publication date Assignee Title
KR100430272B1 (en) * 1999-09-27 2004-05-06 엘지전자 주식회사 User profile and method of multi view display based on user preference

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8872888B2 (en) * 2010-10-01 2014-10-28 Sony Corporation Content transmission apparatus, content transmission method, content reproduction apparatus, content reproduction method, program and content delivery system

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
KR100430272B1 (en) * 1999-09-27 2004-05-06 엘지전자 주식회사 User profile and method of multi view display based on user preference

Also Published As

Publication number Publication date
KR20170051913A (en) 2017-05-12

Similar Documents

Publication Publication Date Title
US10778951B2 (en) Camerawork generating method and video processing device
US10334275B2 (en) Panoramic view customization
US10271082B2 (en) Video distribution method, video reception method, server, terminal apparatus, and video distribution system
EP3316589B1 (en) Video synchronization device and video synchronization method
US11729465B2 (en) System and method providing object-oriented zoom in multimedia messaging
KR102002037B1 (en) Method for watching multi-view video, method for providing multi-view video and user device
US10277832B2 (en) Image processing method and image processing system
US20150208103A1 (en) System and Method for Enabling User Control of Live Video Stream(s)
CN104301769B (en) Method, terminal device and the server of image is presented
CN104012106A (en) Aligning videos representing different viewpoints
KR20140118605A (en) Server and method for transmitting augmented reality object
KR20170086203A (en) Method for providing sports broadcasting service based on virtual reality
JP2020522926A (en) Method and system for providing virtual reality content using captured 2D landscape images
WO2017193830A1 (en) Video switching method, device and system, and storage medium
US20130097649A1 (en) Method and apparatus for providing image to device
US11924397B2 (en) Generation and distribution of immersive media content from streams captured via distributed mobile devices
CN114189696A (en) Video playing method and device
US20200213631A1 (en) Transmission system for multi-channel image, control method therefor, and multi-channel image playback method and apparatus
JP2014116922A (en) Video playback device and video distribution device
WO2015194082A1 (en) Image processing method and image processing system
CN112804471A (en) Video conference method, conference terminal, server and storage medium
KR101910609B1 (en) System and method for providing user selective view
KR102559011B1 (en) Method for providing virtual reality service, device and server
WO2015182034A1 (en) Image shooting method, image shooting system, server, image shooting apparatus, and image shooting program
KR102459768B1 (en) Cooperation broadcasting method using crowd sourcing, cooperation broadcasting server and system

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment
X701 Decision to grant (after re-examination)
GRNT Written decision to grant