WO2017046956A1 - Video system - Google Patents

Video system

Info

Publication number
WO2017046956A1
WO2017046956A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
unit
gazing point
user
communication
Prior art date
Application number
PCT/JP2015/076765
Other languages
English (en)
Japanese (ja)
Inventor
Lochlainn Wilson
Keiichi Seko
Yuka Kojima
Yamato Kaneko
Original Assignee
FOVE, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FOVE, Inc. filed Critical FOVE, Inc.
Priority to CN201580083269.3A priority Critical patent/CN108141559B/zh
Priority to KR1020187008945A priority patent/KR101971479B1/ko
Priority to PCT/JP2015/076765 priority patent/WO2017046956A1/fr
Priority to US15/267,917 priority patent/US9978183B2/en
Publication of WO2017046956A1 publication Critical patent/WO2017046956A1/fr
Priority to US15/963,476 priority patent/US20180247458A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/66Transforming electric information into light information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167Position within a video image, e.g. region of interest [ROI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video

Definitions

  • the present invention relates to a video system, and more particularly, to a video system including a head mounted display and a video generation device.
  • Patent Document 1 discloses an image generation apparatus and an image generation method capable of detecting a user's movement and displaying an image corresponding to the user's movement on a head-mounted display.
  • the head mounted display can display an image corresponding to the user's line-of-sight direction on the screen.
  • since the video displayed by the head mounted display is a moving image, its data amount is large, and if the video is transmitted as-is from the video generation device to the head mounted display, the image update may be delayed and the video may be interrupted.
  • in addition, high-quality monitors have become more common, and processing an even larger amount of video data is desired.
  • in principle, the video generation device and the head mounted display could be integrated.
  • however, since the head mounted display is worn by the user, a compact body is desired, and it is difficult to incorporate the video generation device into its housing. Therefore, in practice, the video generation device and the head mounted display are connected wirelessly or the like.
  • moreover, due to the large amount of video data, the provision of video to the user may stall.
  • the present invention has been made in view of such problems, and an object thereof is to provide a technique for a video system that can suppress stagnation of communication between a head mounted display and a video generation device.
  • the video generation device includes: a second communication unit that receives an image captured by the imaging unit from the head mounted display and transmits video to the head mounted display; a gazing point acquisition unit that acquires the user's gazing point on the video based on the image captured by the imaging unit; and a calculation unit that sets a predetermined area based on the gazing point acquired by the gazing point acquisition unit and, for the area outside the predetermined area, generates video with a smaller data amount per unit number of pixels than the video calculated for the predetermined area.
  • the video generation device may further include a communication determination unit that determines the communication environment between the first communication unit and the second communication unit, and the calculation unit may reduce the data amount of the video when the communication environment is bad compared with when it is good.
  • the communication determination unit may determine the communication environment based on information including the latest data of communication parameters including at least one of radio wave intensity, communication speed, data loss rate, throughput, noise status, or physical distance from the router.
  • the video generation device may further include a gazing point movement acquisition unit that acquires the movement of the user's gazing point based on the gazing point acquired by the gazing point acquisition unit, and the calculation unit may change at least one of the size and the shape of the predetermined area according to the movement of the gazing point.
  • the calculation unit may set the shape of the predetermined region to a shape having a long axis and a short axis, or a long side and a short side, and may set the long-axis or long-side direction of the predetermined region according to the direction of movement of the gazing point.
  • the calculation unit may generate an image in which the data amount per unit pixel number is continuously reduced outside the predetermined area as the distance from the gazing point increases.
  • any combination of the above-described constituent elements and a representation obtained by converting the expression of the present invention between a method, an apparatus, a system, a recording medium, a computer program, and the like are also effective as an aspect of the present invention.
  • FIG. 7A is a schematic diagram illustrating another example of the relationship between the X coordinate of the video display area and the data amount per unit pixel.
  • FIG. 7B is a schematic diagram showing another example of the relationship between the X coordinate of the video display area and the data amount per unit pixel.
  • FIG. 8 is a sequence diagram showing an example of processing of the video system according to the embodiment.
  • FIG. 9 is a flowchart showing an example of processing relating to the communication determination according to the embodiment.
  • FIG. 1 is a diagram schematically showing an overview of a video system 1 according to an embodiment.
  • the video system 1 according to the embodiment includes a head mounted display 100 and a video generation device 200. As shown in FIG. 1, the head mounted display 100 is used by being mounted on the head of a user 300.
  • the video generation device 200 generates a video that the head mounted display 100 presents to the user.
  • the video generation device 200 is a device capable of reproducing video, such as a stationary game machine, a portable game machine, a PC (Personal Computer), a tablet, a smartphone, a phablet, a video player, or a television.
  • the video generation device 200 is connected to the head mounted display 100 wirelessly or by wire. In the example illustrated in FIG. 1, the video generation device 200 is connected to the head mounted display 100 wirelessly.
  • the wireless connection between the video generation apparatus 200 and the head mounted display 100 can be realized by using a wireless communication technology such as known Wi-Fi (registered trademark) or Bluetooth (registered trademark).
  • transmission of video between the head mounted display 100 and the video generation device 200 is performed according to standards such as Miracast (trademark), WiGig (trademark), and WHDI (trademark).
  • the head mounted display 100 includes a housing 150, a wearing tool 160, and headphones 170.
  • the housing 150 accommodates an image display system such as an image display element for presenting video to the user 300, and a wireless transmission module such as a Wi-Fi module or a Bluetooth (registered trademark) module (not shown).
  • the wearing tool 160 fixes the head mounted display 100 on the head of the user 300.
  • the wearing tool 160 can be realized by, for example, a belt or a stretchable band.
  • the housing 150 is arranged at a position that covers the eyes of the user 300. For this reason, when the user 300 wears the head mounted display 100, the field of view of the user 300 is blocked by the housing 150.
  • the headphone 170 outputs the audio of the video reproduced by the video generation device 200.
  • the headphones 170 may not be fixed to the head mounted display 100.
  • the user 300 can freely attach and detach the headphones 170 even when the head mounted display 100 is worn using the wearing tool 160.
  • FIG. 2 is a block diagram illustrating an example of a functional configuration of the video system 1 according to the embodiment.
  • the head mounted display 100 includes a video presentation unit 110, an imaging unit 120, and a first communication unit 130.
  • the video generation device 200 includes a second communication unit 210, a communication determination unit 220, a gazing point acquisition unit 230, a gazing point movement acquisition unit 240, a calculation unit 250, and a storage unit 260.
  • the second communication unit 210 is connected to the head mounted display 100 by wireless or wired.
  • the second communication unit 210 receives an image captured by the imaging unit 120 from the head mounted display 100, and transmits video to the head mounted display 100.
  • “video” refers to video generated by the calculation unit 250 described later.
  • the gaze point acquisition unit 230 acquires the user's gaze point P on the video based on the image captured by the imaging unit 120.
  • the position of the gazing point P can be acquired using, for example, a known gaze detection technique.
  • the gazing point acquisition unit 230 acquires the relationship between the image display position and the reference point and moving point of the user's eyes as calibration information in advance.
  • the imaging unit 120 captures an image of the eye of the user 300 as in the calibration, and the gazing point acquisition unit 230 acquires positional information of the reference point and the moving point based on the image.
  • the gazing point acquisition unit 230 estimates the user's gazing point P on the video.
  • the “reference point” is a point that moves little relative to the head mounted display, such as the inner corner of the eye, and the “moving point” is a part, such as the iris or pupil, that moves depending on where the user 300 is looking.
  • gaze point P indicates a user's gaze point estimated by the gaze point acquisition unit 230.
  • FIGS. 4A and 4B are diagrams illustrating examples of the predetermined area A set by the calculation unit 250.
  • a case where the calculation unit 250 sets the region whose distance from the gazing point P is a or less as the predetermined area A will be described with reference to FIG. 4A.
  • the predetermined area A may be any closed region; FIG. 4A shows a circular example, and FIG. 4B a rectangular example.
  • when the predetermined area A is a simple shape, the computation required for the calculation unit 250 to reset the predetermined area A as the gazing point P moves can be reduced.
  • the visual acuity of the human eye is highest in the central-vision region including the fovea and decreases rapidly outside it; it is known that the human eye can resolve fine detail only within a range of about 5° around the fovea at best. Therefore, the calculation unit 250 may approximate the distance between the display pixels of the head mounted display 100 and the fovea of the eye of the user 300, and set as the predetermined area A the range on the video display area corresponding to the 5° central-vision region around the gazing point P of the user 300.
  • the specific size of the predetermined area A as seen by the user 300 may be determined by experiment, in view of the optical system employed with the liquid crystal monitor of the head mounted display 100 and the human visual characteristics described above (for example, central vision, age, viewing angle).
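As a rough illustration of how a 5° central-vision range could be mapped onto the display, the following sketch converts the angular range into a pixel radius. The function name, the 40 mm eye-to-display distance, and the 10 px/mm density are illustrative assumptions, not values from the embodiment:

```python
import math

def foveal_radius_px(eye_to_display_mm, pixels_per_mm, half_angle_deg=5.0):
    # Radius on the display plane subtended by the central-vision cone.
    radius_mm = eye_to_display_mm * math.tan(math.radians(half_angle_deg))
    return radius_mm * pixels_per_mm

# e.g. an assumed 40 mm effective viewing distance and 10 px/mm panel
a = foveal_radius_px(40.0, 10.0)  # roughly 35 px
```

In practice the optical path of the head mounted display distorts this simple geometry, which is why the text leaves the final size to experiment.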
  • FIG. 5 is a diagram illustrating a graph showing the relationship between the X coordinate of the video display area and the data amount D per unit pixel number.
  • the horizontal axis of the graph corresponds to the X coordinate of the video display area.
  • the vertical axis of the graph represents the data amount D per unit pixel number on a line parallel to the X axis including the gazing point P.
  • FIG. 5 shows an example in which the calculation unit 250 sets the range of the distance a from the gazing point P as the predetermined area A.
  • the calculation unit 250 extracts video data of a video to be presented to the user next from video data stored in the storage unit 260.
  • the calculation unit 250 may acquire external video data of the video generation device 200.
  • the calculation unit 250 processes the video data so that the data amount D per unit number of pixels is reduced where the X coordinate is less than (x − a) or greater than (x + a).
  • as a method for reducing the data amount, a known method such as compression by dropping high-frequency components of the video may be used. As a result, the video as a whole has a reduced data amount during communication.
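A toy sketch of producing less data per pixel outside the predetermined area: here the "compression" is simple gray-level quantization rather than the high-frequency-dropping method the text mentions, and all names and values are illustrative:

```python
def foveate(frame, gaze, a, levels_outside=16):
    """Keep full precision within distance `a` of the gaze point; coarsely
    quantize pixel values outside it (a stand-in for real compression).
    `frame` is a list of rows of 0-255 gray values."""
    gx, gy = gaze
    step = 256 // levels_outside
    out = []
    for y, row in enumerate(frame):
        new_row = []
        for x, v in enumerate(row):
            if (x - gx) ** 2 + (y - gy) ** 2 > a ** 2:
                v = (v // step) * step  # outside A: fewer levels per pixel
            new_row.append(v)
        out.append(new_row)
    return out
```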
  • the communication determination unit 220 determines a communication environment between the first communication unit 130 and the second communication unit 210.
  • the calculation unit 250 may reduce the data amount of the video when the communication environment is bad as compared with when the communication environment is good.
  • the calculation unit 250 may reduce the data amount D per unit number of pixels in the external area B according to the determination result of the communication environment. For example, the communication environment is divided into three stages C1, C2, and C3, in order from best to worst, and the storage unit 260 stores the data compression ratios used for each as E1, E2, and E3.
  • the communication determination unit 220 determines which of C1 to C3 corresponds to the communication environment.
  • the calculation unit 250 acquires the value of the data compression rate corresponding to the determination result from the storage unit 260 and compresses the video data in the external area B with the acquired data compression rate to generate a video.
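The three-stage scheme amounts to a lookup table keyed by the communication grade. In the sketch below the grading criterion (throughput), the thresholds, and the compression-ratio values standing in for E1 to E3 are all invented for illustration:

```python
# Assumed compression ratios standing in for E1..E3, keyed by grade C1..C3.
COMPRESSION_BY_GRADE = {"C1": 0.8, "C2": 0.5, "C3": 0.3}

def grade_environment(throughput_mbps):
    # Toy three-stage classifier; thresholds are illustrative.
    if throughput_mbps >= 50.0:
        return "C1"
    if throughput_mbps >= 20.0:
        return "C2"
    return "C3"

def compression_for(throughput_mbps):
    # Ratio the calculation unit would apply to the external area B.
    return COMPRESSION_BY_GRADE[grade_environment(throughput_mbps)]
```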
  • since the data amount of the video transmitted from the video generation device 200 to the head mounted display 100 is adjusted according to the communication environment, stalling of the video due to transfer delays and the like can be avoided.
  • since the image quality does not change in the vicinity of the gazing point P of the user 300, the discomfort given to the user 300 can be suppressed even when the data amount is reduced.
  • the video reflecting the information of the gazing point P of the user 300 captured by the imaging unit 120 can be provided to the user without delay.
  • the communication determination unit 220 may monitor communication parameters and determine whether the communication environment is good or bad based on them.
  • the communication determination unit 220 transmits a message for inquiring the communication status to the head mounted display 100.
  • the first communication unit 130 receives this message, acquires the communication parameters on the head mounted display 100 side, and transmits the acquired communication parameters to the video generation device 200.
  • the second communication unit 210 acquires communication parameters on the video generation device 200 side. Thereby, the communication determination unit 220 may determine whether the communication environment is good or bad based on the communication parameter received from the head mounted display 100 and the communication parameter acquired by the second communication unit 210.
  • the information including the latest data may be, for example, a value calculated by the communication determination unit 220 as a moving average over a certain number of past observations.
  • the calculation unit 250 can thus generate video with a data amount suited to the communication environment at that moment. Therefore, even in places where the communication environment is poor or changes easily, the frame rate of the video presented to the user can be maintained, and video that does not look strange to the user can be provided.
  • the gazing point movement acquisition unit 240 may acquire the movement of the gazing point P of the user 300 based on the gazing point P acquired by the gazing point acquisition unit 230.
  • the calculation unit 250 changes at least one of the size or the shape of the predetermined area A according to the movement of the gazing point P acquired by the gazing point movement acquisition unit 240.
  • FIG. 6 is a diagram illustrating an example of the movement of the gazing point P acquired by the gazing point movement acquisition unit 240 according to the embodiment.
  • FIG. 6 shows a state where the user's gazing point P has moved from P1 to P2.
  • the calculation unit 250 sets a predetermined region A with reference to the gazing point P acquired by the gazing point acquisition unit 230 and the movement of the gazing point P acquired by the gazing point movement acquisition unit 240.
  • the gazing point P is at the position P2, and the direction of movement of the gazing point is indicated by an arrow.
  • the predetermined area A does not need to be centered on the gazing point P. For example, as shown in FIG. 6, the calculation unit 250 may set the boundary of the predetermined area A so that it is not equidistant from the gazing point P2 but instead leaves a wide margin inside the area on the side of the moving direction of the gazing point P.
  • the head mounted display 100 can provide the user 300 with an image that maintains the image quality in a wide range including the direction in which the user 300 turns the line of sight.
  • the predetermined area A may be circular or rectangular as shown in FIGS. 4 (a) and 4 (b).
  • the calculation unit 250 may set the shape of the predetermined region A to a shape having a long axis and a short axis, or a long side and a short side, and may set the long-axis or long-side direction of the predetermined region according to the direction of movement of the gazing point P.
  • for example, the calculation unit 250 sets the shape of the predetermined area A to an ellipse based on the movement of the gazing point P acquired by the gazing point movement acquisition unit 240.
  • the calculation unit 250 may set the direction of movement of the gazing point P to be the major axis direction of the ellipse.
  • the gazing point P does not need to be the center of the ellipse, and the positional relationship between the gazing point P and the ellipse may be set so that the moving direction side of the movement of the gazing point P is widely within the ellipse.
  • thereby, the video presentation unit 110 can display video whose image quality is maintained over a wider range in the direction in which the gazing point P moves than in directions in which it moves little.
  • the shape of the predetermined region A set by the calculation unit 250 is not limited to the above-described ellipse as long as it has a long axis and a short axis, or a long side and a short side.
  • when the calculation unit 250 sets the shape of the predetermined area A to a rectangle and a compression method that compresses a plurality of pixels as one block is used, the computation of the overlap between the predetermined area A and the blocks lying on its boundary can be simplified compared with the case where the predetermined area A is an ellipse.
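A possible membership test for an elliptical predetermined area whose major axis follows the gaze-movement direction might look like the following sketch; the rotation convention and all parameter names are assumptions, not the patent's implementation:

```python
import math

def in_predetermined_region(pt, gaze, motion, a_major, b_minor):
    """True if `pt` lies inside an ellipse centered on the gazing point
    `gaze`, with its major axis along the gaze-movement vector `motion`."""
    angle = math.atan2(motion[1], motion[0])
    dx, dy = pt[0] - gaze[0], pt[1] - gaze[1]
    # Rotate the offset into the ellipse's axis-aligned frame.
    u = dx * math.cos(angle) + dy * math.sin(angle)
    v = -dx * math.sin(angle) + dy * math.cos(angle)
    return (u / a_major) ** 2 + (v / b_minor) ** 2 <= 1.0
```

Shifting the ellipse center along `motion` would additionally bias the region toward the direction of movement, as the text suggests for FIG. 6.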
  • the calculation unit 250 may generate an image in which the data amount D per unit pixel number is changed according to the distance from the gazing point P outside the predetermined area A.
  • FIG. 7A is a schematic diagram when the relationship between the X coordinate of the video display area and the data amount D per unit pixel number is changed in a plurality of stages.
  • the lower graph in FIG. 7A shows the change in the data amount D per unit pixel number on the dotted line in the video display area shown above.
  • the calculation unit 250 sets the predetermined area A based on the gazing point P. In addition to the boundary of the predetermined area A, a boundary defining the first external area B1 is provided so as to surround A, and a boundary defining the second external area B2 is provided so as to surround B1. The outside of the boundary of the second external area B2 is defined as B3.
  • the video system 1 shown in FIG. 7A can provide the user 300 with a video in which the amount of data is reduced in accordance with human visual recognition as compared with the case where the external area B is not divided into a plurality of areas.
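The stepwise reduction of FIG. 7A can be pictured as a piecewise-constant function of distance from the gazing point; the boundary radii and data-amount levels below are illustrative assumptions:

```python
def data_amount_stepwise(dist, a, b1, b2, d_full=1.0, steps=(0.7, 0.4, 0.2)):
    # Data amount D per unit number of pixels, by region.
    if dist <= a:
        return d_full    # predetermined area A: full quality
    if dist <= b1:
        return steps[0]  # first external area B1
    if dist <= b2:
        return steps[1]  # second external area B2
    return steps[2]      # outermost area B3
```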
  • the calculation unit 250 may generate an image in which the data amount D per unit pixel number is continuously reduced outside the predetermined area A as the distance from the gazing point P increases.
  • FIG. 7B is a schematic diagram when the relationship between the X coordinate of the video display area and the data amount D per unit pixel number is continuously changed.
  • the calculation unit 250 varies the data amount D per unit number of pixels continuously with distance from the gazing point P, following the gradation shown in FIG. 7B. Thereby, the difference in image quality at the region boundary is reduced, and a smooth video is obtained.
  • the calculation unit 250 may generate the video so that the data amount D per unit number of pixels does not fall below a lower limit DL.
  • depending on the image processing method, when the data amount is reduced too far, spurious motion may appear in the video, particularly near object boundaries.
  • the human eye has reduced visual acuity in the peripheral visual field but is sensitive to motion. Therefore, the calculation unit 250 generates the video with reference to the lower limit DL so that such artifacts do not arise.
  • thereby, the video system 1 can provide the user 300 with video that causes little sense of incongruity in the peripheral visual field region.
  • the specific value of the lower limit DL may be determined by experiment in view of the image display system of the head mounted display 100, the image processing applied by the video generation device 200, and the like.
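The continuous variant of FIG. 7B, including the lower limit DL, might be sketched as a clamped linear falloff; the slope and the numeric values are assumptions for illustration:

```python
def data_amount_continuous(dist, a, d_full=1.0, falloff=0.02, d_lower=0.2):
    # D stays at d_full inside radius `a`, then decays linearly with
    # distance, never dropping below the lower limit DL (= d_lower).
    if dist <= a:
        return d_full
    return max(d_lower, d_full - falloff * (dist - a))
```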
  • FIG. 8 is a sequence diagram illustrating the flow of main processing for the head mounted display 100 and the video generation device 200 according to the embodiment.
  • the user 300 wears the head mounted display 100 and views the video presented by the video presentation unit 110.
  • the imaging unit 120 acquires an image including the eyes of the user 300 (S101), and the first communication unit 130 transmits the image to the video generation device 200 (S102).
  • the second communication unit 210 of the video generation device 200 receives an image including eyes from the head mounted display 100 (S201).
  • the gazing point acquisition unit 230 acquires the gazing point P of the user 300 based on the image (S202). Further, the communication determination unit 220 determines the communication environment based on the communication parameter (S203). Details of the communication determination will be described later.
  • the calculation unit 250 sets the data compression rate based on the result determined by the communication determination unit 220 (S204).
  • the calculation unit 250 acquires the video data of the video to be displayed to the user from the storage unit 260 (S205).
  • the calculation unit 250 acquires information on the gazing point P from the gazing point acquisition unit 230, and sets a predetermined area A based on the gazing point P (S206).
  • for the external area B, the calculation unit 250 generates video with a smaller data amount D per unit number of pixels than the video calculated for the predetermined area A (S207). When generating the video with the reduced data amount D, the calculation unit 250 determines the data amount D in the external area B with reference to the compression rate set based on the communication determination result. Next, the second communication unit 210 transmits the video generated by the calculation unit 250 to the head mounted display 100 (S208). The first communication unit 130 of the head mounted display 100 receives the generated video (S103), and the video presentation unit 110 presents it to the user 300 (S104).
  • FIG. 9 is a flowchart illustrating an example of processing relating to communication determination according to the embodiment.
  • the communication determination unit 220 acquires the latest data of communication parameters including at least one of radio wave intensity, communication speed, data loss rate, throughput, noise status, or physical distance from the router (S211).
  • the communication determination unit 220 calculates an average value of communication parameters based on the acquired latest data and past communication information for a predetermined period (S212).
  • the communication determination unit 220 determines the communication environment based on the calculated average value (S213).
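Steps S211 to S213 amount to smoothing a communication parameter over a window and thresholding the result; a minimal sketch, with the window size, the parameter (throughput), and the threshold as assumptions:

```python
from collections import deque

class CommMonitor:
    def __init__(self, window=8, good_threshold=30.0):
        self.samples = deque(maxlen=window)  # recent observations (S211)
        self.good_threshold = good_threshold

    def observe(self, value):
        self.samples.append(value)

    def average(self):
        # Moving average over the retained window (S212)
        return sum(self.samples) / len(self.samples)

    def environment_is_good(self):
        # Judge the environment from the averaged value (S213)
        return self.average() >= self.good_threshold
```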
  • the video system 1 repeats the processes described in FIGS. 8 and 9 while reproducing the video.
  • the communication determination may be performed based on the latest data of the communication parameters on the head mounted display 100 side and the video generation device 200 side.
  • as described above, the video generation device 200 reduces the image quality far from the gazing point P while maintaining the image quality near the gazing point P that the user is viewing. Since the amount of data transmitted to the head mounted display 100 is thereby reduced, video with little discomfort can be provided to the user. In addition, since the data amount during communication is reduced, even when the communication environment deteriorates, the influence of data transfer delays caused by the communication environment can be reduced. Therefore, the video system 1 of the present invention is suitable for devices used interactively with the user 300, such as applications and games on game machines, computers, and portable terminals.
  • the gaze point acquisition unit 230 is not limited to the case where it is mounted on the video generation device 200.
  • the gazing point acquisition unit 230 may be mounted on the head mounted display 100.
  • the head mounted display 100 may be provided with a control function, and a program function for realizing processing performed by the gazing point acquisition unit 230 may be provided by the control function of the head mounted display 100.
  • 1 video system 100 head-mounted display, 110 video presentation unit, 120 imaging unit, 130 first communication unit, 150 housing, 160 wearing tool, 170 headphones, 200 video generation device, 210 second communication unit, 220 communication determination unit 230 gaze point acquisition unit, 240 gaze point movement acquisition unit, 250 calculation unit, 260 storage unit.
  • the present invention can be used for a video system including a head mounted display and a video generation device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention concerns a video system comprising a head mounted display, used while worn on a user's head, and a video generation device that generates the video the head mounted display presents to the user. In the head mounted display, a video presentation unit presents video to the user, an imaging unit captures an image including the user's eyes, and a first communication unit transmits the captured image to the video generation device and receives video from it. In the video generation device, a second communication unit receives the captured image from the head mounted display and transmits video to it, a gazing point acquisition unit acquires the user's gazing point on the video based on the captured image, and a calculation unit sets, based on the acquired gazing point, a predetermined area that takes the gazing point as a reference and generates, for the outside of the predetermined area, video whose data amount per unit number of pixels is smaller than that of the video calculated for the predetermined area.
PCT/JP2015/076765 2015-09-18 2015-09-18 Système vidéo WO2017046956A1 (fr)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201580083269.3A CN108141559B (zh) 2015-09-18 2015-09-18 影像系统、影像生成方法及计算机可读介质
KR1020187008945A KR101971479B1 (ko) 2015-09-18 2015-09-18 영상 시스템
PCT/JP2015/076765 WO2017046956A1 (fr) 2015-09-18 2015-09-18 Système vidéo
US15/267,917 US9978183B2 (en) 2015-09-18 2016-09-16 Video system, video generating method, video distribution method, video generating program, and video distribution program
US15/963,476 US20180247458A1 (en) 2015-09-18 2018-04-26 Video system, video generating method, video distribution method, video generating program, and video distribution program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/076765 WO2017046956A1 (fr) 2015-09-18 2015-09-18 Système vidéo

Publications (1)

Publication Number Publication Date
WO2017046956A1 (fr) 2017-03-23

Family

ID=58288481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/076765 WO2017046956A1 (fr) 2015-09-18 2015-09-18 Système vidéo

Country Status (3)

Country Link
KR (1) KR101971479B1 (fr)
CN (1) CN108141559B (fr)
WO (1) WO2017046956A1 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020213088A1 (fr) * 2019-04-17 2020-10-22 楽天株式会社 Display control device, display control method, program, and non-transitory computer-readable information recording medium
WO2021066210A1 (fr) * 2019-09-30 2021-04-08 엘지전자 주식회사 Display device and display system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0713552A (ja) * 1993-06-14 1995-01-17 Atr Tsushin Syst Kenkyusho:Kk Image display device
JPH099253A (ja) * 1995-06-19 1997-01-10 Toshiba Corp Image compression communication device
JP2004056335A (ja) * 2002-07-18 2004-02-19 Sony Corp Information processing device and method, display device and method, and program
JP2008131321A (ja) * 2006-11-21 2008-06-05 Nippon Telegr & Teleph Corp <Ntt> Video transmission method, video transmission program, and computer-readable recording medium recording the program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5886735A (en) * 1997-01-14 1999-03-23 Bullister; Edward T Video telephone headset
US9344612B2 (en) * 2006-02-15 2016-05-17 Kenneth Ira Ritchey Non-interference field-of-view support apparatus for a panoramic facial sensor
US8611015B2 (en) * 2011-11-22 2013-12-17 Google Inc. User interface
CN204442580U (zh) * 2015-02-13 2015-07-01 北京维阿时代科技有限公司 一种头戴式虚拟现实设备及包括该设备的虚拟现实系统
CN104767992A (zh) * 2015-04-13 2015-07-08 北京集创北方科技有限公司 头戴式显示系统及影像低频宽传输方法


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022511838A (ja) * 2018-12-14 2022-02-01 Advanced Micro Devices, Inc. Slice size map control for foveated coding
JP7311600B2 (ja) 2018-12-14 2023-07-19 Advanced Micro Devices, Inc. Slice size map control for foveated coding

Also Published As

Publication number Publication date
CN108141559A (zh) 2018-06-08
KR20180037299A (ko) 2018-04-11
KR101971479B1 (ko) 2019-04-23
CN108141559B (zh) 2020-11-06

Similar Documents

Publication Publication Date Title
US9978183B2 (en) Video system, video generating method, video distribution method, video generating program, and video distribution program
JP2018141816A (ja) Video system, video generation method, video distribution method, video generation program, and video distribution program
CA2928248C (fr) Image display device and image display method, image output device and image output method, and image display system
US11196975B2 (en) Image generating device, image display system, and image generating method
WO2016157677A1 (fr) Information processing device, information processing method, and program
US10692300B2 (en) Information processing apparatus, information processing method, and image display system
KR102503029B1 (ko) 픽셀 재분할 최적화를 갖는 방법 및 디스플레이 디바이스
EP3341818B1 (fr) Method and apparatus for displaying content
US9191658B2 (en) Head-mounted display and position gap adjustment method
CN112106366B (zh) 动态中央凹聚焦管线
US20140126877A1 (en) Controlling Audio Visual Content Based on Biofeedback
WO2018211672A1 (fr) Image generation device, image display system, and image generation method
WO2015149554A1 (fr) Display control method and display control apparatus
WO2017208957A1 (fr) Image generation method, system, and device
WO2019030467A1 (fr) Head-mountable apparatus and methods
JP2010050645A (ja) Image processing device, image processing method, and image processing program
JP2015125502A (ja) Image processing device and image processing method, display device and display method, computer program, and image display system
WO2017046956A1 (fr) Video system
WO2019217260A1 (fr) Affichage fovéal dynamique
JP6591667B2 (ja) Image processing system, image processing device, and program
JP6500570B2 (ja) Image display device and image display method
WO2022004130A1 (fr) Information processing device, information processing method, and storage medium
JP6725999B2 (ja) Information processing device, control method for information processing device, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15904142

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20187008945

Country of ref document: KR

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 15904142

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP