CN108141559A - Image system - Google Patents

Image system

Info

Publication number
CN108141559A
CN108141559A (application CN201580083269.3A)
Authority
CN
China
Prior art keywords
image
gaze point
head
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201580083269.3A
Other languages
Chinese (zh)
Other versions
CN108141559B (en)
Inventor
洛克拉因·威尔逊
濑古圭一
小岛由香
金子大和
Current Assignee
Fove, Inc.
Original Assignee
Fove, Inc.
Priority date
Filing date
Publication date
Application filed by Fove, Inc.
Publication of CN108141559A
Application granted
Publication of CN108141559B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/366: Image reproducers using viewer tracking
    • H04N13/383: Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/66: Transforming electric information into light information
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013: Eye tracking input arrangements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167: Position within a video image, e.g. region of interest [ROI]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81: Monomedia components thereof
    • H04N21/816: Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The image system of the present invention includes a head-mounted display worn on the head of a user and a video generation device that generates the video the head-mounted display presents to the user. A video presentation unit of the head-mounted display presents video to the user. An imaging unit captures an image that includes the user's eyes. A first communication unit transmits the captured image to the video generation device and receives video from the video generation device. A second communication unit of the video generation device receives the captured image from the head-mounted display and transmits video to the head-mounted display. A gaze-point acquisition unit obtains the user's gaze point on the video based on the captured image. Based on the acquired gaze point, a calculation unit sets a predetermined region referenced to the gaze point and, outside the predetermined region, generates video whose data amount per unit pixel count is smaller than that of the video calculated for the predetermined region.

Description

Image system
Technical field
The present invention relates to an image system, and more particularly to an image system comprising a head-mounted display and a video generation device.
Background Art
A head-mounted display is worn on the user's head and displays video on a screen positioned close to the user's eyes. While wearing a head-mounted display, the user can see nothing other than the displayed video, and can therefore enjoy a sense of being immersed in a virtual space. As a technology related to the above, Patent Document 1 discloses a video generation device and video generation method that can detect the movement of a user and display, on a head-mounted display, video corresponding to that movement.
Existing technical literature
Patent document
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2013-258614
Summary of the Invention
Problems to Be Solved by the Invention
With the above technology, a head-mounted display can show on its screen video corresponding to the user's gaze direction. In most cases, however, the video shown by a head-mounted display is a moving image. Its data amount is therefore large, and when unmodified video is transmitted from the video generation device to the head-mounted display, video updates may be delayed and the video may be interrupted. Moreover, high-resolution displays have recently become common, and the video data to be handled is expected to grow further. Considering the transmission and reception of video data, the video generation device could be integrated with the head-mounted display; however, because a head-mounted display is worn on the user's head, it is expected to be small, and such integration into the housing is difficult. In practice, therefore, the video generation device and the head-mounted display are connected wirelessly, but because the data amount of the video is large, the video provided to the user may stall.
The present invention was made in view of this technical problem, and its object is to provide a technology, for an image system, that can suppress stalling of the communication between a head-mounted display and a video generation device.
Means for Solving the Problems
To solve the above problem, one embodiment of the present invention is an image system comprising a head-mounted display worn on the user's head and a video generation device that generates the video the head-mounted display presents to the user. In this image system, the head-mounted display has: a video presentation unit for presenting video to the user; an imaging unit for capturing an image that includes the user's eyes; and a first communication unit that transmits the image captured by the imaging unit to the video generation device and receives from the video generation device the video to be presented by the video presentation unit. The video generation device has: a second communication unit that receives from the head-mounted display the image captured by the imaging unit and transmits video to the head-mounted display; a gaze-point acquisition unit that obtains the user's gaze point on the video based on the image captured by the imaging unit; and a calculation unit that, based on the gaze point acquired by the gaze-point acquisition unit, sets a predetermined region referenced to the gaze point and, outside the predetermined region, generates video whose data amount per unit pixel count is smaller than that of the video calculated for the predetermined region.
The video generation device may further have a communication determination unit that judges the communication environment between the first communication unit and the second communication unit, and the calculation unit may reduce the data amount of the video when the communication environment is poor, compared with when it is good.
The communication determination unit may judge the communication environment based on information including the latest data of a communication parameter, the communication parameter including at least one of received signal strength, communication speed, data loss rate, throughput, noise conditions, and physical distance from the router.
The video generation device may further have a gaze-point movement acquisition unit that obtains the movement of the user's gaze point based on the gaze points acquired by the gaze-point acquisition unit, and the calculation unit may change at least one of the size and the shape of the predetermined region according to the movement of the gaze point.
The calculation unit may set the shape of the predetermined region to a shape having a major axis and a minor axis, or a shape having a long side and a short side, and may set the direction of the major axis or the long side of the predetermined region according to the moving direction of the gaze point.
Outside the predetermined region, the calculation unit may generate video in which the data amount per unit pixel count changes according to the distance from the gaze point.
Outside the predetermined region, the calculation unit may generate video in which the data amount per unit pixel count decreases continuously as the distance from the gaze point increases.
The calculation unit may also generate the video such that the data amount per unit pixel count does not fall below a lower limit.
In addition, arbitrary combinations of the above components, and expressions of the present invention converted between a method, a device, a system, a recording medium, a computer program, and the like, are also effective as embodiments of the present invention.
Effects of the Invention
According to the present invention, an image system including a head-mounted display can appropriately reduce the amount of communication data, and can thereby provide the user with video that does not stall and causes little sense of incongruity.
Brief Description of the Drawings
Fig. 1 is a diagram schematically showing an overview of the image system of the embodiment.
Fig. 2 is a block diagram showing an example of the functional configuration of the image system of the embodiment.
Fig. 3 is a diagram showing an example of the user's gaze point acquired by the gaze-point acquisition unit of the embodiment.
Figs. 4(a) and 4(b) are diagrams showing examples of the predetermined region set by the calculation unit.
Fig. 5 is a diagram schematically showing the relationship between the X-axis of the video display area and the data amount per unit pixel count.
Fig. 6 is a diagram showing an example of the movement of the gaze point acquired by the gaze-point movement acquisition unit of the embodiment.
Figs. 7(a) and 7(b) are schematic diagrams showing other examples of the relationship between the X-axis of the video display area and the data amount per unit pixel count.
Fig. 8 is a sequence diagram showing a processing example of the image system of the embodiment.
Fig. 9 is a flowchart showing an example of the communication determination processing of the embodiment.
Description of the Embodiments
An embodiment of the present invention will be briefly described. Fig. 1 is a diagram schematically showing an overview of the image system 1 of the embodiment. The image system 1 according to the embodiment includes a head-mounted display 100 and a video generation device 200. As shown in Fig. 1, the head-mounted display 100 is used while worn on the head of the user 300.
The video generation device 200 generates the video that the head-mounted display 100 presents to the user. Although not limited thereto, the video generation device 200 is, for example, a device capable of playing back video, such as a stationary game console, a portable game console, a personal computer (PC), a tablet computer, a smartphone, a phablet, a video player, or a television. The video generation device 200 is connected to the head-mounted display 100 wirelessly or by wire. In the example shown in Fig. 1, the video generation device 200 is wirelessly connected to the head-mounted display 100. The wireless connection between the video generation device 200 and the head-mounted display 100 can be realized by a known wireless communication technology such as Wi-Fi (registered trademark) or Bluetooth (registered trademark). Although not limited thereto, video transmission between the head-mounted display 100 and the video generation device 200 is performed, for example, according to a standard such as Miracast (trademark), WiGig (trademark), or Wireless Home Digital Interface (WHDI, trademark).
The head-mounted display 100 includes a housing 150, a wearing part 160, and headphones 170. The housing 150 accommodates an image display system for presenting video to the user 300, such as an image display element (not shown), and a wireless transmission module such as a Wi-Fi module or a Bluetooth (registered trademark) module. The wearing part 160 is used to wear the head-mounted display 100 on the head of the user 300, and is realized by, for example, a belt or an elastic band. When the user 300 wears the head-mounted display 100 using the wearing part 160, the housing 150 is positioned so as to cover the eyes of the user 300. Therefore, while the user 300 is wearing the head-mounted display 100, the field of view of the user 300 is blocked by the housing 150.
Headphone 170 for image output generating means 200 regenerated image sound.Headphone 170 It can be not secured to head-mounted display 100.Even if wear part 160 using dress in user 300 has worn head-mounted display 100 to fill In the state of, it also can freely load and unload headphone 170.
Fig. 2 is a block diagram showing an example of the functional configuration of the image system 1 of the embodiment. The head-mounted display 100 includes a video presentation unit 110, an imaging unit 120, and a first communication unit 130.
The video presentation unit 110 presents video to the user 300, and is realized by, for example, a liquid crystal display or an organic electroluminescence (EL) display. The imaging unit 120 captures an image that includes the user's eyes, and is realized by an image sensor, such as a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) sensor, accommodated in the housing 150. The first communication unit 130 is connected to the video generation device 200 wirelessly or by wire, and performs information transmission between the head-mounted display 100 and the video generation device 200. Specifically, the first communication unit 130 transmits the image captured by the imaging unit 120 to the video generation device 200, and receives from the video generation device 200 the video to be presented by the video presentation unit 110. The first communication unit 130 can be realized by a wireless transmission module such as a Wi-Fi module or a Bluetooth module.
Next, the video generation device 200 of Fig. 2 will be described. The video generation device 200 includes a second communication unit 210, a communication determination unit 220, a gaze-point acquisition unit 230, a gaze-point movement acquisition unit 240, a calculation unit 250, and a storage unit 260. The second communication unit 210 is connected to the head-mounted display 100 wirelessly or by wire. The second communication unit 210 receives from the head-mounted display 100 the image captured by the imaging unit 120, and transmits video to the head-mounted display 100. In this specification, "video" refers to the video generated by the calculation unit 250 described below. The gaze-point acquisition unit 230 obtains the user's gaze point P on the video based on the image captured by the imaging unit 120. The position of the gaze point P is obtained by, for example, a known gaze detection technique. For example, the gaze-point acquisition unit 230 obtains in advance, as calibration information, the relationship between display positions on the screen and a reference point and a moving point of the user's eyes. During video playback, the imaging unit 120 captures an image of the eyes of the user 300 in the same manner as during calibration, and the gaze-point acquisition unit 230 obtains the position information of the reference point and the moving point from that image. Based on the acquired position information and the calibration information obtained in advance, the gaze-point acquisition unit 230 estimates the user's gaze point P on the video. Here, the "reference point" is a point that moves little relative to the head-mounted display, such as the corner of the eye, and the "moving point" is a part, such as the iris or the pupil, that moves according to where the user 300 is looking. Hereinafter, "gaze point P" refers to the user's gaze point estimated by the gaze-point acquisition unit 230.
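The calibration step above can be sketched in code. This is a minimal illustration, not the patent's actual method: it assumes a per-axis linear map from the offset between the moving point (pupil) and the reference point (eye corner) to screen coordinates, fitted by least squares from a handful of calibration samples.

```python
# Hypothetical sketch of gaze estimation from reference-point/moving-point
# offsets, assuming a per-axis linear calibration model.

def fit_axis(offsets, screens):
    """Least-squares fit of screen = a * offset + b for one axis."""
    n = len(offsets)
    mean_o = sum(offsets) / n
    mean_s = sum(screens) / n
    cov = sum((o - mean_o) * (s - mean_s) for o, s in zip(offsets, screens))
    var = sum((o - mean_o) ** 2 for o in offsets)
    a = cov / var
    return a, mean_s - a * mean_o

def calibrate(samples):
    """samples: list of ((dx, dy) pupil-minus-corner offset, (sx, sy) screen point)."""
    ax, bx = fit_axis([o[0] for o, _ in samples], [s[0] for _, s in samples])
    ay, by = fit_axis([o[1] for o, _ in samples], [s[1] for _, s in samples])
    return ax, bx, ay, by

def estimate_gaze(model, offset):
    """Map a measured eye offset to an estimated gaze point P on the screen."""
    ax, bx, ay, by = model
    return (ax * offset[0] + bx, ay * offset[1] + by)
```

A real implementation would also compensate for head-mounted-display optics and use more than a linear model; the sketch only shows the calibrate-then-estimate flow the text describes.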
Fig. 3 is a diagram showing an example of the gaze point P of the user 300 acquired by the gaze-point acquisition unit 230 of the embodiment. The video presentation unit 110 displays the three-dimensional objects in the video using the display pixels of the video display area, which form a two-dimensional orthogonal coordinate system. In Fig. 3, the horizontal and vertical directions of the video display area of the head-mounted display 100 are taken as the X-axis and the Y-axis, respectively, and the coordinates of the gaze point P are expressed as (x, y). As shown in Fig. 3, the position of the gaze point P can be expressed on the video the user is viewing.
Back to the explanation of Fig. 2.Based on the fixation point P acquired in fixation point acquisition unit 230, calculating part 250 is set with solidifying Predetermined region A on the basis of viewpoint P.Also, for the perimeter B except predetermined region A, calculating part 250 is generated with being directed to The image that predetermined region A the is calculated image less compared to the data volume D of per unit pixel number.Hereinafter it is described in detail, But so-called " the data volume D of per unit pixel number " is that one kind is used to compare calculating part 250 in predetermined region A and perimeter B How the index of image that different processing by video generation device 200 generated and sent to head-mounted display 100, example are carried out Such as, the data volume D per pixel is represented.
Next, the processing performed by the calculation unit 250 will be described using Figs. 4 and 5. Figs. 4(a) and 4(b) are diagrams showing examples of the predetermined region A set by the calculation unit 250. Using Fig. 4(a), the case in which the calculation unit 250 sets the region within a distance a of the gaze point P as the predetermined region A will be described. The predetermined region A can be any closed area; Fig. 4(a) shows an example in which it is set as a circle, and Fig. 4(b) an example in which it is set as a rectangle. If the predetermined region A has a simple shape in this way, the calculation unit 250 can reduce the computation needed to reset the predetermined region A as the gaze point P moves.
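The two region shapes above reduce to simple membership tests per pixel. A minimal sketch (function names and parameters are illustrative, not from the patent):

```python
import math

def in_circle_region(px, py, gx, gy, a):
    """True if pixel (px, py) lies within distance a of the gaze point (gx, gy),
    i.e. inside the circular predetermined region A of Fig. 4(a)."""
    return math.hypot(px - gx, py - gy) <= a

def in_rect_region(px, py, gx, gy, half_w, half_h):
    """True if the pixel lies inside an axis-aligned rectangular region A
    centered on the gaze point, as in Fig. 4(b)."""
    return abs(px - gx) <= half_w and abs(py - gy) <= half_h
```

Both tests are constant-time per pixel, which is the point the text makes: simple shapes keep the per-frame cost of re-deriving region A low as the gaze point moves.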
In general, the visual acuity of the human eye is high in the central visual field, which includes the fovea, and falls off sharply away from the fovea. It is well known that the range the human eye can see in detail is at most within about 5 degrees of the center of the fovea. The calculation unit 250 may therefore roughly calculate the distance between the display pixels of the head-mounted display 100 and the fovea of the eyes of the user 300, and then set as the predetermined region A the range on the video display area corresponding to the region within 5 degrees of the fovea, referenced to the gaze point P of the user 300. The specific size of the predetermined region A as seen by the user 300 can be determined experimentally, taking into account the optical system used with the display of the head-mounted display 100 and the human visual characteristics described above (for example, central vision, age, and field-of-view angle).
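Under a simplifying assumption (flat display, optics ignored — a real head-mounted display's lenses would change this), the 5-degree foveal region maps to a pixel radius as follows; the function and its parameters are illustrative:

```python
import math

def foveal_radius_px(viewing_distance_mm, pixels_per_mm, half_angle_deg=5.0):
    """Radius in pixels of the region within half_angle_deg of the fovea,
    assuming a flat display at viewing_distance_mm from the eye (no optics)."""
    radius_mm = viewing_distance_mm * math.tan(math.radians(half_angle_deg))
    return radius_mm * pixels_per_mm
```

For example, at an assumed 50 mm eye-to-panel distance and 10 pixels/mm, the 5-degree region is roughly a 44-pixel radius; as the text notes, the value actually used would be tuned experimentally.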
Fig. 5 illustrates a graph representing the relationship between the X-axis of the video display area and the data amount D per unit pixel count. The horizontal axis of the graph corresponds to the X-axis of the video display area, and the vertical axis represents the data amount D per unit pixel count on a line parallel to the X-axis that passes through the gaze point P. Fig. 5 is an example in which the calculation unit 250 sets the range within distance a of the gaze point P as the predetermined region A. First, the calculation unit 250 extracts, from the image data stored in the storage unit 260, the base image data of the video to be presented to the user. The calculation unit 250 may also obtain image data from outside the video generation device 200. For this image data, the calculation unit 250 performs a calculation that reduces the data amount D per unit pixel count at X-axis positions smaller than (x - a) or larger than (x + a). As the method of reducing the data amount, a known method can be used, for example compression that deletes the high-frequency components of the image. Video whose overall data amount at transmission time is smaller can thereby be obtained.
The example of the method for the radio-frequency component for deleting image is illustrated.Specifically, calculating part 250 is in predetermined region The sample rate that the inner and outer change of A uses during producing two dimensional image from the image data of threedimensional model.For The outside of predetermined region A, compared with inside predetermined region A, calculating part 250 reduces sample rate.Also, for unsampled area Domain, calculating part 250 generate image by interpolation processing.Interpolation processing is for example, well known bilinearity or spline interpolation.By This, compared with the situation of region-wide formation image of the high sampling rate to make image, image obscures.As a result, due to image Radio-frequency component is deleted, thus data volume during compression becomes smaller.And then sample rate during image formation declines, therefore can be at a high speed Form image.
Back to the explanation of Fig. 2.Determination unit 220 communicate for judging between the first communication unit 130 and the second communication unit 210 Communication environment.In the case where communication environment is bad, compared with the good situation of communication environment, calculating part 250 can also make The data volume for stating image becomes smaller.
According to the result of the communication-environment judgment, the calculation unit 250 may reduce the data amount D per unit pixel count in the outer region B. For example, the communication environment is divided into three stages C1, C2, C3 from best to worst, the data compression rates to be used at the respective stages are set to E1, E2, E3, and these values are stored in the storage unit 260. The communication determination unit 220 judges which of the stages C1 to C3 the communication environment corresponds to. The calculation unit 250 obtains from the storage unit 260 the compression-rate value corresponding to the judgment result, compresses the image data of the outer region B at that rate, and generates the video.
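The stage-to-rate lookup can be sketched directly. The stages C1..C3 and rates E1..E3 come from the text; the numeric thresholds and rate values below are purely illustrative assumptions:

```python
# Assumed illustrative values for the stage table stored in storage unit 260.
STAGE_RATES = {"C1": 0.9, "C2": 0.6, "C3": 0.3}  # E1..E3: fraction of data kept

def judge_stage(throughput_mbps):
    """Toy communication judgment: pick a stage from measured throughput
    (thresholds assumed; a real unit would weigh several parameters)."""
    if throughput_mbps >= 50:
        return "C1"
    if throughput_mbps >= 20:
        return "C2"
    return "C3"

def outer_region_bytes(raw_bytes, throughput_mbps):
    """Data amount of outer region B after applying the judged stage's rate."""
    return int(raw_bytes * STAGE_RATES[judge_stage(throughput_mbps)])
```

A table lookup like this keeps the per-frame adaptation cheap: the determination unit only has to classify the environment, and the calculation unit reads one stored rate.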
As a result, the data amount of the video transmitted from the video generation device 200 to the head-mounted display 100 is adjusted according to the communication environment, so stalling of the video due to transmission delays and the like can be avoided. Moreover, because the image quality near the gaze point P of the user 300 is unchanged, reducing the data amount does not cause a sense of incongruity for the user 300. Video that reflects the gaze point P of the user 300 captured by the imaging unit 120 can thus be provided to the user without delay.
The communication determination unit 220 may judge the communication environment based on information including the latest data of a communication parameter, the communication parameter including at least one of received signal strength, communication speed, data loss rate, throughput, noise conditions, and physical distance from the router.
The communication determination unit 220 may monitor the communication parameter and judge the quality of the communication environment based on it. The communication determination unit 220 sends a message inquiring about the communication conditions to the head-mounted display 100. The first communication unit 130 receives the message, obtains the communication parameter on the head-mounted display 100 side, and transmits the acquired parameter to the video generation device 200. The second communication unit 210 additionally obtains the communication parameter on the video generation device 200 side. The communication determination unit 220 can thus judge the quality of the communication environment based on the communication parameter received from the head-mounted display 100 and the communication parameter obtained by the second communication unit 210. Here, the "information including the latest data" may be, for example, a value the communication determination unit 220 computes as a rolling average of a certain number of past observations. Furthermore, as in the configuration above, when a data compression rate set in association with the communication environment is used, the calculation unit 250 can generate video whose data amount suits the communication environment at that time. Therefore, even in a place where the communication environment is poor or changes easily, the frame rate of the video presented to the user can be maintained, and video that does not look incongruous to the user can be provided.
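The rolling average mentioned above is a standard fixed-window smoother; a minimal sketch (class name and window size are assumptions):

```python
from collections import deque

class RollingAverage:
    """Average over the last `window` observations of a communication
    parameter (e.g. throughput), smoothing out momentary dips so the stage
    judgment does not flap."""

    def __init__(self, window):
        self.values = deque(maxlen=window)  # old samples fall off automatically

    def observe(self, value):
        self.values.append(value)
        return sum(self.values) / len(self.values)
```

Smoothing before judging the stage trades responsiveness for stability: a single dropped beacon will not push the system into a lower-quality stage.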
The gaze-point movement acquisition unit 240 may obtain the movement of the gaze point P of the user 300 based on the gaze points P acquired by the gaze-point acquisition unit 230. According to the movement of the gaze point P acquired by the gaze-point movement acquisition unit 240, the calculation unit 250 changes at least one of the size and the shape of the predetermined region A.
Fig. 6 is a diagram showing an example of the movement of the gaze point P acquired by the gaze-point movement acquisition unit 240 of the embodiment. Fig. 6 shows a state in which the user's gaze point P has moved from P1 to P2. The calculation unit 250 sets the predetermined region A with reference to both the gaze point P acquired by the gaze-point acquisition unit 230 and the movement of the gaze point P acquired by the gaze-point movement acquisition unit 240. In the example shown in Fig. 6, the gaze point P is at position P2, and the direction of its movement is indicated by an arrow. The predetermined region A need not be centered on the gaze point P. For example, as shown in Fig. 6, the calculation unit 250 can set the boundary of the predetermined region A so that it is not equidistant from the gaze point P2, extending the region in the moving direction of the gaze point P while keeping P2 inside it. The head-mounted display 100 can thereby maintain image quality over a wide range in the direction toward which the gaze of the user 300 is heading. As described above and shown in Figs. 4(a) and 4(b), the predetermined region A can be a circle or a rectangle.
Further, the calculation unit 250 may set the shape of the predetermined region A to a shape having a major axis and a minor axis, or a long side and a short side, and may set the direction of the major axis or the long side of the predetermined region according to the moving direction of the gaze point P.
In Fig. 6, the calculation unit 250 sets the shape of the predetermined region A to an ellipse, based on the movement of the gaze point P acquired by the gaze-point movement acquisition unit 240. For example, when arranging the predetermined region A with reference to the gaze point P, the calculation unit 250 can align the major axis of the ellipse with the moving direction of the gaze point P. Here, the gaze point P need not be the center of the ellipse; the positional relationship between the gaze point P and the ellipse is set so that the ellipse extends further in the direction of advance of the gaze point P while still containing it. The video presentation unit 110 can thereby display video that maintains image quality over a wider range in the moving direction of the gaze point P than in other directions. The shape of the predetermined region A set by the calculation unit 250 is not limited to an ellipse, as long as it has a major axis and a minor axis or a long side and a short side. For example, when the calculation unit 250 sets the shape of the predetermined region A to a rectangle and a compression method that compresses in units of blocks of multiple pixels is used, the handling of the blocks overlapping the boundary of the predetermined region A can be simplified compared with the case where the predetermined region A is an ellipse.
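One way to realize an elliptical region whose major axis follows the gaze movement and whose center leads the gaze point is sketched below. The parametrisation (axis lengths, the `lead` fraction) is an assumption for illustration, not taken from the patent:

```python
import math

def in_moving_ellipse(px, py, gaze, motion, major=120.0, minor=60.0, lead=0.4):
    """True if pixel (px, py) lies in region A. `motion` is the (dx, dy) gaze
    movement; the ellipse centre leads the gaze point by lead * major along it,
    so the region extends further in the direction of advance."""
    gx, gy = gaze
    dx, dy = motion
    norm = math.hypot(dx, dy)
    if norm == 0:  # no movement: fall back to a circle around the gaze point
        return math.hypot(px - gx, py - gy) <= minor
    ux, uy = dx / norm, dy / norm            # unit vector of the moving direction
    cx, cy = gx + lead * major * ux, gy + lead * major * uy
    # pixel coordinates in the ellipse's rotated frame
    rx = (px - cx) * ux + (py - cy) * uy     # along the major axis
    ry = -(px - cx) * uy + (py - cy) * ux    # along the minor axis
    return (rx / major) ** 2 + (ry / minor) ** 2 <= 1.0
```

With a rightward motion, a pixel well ahead of the gaze point is inside the region while the mirror-image pixel behind it is outside, matching the asymmetry Fig. 6 describes.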
Outside the predetermined region A, the calculating part 250 can also generate an image in which the data amount D per unit pixel count changes according to the distance from the fixation point P.
Part (a) of Fig. 7 is a schematic diagram of the case in which the relationship between the X-axis of the image display area and the data amount D per unit pixel count changes in multiple stages. The graph in the lower part of part (a) of Fig. 7 shows the variation of the data amount D per unit pixel count along the chain-dotted line in the image region shown above it. In the example of part (a) of Fig. 7, the calculating part 250 sets the predetermined region A on the basis of the fixation point P. Furthermore, in addition to the boundary of the predetermined region A, a boundary is set to define a first outer region B1 surrounding A, and a boundary is set to define a second outer region B2 surrounding B1. The outside of the boundary of the second outer region B2 is defined as B3. By dividing the outer region B into multiple regions in this way, the difference in image quality generated at the boundary between the predetermined region A and the outer region B can be made smaller than in the undivided case. As a result, compared with the case in which the outer region B is not divided into multiple regions, the image system 1 shown in part (a) of Fig. 7 can make the influence of the data amount reduction perceived by the user 300 smaller, better matching human visual recognition.
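The multi-stage division into the regions A, B1, B2 and B3 described above amounts to a simple lookup from distance to data amount D. The following sketch shows one way to express it; the radii and D values are illustrative placeholders, not figures from the disclosure.

```python
def quality_level(dist, r_a=100.0, r_b1=200.0, r_b2=300.0):
    """Map the distance from the fixation point P to a relative data
    amount D per unit pixel count, stepped over the regions A, B1, B2
    and B3 as in part (a) of Fig. 7."""
    if dist <= r_a:       # predetermined region A: full quality
        return 1.00
    elif dist <= r_b1:    # first outer region B1
        return 0.75
    elif dist <= r_b2:    # second outer region B2
        return 0.50
    else:                 # outermost region B3
        return 0.25
```

Each additional band halves neither the gradient nor the quality abruptly; the intermediate steps B1 and B2 are what soften the boundary between A and the outer region.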
Outside the predetermined region A, the calculating part 250 can also generate an image in which the data amount D per unit pixel count decreases continuously as the distance from the fixation point P becomes larger.
Part (b) of Fig. 7 is a schematic diagram of the case in which the relationship between the X-axis of the image display area and the data amount D per unit pixel count changes continuously. The calculating part 250 sets the relationship between the vertical axis and horizontal axis of part (b) of Fig. 7 as a continuously changing gradient. As a result, the difference in image quality at the region boundaries where the data amount D per unit pixel count changes becomes smaller, and a smooth image can be obtained.
The calculating part 250 can also generate the image in such a manner that the data amount D per unit pixel count does not fall below a lower limit value DL.
On the vertical axes of parts (a) and (b) of Fig. 7, the lower limit value DL of the data amount D per unit pixel count is shown. In general, when processing to reduce the data amount of a moving image, depending on the image-processing method, characteristic motion artifacts may be generated, especially near object boundaries in the image. It is also well known that, although the acuity of the human eye is impaired in the peripheral visual field, it is conversely more sensitive to motion there. Therefore, to avoid generating such artifacts, the calculating part 250 generates the image with reference to the lower limit value DL. As a result, the image system 1 can provide the user 300 with an image in which the sense of discomfort in the peripheral visual field is suppressed. The specific value of the lower limit DL can be determined by experiment, in consideration of the display characteristics of the head-mounted display 100 and the image processing of the video generation device 200, and the like.
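The continuous decrease of part (b) of Fig. 7, clamped at the lower limit value DL, can be sketched as follows. The linear falloff model and all numeric values are illustrative assumptions; the disclosure leaves the specific curve and DL to experiment.

```python
def data_amount(dist, r_a=100.0, d_max=1.0, d_min=0.3, falloff=0.002):
    """Continuously decrease the data amount D per unit pixel count
    with the distance from the fixation point P, as in part (b) of
    Fig. 7, never going below the lower limit DL (d_min here)."""
    if dist <= r_a:
        return d_max                     # full quality inside region A
    # Linear falloff outside region A, clamped at the lower limit DL.
    return max(d_min, d_max - falloff * (dist - r_a))
```

The `max(...)` clamp is what realizes the DL floor: however far a pixel is from the fixation point, its data amount never drops below `d_min`, which suppresses the peripheral motion artifacts described above.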
Hereinafter, a usage example of the present embodiment is described with reference to Fig. 8 and Fig. 9. Fig. 8 is a sequence diagram illustrating the main processing flow of the head-mounted display 100 and the video generation device 200 of the embodiment. First, the user 300 wears the head-mounted display 100 and views the image demonstrated by the image demonstration portion 110. The shoot part 120 captures an image including the eyes of the user 300 (step S101), and the first communication unit 130 sends the image to the video generation device 200 (step S102).
The second communication unit 210 of the video generation device 200 receives the image including the eyes from the head-mounted display 100 (step S201). The fixation point acquisition unit 230 acquires the fixation point P of the user 300 based on the image (step S202). Also, the communication determination unit 220 judges the communication environment based on communication parameters (step S203); the details of the communication judgment are addressed below. Next, the calculating part 250 sets the compression ratio of the data based on the result judged by the communication determination unit 220 (step S204). The calculating part 250 acquires from the storage part 260 the image data of the image to be shown to the user (step S205). Then, the calculating part 250 acquires the information of the fixation point P from the fixation point acquisition unit 230, and sets the predetermined region A on the basis of the fixation point P (step S206). For the outer region B, the calculating part 250 generates an image whose data amount D per unit pixel count is smaller than that of the image calculated for the predetermined region A (step S207). When generating the image with the smaller data amount D, the calculating part 250 refers to the compression ratio set based on the communication result to determine the data amount D in the outer region B. Then, the second communication unit 210 sends the image generated by the calculating part 250 to the head-mounted display 100 (step S208). The first communication unit 130 of the head-mounted display 100 receives the generated image (step S103), and the image demonstration portion 110 demonstrates the image to the user 300 (step S104).
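The server-side steps S201 to S208 can be sketched as one processing round. Every callable below is an illustrative stand-in for the corresponding unit described in the text, not an actual API of the system.

```python
def server_round(receive, get_gaze, judge_comm, load_frame,
                 set_region, compress, send):
    """One round of the video generation device's processing,
    following steps S201-S208 of Fig. 8."""
    eye_image = receive()                    # S201: receive eye image
    gaze = get_gaze(eye_image)               # S202: acquire fixation point P
    ratio = judge_comm()                     # S203/S204: judge environment,
                                             #            set compression ratio
    frame = load_frame()                     # S205: load the frame to show
    region_a = set_region(gaze)              # S206: set region A around P
    out = compress(frame, region_a, ratio)   # S207: reduce data outside A
    send(out)                                # S208: send to the display
    return out
```

The head-mounted display side (steps S101-S104 and S103-S104) would form the matching client loop: capture, send, receive, demonstrate.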
Fig. 9 is a flow chart showing an example of the processing related to the communication judgment of the embodiment. The communication determination unit 220 acquires the latest data of at least one communication parameter including, for example, received signal strength, communication speed, data loss rate, throughput, noise condition, or physical distance from the router (step S211). Then, the communication determination unit 220 calculates the average value of the communication parameter based on the acquired latest data and the past communication information within a specified period (step S212). Then, the communication determination unit 220 judges the communication environment based on the calculated average value (step S213). While the image is being reproduced, the image system 1 repeats the processing described in Fig. 8 and Fig. 9. In addition, in step S213, as described above, the communication judgment can also be made based on the latest data of the communication parameters on both the head-mounted display 100 side and the video generation device 200 side.
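Steps S211 to S213 amount to averaging recent measurements of a communication parameter and comparing the result against a quality criterion. The following sketch assumes a fixed sample window and a single good/poor threshold, both of which are illustrative; the disclosure does not fix the averaging period or the judgment rule.

```python
from collections import deque

class CommJudge:
    """Sketch of the communication determination unit 220: keep a
    window of recent measurements of one communication parameter
    (e.g. throughput) and judge the environment from their average."""
    def __init__(self, window=5, good_threshold=10.0):
        self.samples = deque(maxlen=window)  # past data within the period
        self.good_threshold = good_threshold

    def add_sample(self, value):             # S211: acquire latest data
        self.samples.append(value)

    def average(self):                       # S212: average over the period
        return sum(self.samples) / len(self.samples)

    def is_good(self):                       # S213: judge the environment
        return self.average() >= self.good_threshold
```

The bounded `deque` realizes the "past communication information in a specified period": each new sample evicts the oldest one, so the judgment tracks recent conditions rather than the whole history.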
As described above, according to the embodiment, while the image quality near the fixation point P that the user is looking at is maintained, the image quality at positions far from the fixation point P is reduced, which reduces the data amount transmitted from the video generation device 200 to the head-mounted display 100; therefore, an image with little sense of discomfort can be provided to the user. Also, since the data amount during communication is small, the influence of the resulting data transfer delay and the like can be reduced even when the communication environment deteriorates. Therefore, the image system 1 of the present invention is well suited to devices that communicate interactively with the user 300, such as application programs or games used on game machines, computers, and portable terminals.
The present invention has been described above based on the embodiment. It should be appreciated by those skilled in the art that the embodiment is illustrative, that various modifications can be made to the combinations of these constituent elements and processing procedures, and that such variations are also within the scope of the present invention.
Also, the case in which the fixation point acquisition unit 230 is mounted on the video generation device 200 has been described above. However, the fixation point acquisition unit 230 is not limited to being mounted on the video generation device 200. For example, the fixation point acquisition unit 230 can also be mounted on the head-mounted display 100. In this case, the head-mounted display 100 is given a control function, and the control function of the head-mounted display 100 can be given the function of a program that implements the processing performed in the fixation point acquisition unit 230. As a result, since the step of sending the image including the eyes of the user 300 from the head-mounted display 100 to the video generation device 200 can be omitted, the image system 1 can save communication bandwidth and contribute to faster processing.
Reference sign
1:Image system
100:Head-mounted display
110:Image demonstration portion
120:Shoot part
130:First communication unit
150:Framework
160:Wearing part
170:Headphone
200:Video generation device
210:Second communication unit
220:Communicate determination unit
230:Fixation point acquisition unit
240:Fixation point moves acquisition unit
250:Calculating part
260:Storage part
Industrial applicability
The present invention can be used in an image system including a head-mounted display and a video generation device.

Claims (8)

1. An image system, characterized by
including:
a head-mounted display, used while worn on the head of a user; and
a video generation device for generating the image demonstrated to the user by the above-mentioned head-mounted display,
the above-mentioned head-mounted display including:
an image demonstration portion for demonstrating the above-mentioned image to the above-mentioned user;
a shoot part for shooting an image including the eyes of the above-mentioned user; and
a first communication unit which sends the above-mentioned image captured by the above-mentioned shoot part to the above-mentioned video generation device, and receives from the above-mentioned video generation device the above-mentioned image demonstrated by the above-mentioned image demonstration portion,
the above-mentioned video generation device including:
a second communication unit which receives the above-mentioned image captured by the above-mentioned shoot part from the above-mentioned head-mounted display, and sends the above-mentioned image to the above-mentioned head-mounted display;
a fixation point acquisition unit which acquires the fixation point of the above-mentioned user on the above-mentioned image based on the above-mentioned image captured by the above-mentioned shoot part; and
a calculating part which sets a predetermined region on the basis of the above-mentioned fixation point acquired by the above-mentioned fixation point acquisition unit and, outside the above-mentioned predetermined region, generates an image whose data amount per unit pixel count is smaller than that of the image calculated for the above-mentioned predetermined region.
2. The image system according to claim 1, characterized in that the above-mentioned video generation device further has a communication determination unit for judging the communication environment between the above-mentioned first communication unit and the above-mentioned second communication unit,
and the above-mentioned calculating part reduces the data amount of the above-mentioned image when the above-mentioned communication environment is poor, compared with when it is good.
3. The image system according to claim 2, characterized in that the above-mentioned communication determination unit judges the communication environment based on information including the latest data of a communication parameter, the above-mentioned communication parameter including at least one of received signal strength, communication speed, data loss rate, throughput, noise condition, or physical distance from a router.
4. The image system according to any one of claims 1 to 3, characterized in that
the above-mentioned video generation device further has a fixation point movement acquisition unit which acquires the movement of the user's fixation point based on the above-mentioned fixation point acquired by the above-mentioned fixation point acquisition unit,
and the above-mentioned calculating part changes at least one of the size or shape of the above-mentioned predetermined region according to the movement of the above-mentioned fixation point.
5. The image system according to claim 4, characterized in that
the above-mentioned calculating part sets the shape of the above-mentioned predetermined region to a shape having a long axis and a short axis, or to a shape having a long side and a short side, and sets the direction of the long axis or long side of the above-mentioned predetermined region according to the moving direction of the above-mentioned fixation point.
6. The image system according to any one of claims 1 to 5, characterized in that,
outside the above-mentioned predetermined region, the above-mentioned calculating part generates an image in which the data amount per above-mentioned unit pixel count changes according to the distance from the above-mentioned fixation point.
7. The image system according to any one of claims 1 to 6, characterized in that,
outside the above-mentioned predetermined region, the above-mentioned calculating part generates an image in which the data amount per above-mentioned unit pixel count decreases continuously as the distance from the above-mentioned fixation point becomes larger.
8. The image system according to any one of claims 1 to 7, characterized in that
the above-mentioned calculating part generates the above-mentioned image in such a manner that the data amount per above-mentioned unit pixel count is not less than a lower limit value.
CN201580083269.3A 2015-09-18 2015-09-18 Image system, image generation method and computer readable medium Active CN108141559B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/076765 WO2017046956A1 (en) 2015-09-18 2015-09-18 Video system

Publications (2)

Publication Number Publication Date
CN108141559A true CN108141559A (en) 2018-06-08
CN108141559B CN108141559B (en) 2020-11-06

Family

ID=58288481

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580083269.3A Active CN108141559B (en) 2015-09-18 2015-09-18 Image system, image generation method and computer readable medium

Country Status (3)

Country Link
KR (1) KR101971479B1 (en)
CN (1) CN108141559B (en)
WO (1) WO2017046956A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200195944A1 (en) * 2018-12-14 2020-06-18 Advanced Micro Devices, Inc. Slice size map control of foveated coding
US11756259B2 (en) * 2019-04-17 2023-09-12 Rakuten Group, Inc. Display controlling device, display controlling method, program, and non-transitory computer-readable information recording medium
WO2021066210A1 (en) * 2019-09-30 2021-04-08 엘지전자 주식회사 Display device and display system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6211903B1 (en) * 1997-01-14 2001-04-03 Cambridge Technology Development, Inc. Video telephone headset
JP2004056335A (en) * 2002-07-18 2004-02-19 Sony Corp Information processing apparatus and method, display apparatus and method, and program
CN104067160A (en) * 2011-11-22 2014-09-24 谷歌公司 Method of using eye-tracking to center image content in a display
CN204442580U (en) * 2015-02-13 2015-07-01 北京维阿时代科技有限公司 A kind of wear-type virtual reality device and comprise the virtual reality system of this equipment
CN104767992A (en) * 2015-04-13 2015-07-08 北京集创北方科技有限公司 Head-wearing type display system and image low-bandwidth transmission method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2868389B2 (en) * 1993-06-14 1999-03-10 株式会社エイ・ティ・アール通信システム研究所 Image display device
JP3263278B2 (en) * 1995-06-19 2002-03-04 株式会社東芝 Image compression communication device
US9344612B2 (en) * 2006-02-15 2016-05-17 Kenneth Ira Ritchey Non-interference field-of-view support apparatus for a panoramic facial sensor
JP2008131321A (en) * 2006-11-21 2008-06-05 Nippon Telegr & Teleph Corp <Ntt> Video transmission method, video transmission program and computer readable recording medium with the program recorded thereon


Also Published As

Publication number Publication date
KR101971479B1 (en) 2019-04-23
CN108141559B (en) 2020-11-06
KR20180037299A (en) 2018-04-11
WO2017046956A1 (en) 2017-03-23

Similar Documents

Publication Publication Date Title
US9978183B2 (en) Video system, video generating method, video distribution method, video generating program, and video distribution program
US11455032B2 (en) Immersive displays
CN104584531B (en) Image processing apparatus and image display device
US10976808B2 (en) Body position sensitive virtual reality
CN106462937B (en) Image processing apparatus and image display apparatus
EP3057089A1 (en) Image display device and image display method, image output device and image output method, and image display system
KR20190026004A (en) Single Deep Track Adaptation - Convergence Solutions
CN106998409A (en) A kind of image processing method, head-mounted display and rendering apparatus
CN111445583B (en) Augmented reality processing method and device, storage medium and electronic equipment
US9626564B2 (en) System for enabling eye contact in electronic images
CN114219878B (en) Animation generation method and device for virtual character, storage medium and terminal
US20200120322A1 (en) Image generating device, image display system, and image generating method
KR102461232B1 (en) Image processing method and apparatus, electronic device, and storage medium
KR20200142539A (en) Dynamic Forbited Pipeline
EP3619685B1 (en) Head mounted display and method
CN107065197B (en) Human eye tracking remote rendering real-time display method and system for VR glasses
JP2018141816A (en) Video system, video generation method, video distribution method, video generation program and video distribution program
CN107908278A (en) A kind of method and apparatus of Virtual Reality interface generation
CN108431872A (en) A kind of method and apparatus of shared virtual reality data
CN108141559A (en) Image system
CN105939497A (en) Media streaming system and media streaming method
CN111008929B (en) Image correction method and electronic equipment
WO2017199859A1 (en) Information processing device, information processing system and information processing method
US20230229010A1 (en) Head-mountable device for posture detection
CN107592520A (en) The imaging device and imaging method of AR equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant