WO2015145541A1 - Video display device - Google Patents

Video display device

Info

Publication number
WO2015145541A1
WO2015145541A1 (PCT/JP2014/058074)
Authority
WO
WIPO (PCT)
Prior art keywords
video
unit
user
display device
image
Prior art date
Application number
PCT/JP2014/058074
Other languages
French (fr)
Japanese (ja)
Inventor
竜志 鵜飼
大内 敏
川村 友人
瀬尾 欣穂
佑哉 大木
Original Assignee
日立マクセル株式会社 (Hitachi Maxell, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日立マクセル株式会社 (Hitachi Maxell, Ltd.)
Priority to PCT/JP2014/058074 priority Critical patent/WO2015145541A1/en
Publication of WO2015145541A1 publication Critical patent/WO2015145541A1/en

Classifications

    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G — ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G3/20: Control arrangements or circuits for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G2320/0261: Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
    • G09G2340/0407: Resolution change, inclusive of the use of different resolutions for different screen areas
    • G09G2340/0435: Change or adaptation of the frame rate of the video stream
    • G09G2354/00: Aspects of interface with display user

Definitions

  • the present invention relates to a video display device, and more particularly to a video display device suitable for use by a user while moving.
  • an HMD (head-mounted display) is a typical example of such a video display device.
  • video display devices such as HMDs are based on the premise that video is always displayed regardless of the user's environment and movement conditions.
  • when viewing video while moving, the user may experience discomfort similar to motion sickness and may become less alert to the surrounding environment.
  • Patent Document 1 discloses an HMD that secures the user's field of view by providing movement detection means for detecting the movement of the user and stopping the video output (display) when movement is detected.
  • in Patent Document 1, however, since the image display of the HMD is stopped while the user is moving, the user cannot obtain any information from the HMD at all, which lowers convenience.
  • the magnitude of the above-mentioned discomfort, and of the reduction in attention that viewing video causes a moving user, depends on the type of video displayed, for example on the degree of motion in the video. In other words, the user's sensation differs greatly between a moving image and a still image, so it is not always necessary to stop the display entirely.
  • accordingly, an object of the present invention is to provide a video display device that displays video without causing discomfort to a moving user or reducing the user's attention.
  • the present invention is a video display device usable while the user is moving, and includes: an information amount changing unit that changes the amount of information per unit time of the video provided from a video providing unit; a video display unit that displays the video whose information amount has been changed; an imaging unit that images subjects around the user; a movement detection unit that detects the user's movement state based on the images captured by the imaging unit; and an information amount determination unit that, according to the movement state detected by the movement detection unit, determines for the information amount changing unit the change value of the information amount per unit time of the video.
  • according to the detection result of the movement detection unit, the information amount determination unit determines a smaller information amount per unit time for the video when the user is moving than when the user is not moving.
  • this makes it possible to provide a video display device that displays video without causing discomfort to a moving user or reducing the user's attention.
  • FIG. 1 is a block diagram of a video display device according to Embodiment 1.
  • FIG. 2 is a diagram illustrating an example of images captured by the imaging unit 3. FIG. 3 is a diagram showing an example of the relationship between the movement state and the frame rate. FIG. 4 is a diagram explaining a specific example of frame rate conversion.
  • FIG. 6 is a block diagram of a video display apparatus according to the third embodiment.
  • FIG. 10 is a block diagram of a video display apparatus according to a fourth embodiment.
  • in the first embodiment, the video display device is an HMD (head-mounted display).
  • the user uses the HMD by wearing it on his or her head.
  • FIG. 1 is a block diagram of a video display apparatus according to the first embodiment.
  • the video display device (HMD) 10 includes a video receiving unit 2 that receives the video provided from the video providing unit 1, an information amount changing unit 6 that changes the information amount of the received video, and a video display unit 8 that displays the video changed by the information amount changing unit 6.
  • the video display device 10 further includes an imaging unit 3 that images surrounding subjects, a movement detection unit 4 that detects the user's movement state based on the images acquired by the imaging unit 3, and an information amount determination unit 5 that determines the information amount per unit time of the video displayed on the video display unit 8 according to the movement state detected by the movement detection unit 4.
  • the information amount changing unit 6 converts the video received by the video receiving unit 2 to the information amount per unit time determined by the information amount determination unit 5.
  • the HMD 10 also has a mounting part for mounting it on the user's head.
  • the video providing unit 1 is a device that provides video to be displayed.
  • the video providing unit 1 provides video from a storage device disposed in an external data center.
  • the video providing unit 1 may be built in the HMD 10 and may be a storage element such as a flash memory or an imaging element such as a camera.
  • the imaging unit 3 is attached to the HMD 10 and images subjects around the user. For example, it is attached to the front of the HMD 10 so that it can image subjects in the user's forward (moving) direction.
  • the movement detection unit 4 detects the user's movement state based on the images acquired by the imaging unit 3; that is, it determines whether or not the user is moving and, if the user is moving, estimates the speed.
  • FIG. 2 is a diagram illustrating an example of an image captured by the imaging unit 3, and shows a captured image 21 at time T1 and a captured image 22 at time T2.
  • the imaging unit 3 images a subject in the forward direction of the user at a predetermined time interval such as every second, and sends the captured image to the movement detection unit 4.
  • the movement detection unit 4 recognizes an arbitrary subject existing around the user or a part thereof as a feature around the user from the captured images at each time.
  • “tree” and “building” are recognized as feature objects 21a, 21b, 22a, and 22b.
  • it is not necessary to recognize the type or name of the feature (for example, tree or building); the feature only needs to consist of a distinctive graphic element (for example, a line or a polygon).
  • the movement detection unit 4 tracks the on-screen position of each feature across a plurality of consecutive captured images. When the features move radially outward from near the center of the screen over time, it is determined that the user is moving. In the example of FIG. 2, feature 21a moves toward the lower-left corner of the screen (22a) and feature 21b moves toward the lower-right corner (22b), so it can be determined that the user is moving. Conversely, if the features do not move toward the four corners, it is determined that the user is not moving (is stationary).
  • the attachment position of the imaging unit 3 is not limited to the front; when the imaging direction differs, the criterion for determining the user's movement state differs accordingly. For example, when imaging to the user's right, the features move from left to right in the screen, and when imaging to the user's left, they move from right to left; if such motion is observed, it may be determined that the user is moving.
  • the imaging unit 3 may be attached so as to image a subject in the user's foot direction.
  • in that case, the movement detection unit 4 recognizes, for example, the user's limbs or the road surface as features and determines from their motion whether or not the user is moving.
  • in addition, the movement detection unit 4 estimates the moving speed; that is, even while the user is moving, it distinguishes "walking" at a low moving speed from "running" at a high moving speed.
  • the moving speed of the user is estimated based on the moving speed of the feature in the captured image.
  • by tracking the position of a feature in the screen at a plurality of times, the user's moving speed can be estimated with higher accuracy. For example, when the estimated speed is less than 6 km/h, "walking" is determined, and when it is 6 km/h or more, "running" is determined.
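As a sketch, this classification by estimated speed could look like the following; a minimal illustration in which the function name and the near-zero margin for "at rest" are assumptions, while the 6 km/h walking/running boundary is the example value from the text.

```python
def classify_movement(speed_kmh: float) -> str:
    """Classify the user's movement state from an estimated speed.

    The 6 km/h walking/running boundary follows the example in the
    text; the 0.5 km/h stillness margin is an illustrative assumption.
    """
    if speed_kmh < 0.5:    # near-zero speed: treat as stationary
        return "at rest"
    elif speed_kmh < 6.0:  # below 6 km/h: walking
        return "walking"
    else:                  # 6 km/h or more: running
        return "running"
```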
  • another movement state of the user is going up or down stairs. To detect this, image recognition is used in combination: the movement detection unit 4 determines that the user is going up or down stairs when it detects that the user is moving and the captured image shows that the user is on stairs.
  • the user may also be moving while driving a vehicle. Image recognition is likewise used to detect this: the movement detection unit 4 determines that the user is driving when it detects that the user is moving and that the user is sitting in the driver's seat of a vehicle.
  • the movement states are not limited to these; various other movement states can be detected by using image recognition in combination.
  • the information amount determination unit 5 receives the movement-state detection result from the movement detection unit 4 and, according to the movement state, determines the information amount per unit time of the video that the HMD 10 displays to the user.
  • the information amount changing unit 6 changes the video received from the video receiving unit 2 so as to have the information amount per unit time determined by the information amount determining unit 5.
  • in this embodiment, the video frame rate is used as the information amount per unit time.
  • the frame rate is normally the number of images (frames) per second (unit: fps), but here it is defined as the frequency at which the images constituting the video are updated to different images.
  • the information amount determination unit 5 determines the information amount per unit time, that is, it determines that the video frame rate should be equal to or less than a predetermined frame rate.
  • the information amount changing unit 6 performs a conversion process of the video frame rate so as to be equal to or less than the frame rate determined by the information amount determining unit 5.
  • the information amount determination unit 5 and the information amount change unit 6 determine the frame rate and change the frame rate as follows, for example.
  • FIG. 3 is a diagram illustrating an example of the relationship between the movement state and the frame rate. This relationship is stored in advance as a lookup table, and the information amount determination unit 5 determines the frame rate with reference to this table.
  • the moving state of the user is classified into “at rest”, “walking”, “running”, “driving”, and the like, and the maximum frame rate of the output video in each state is defined.
  • in this example, the frame rate is not limited when the user is stationary, but is limited to 10 fps or less while walking and 1 fps or less while running. While driving, it is set to 0 fps (a still image).
  • the frame rate is lowered when the user is moving compared to when the user is stationary.
  • the frame rate is lowered as the moving speed of the user increases.
  • at 0 fps, the displayed still image is not updated.
  • the setting of the frame rate shown in FIG. 3 is merely an example, and can be appropriately set according to the moving state. For example, even in the same walking state, different frame rates may be set for walking at 3 km / h and walking at 4 km / h. Further, the frame rate may be continuously changed according to the speed.
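The lookup performed by the information amount determination unit 5 can be sketched as a small table; the per-state limits follow the FIG. 3 example, while the table and function names are illustrative assumptions.

```python
# Maximum output frame rate per movement state (fps), following the
# FIG. 3 example: None means "no limit"; 0 means a still image.
MAX_FRAME_RATE = {
    "at rest": None,   # frame rate not limited
    "walking": 10,     # 10 fps or less
    "running": 1,      # 1 fps or less
    "driving": 0,      # still image
}

def determine_frame_rate(state: str, source_fps: int):
    """Return the frame rate chosen for the given movement state,
    clamping the source frame rate to the per-state maximum."""
    limit = MAX_FRAME_RATE.get(state)
    if limit is None:
        return source_fps          # no limitation when stationary
    return min(source_fps, limit)  # clamp to the per-state maximum
```

A continuous speed-to-frame-rate mapping, as the text suggests, would replace the discrete table with a function of the estimated speed.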
  • FIG. 4 is a diagram for explaining a specific example of frame rate conversion.
  • (A) is an image before frame rate conversion
  • (b) and (c) are images after frame rate conversion.
  • the information amount changing unit 6 extracts frames corresponding to the converted frame rate from the video frame sequence (a) received by the video receiving unit 2 and generates a new video frame sequence (b) or (c).
  • the extracted frames are preferably those whose times coincide with the output timing; when the times do not match exactly, selecting the frame that minimizes the time difference yields video with smooth motion.
  • with the conversion of (b), the number of images (frames) per second of the output video 31 becomes the value (10 fps) determined by the information amount determination unit 5.
  • the conversion of (c) also converts the video 32 to 10 fps, but it changes the number of image updates per second without changing the number of images output per second: at timings when no update occurs, the previous image is output again. In this example, frames 0, 3, and 6 are extracted from the pre-conversion video 30 and each is output three times in succession, reducing the number of image updates. Thus, in conversion (c) as well, the effective frame rate of the received video is changed to the frame rate determined by the information amount determination unit 5.
  • to achieve this, the information amount changing unit 6 stores the image (frame) once it has been output and outputs the same image again at the next output timing.
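The two conversions above can be sketched as follows; an illustrative Python sketch (function names are assumptions) for the case where the source frame rate is an integer multiple of the target, as in the 30 fps to 10 fps example.

```python
def thin_frames(frames, src_fps, dst_fps):
    """Conversion (b): extract every (src_fps // dst_fps)-th frame so
    the output contains dst_fps frames per second."""
    step = src_fps // dst_fps
    return frames[::step]

def repeat_frames(frames, src_fps, dst_fps):
    """Conversion (c): keep the output frame count unchanged but
    update the image only dst_fps times per second, repeating the
    stored previous frame at timings that are not updated."""
    step = src_fps // dst_fps
    out = []
    last = None
    for i, frame in enumerate(frames):
        if i % step == 0:  # update timing: take the new frame
            last = frame
        out.append(last)   # otherwise repeat the stored frame
    return out
```

With frames 0 through 8 at 30 fps converted to 10 fps, thinning yields frames 0, 3, 6, and repetition yields each of those frames three times in succession, matching the example in the text.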
  • switching the frame rate may itself give the viewing user a sense of discomfort at the moment of switching. It is therefore desirable that the information amount changing unit 6 switch the frame rate of the output video gradually, over a certain time width, rather than instantaneously.
  • the video output unit 7 receives the video whose information amount has been changed by the information amount changing unit 6 and generates a display signal for driving the video display unit 8.
  • the video display unit 8 receives a display signal from the video output unit 7 and displays a video.
  • by reducing the motion in the displayed video, the degree to which a moving user concentrates on (is immersed in) the video can be reduced.
  • as a result, discomfort similar to motion sickness is reduced, and attention to the surrounding environment is improved.
  • moreover, since the video display is continued, the provision of information to the user is not interrupted.
  • in this way, a video display device that can provide information continuously without causing discomfort to a moving user or reducing attention can be implemented.
  • in the first embodiment, the frame rate of the video is lowered.
  • in the second embodiment, other methods of reducing the amount of information per unit time are described.
  • method of changing the display to black-and-white or grayscale: to reduce the amount of information per unit time, the information amount determination unit 5 determines that the video should be displayed in black-and-white or grayscale, and the information amount changing unit 6 converts the color video received from the video receiving unit 2 to black-and-white or grayscale. First, to convert to grayscale, the RGB values (R, G, B) constituting each pixel in each frame of the received color video are combined into a single gray value A (for example, an average or weighted sum of R, G, and B).
  • next, to convert to black-and-white, a threshold Ath for the gray value A of each pixel is determined: if A ≥ Ath the pixel is set to white, and if A < Ath the pixel is set to black.
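Per pixel, the two conversions can be sketched as follows; a minimal illustration in which the plain RGB average for A and the threshold value of 128 are assumptions (the text does not fix either).

```python
def to_gray(pixel):
    """Convert an (R, G, B) pixel to a gray value A. A plain average
    is used here for illustration; a weighted luma sum is equally
    possible."""
    r, g, b = pixel
    return (r + g + b) / 3

def to_black_and_white(pixel, a_th=128):
    """Binarize a pixel against the threshold Ath: white (255) when
    A >= Ath, black (0) otherwise. The value 128 for Ath is an
    illustrative assumption for 8-bit channels."""
    return 255 if to_gray(pixel) >= a_th else 0
```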
  • method of matching the hue of the video to the surroundings: the information amount determination unit 5 determines that the hue of the video should be matched to the surrounding hues, and the information amount changing unit 6 applies a hue-matching process to the video received from the video receiving unit 2. Specifically, based on the surrounding images captured by the imaging unit 3, the movement detection unit 4 detects the hue of the environment in which the user is moving and sends the detection result to the information amount determination unit 5. The information amount determination unit 5 then sends the presence or absence of a hue change, together with the surrounding hue acquired from the movement detection unit 4, to the information amount changing unit 6.
  • the information amount changing unit 6 changes the hue of each pixel of each frame of the video received from the video receiving unit 2 to the surrounding hue sent from the information amount determining unit 5.
  • by matching the hue of the displayed video to the surrounding hue, the video is assimilated into the surroundings, and the user's degree of concentration on (immersion in) the video can be reduced.
  • method of displaying the video at a reduced size: to reduce the amount of information per unit time, the information amount determination unit 5 determines that the video should be displayed at a reduced size, and the information amount changing unit 6 applies a reduction process to the video received from the video receiving unit 2.
  • specifically, the received video is reduced in the vertical and/or horizontal direction, and the reduced video is placed in a part of the video display area output by the information amount changing unit 6. The display area where the reduced video is not placed is preferably converted to a single color such as black.
  • method of displaying only a part of the video (reducing the angle of view): to reduce the amount of information per unit time, the information amount determination unit 5 determines that only a part of the video should be displayed, and the information amount changing unit 6 cuts out a part of the video received from the video receiving unit 2 and outputs it. The remaining display area that is not cut out is preferably converted to a single color such as black.
  • method of reducing the luminance of the displayed video: to reduce the amount of information per unit time, the information amount determination unit 5 determines that the luminance of the video should be lowered, and the information amount changing unit 6 applies a luminance-lowering process to the video received from the video receiving unit 2. Specifically, the luminance, hue, and saturation of each pixel of each frame are obtained, only the luminance is lowered while the hue and saturation are maintained, and the result is used as the color data of the pixel.
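This per-pixel operation can be sketched with an HSV round trip (here the HSV "value" component stands in for luminance); a minimal illustration in which the function name and the 0.5 dimming factor are assumptions.

```python
import colorsys

def dim_pixel(rgb, factor=0.5):
    """Lower only the brightness of an RGB pixel while keeping hue
    and saturation, via an RGB -> HSV -> RGB round trip. RGB
    components are floats in [0, 1]; the 0.5 factor is an
    illustrative assumption."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s, v * factor)
```

For example, pure red (1.0, 0.0, 0.0) keeps its hue and full saturation but comes out at half brightness.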
  • the information amount changing unit 6 performs processing for lowering the luminance of the video received from the video receiving unit 2.
  • the luminance can be lowered by the video output unit 7 instead of the information amount changing unit 6.
  • FIG. 5 is a block diagram showing a modification of the video display device.
  • in this modification, the information amount changing unit 6 of FIG. 1 is removed; the video output unit 7 receives the video directly from the video receiving unit 2 and changes its luminance according to a luminance control signal determined by the information amount determination unit 5.
  • the video display unit 8 includes a light source (backlight or the like) (not shown) for displaying a video, and changes the luminance by changing the intensity of light emitted from the light source.
  • in that case, the video output unit 7 sends a drive signal to the video display unit 8 so as to reduce the intensity of the light emitted from the light source, and as a result the luminance of the video displayed on the video display unit 8 falls. This configuration has the secondary effect of reducing power consumption in the light source.
  • in the first embodiment, the image acquired by the imaging unit 3 is used to detect the user's movement.
  • in the third embodiment, measurement by another sensor is used in addition to the image acquired by the imaging unit 3.
  • the sensor is, for example, a three-dimensional acceleration sensor, with which the user's walking is detected.
  • FIG. 6 is a block diagram of the video display apparatus according to the third embodiment.
  • the video display device 10b of this embodiment adds a sensor 11 to the configuration of FIG. 1.
  • the same elements as those in the first embodiment (FIG. 1) are denoted by the same reference numerals, and redundant description is omitted.
  • the sensor 11 is, for example, a three-dimensional acceleration sensor, and the movement detection unit 4 detects the user's movement state using both the image acquired by the imaging unit 3 and the result measured by the sensor 11. In doing so, the movement detection unit 4 obtains, from each of the two pieces of information (captured image and sensor measurement result), a probability that the user is moving, and statistically determines the user's final movement state from the two probability values. Alternatively, the movement determination criterion based on one piece of information can be changed according to the other. The operation of this embodiment is described below.
  • the detection of the movement of the user using the image from the imaging unit 3 is basically the same as in the first embodiment.
  • here, instead of making a binary determination of whether or not the user is moving, the probability that the user is moving is obtained by tracking the motion of the above-described features in the image.
  • specifically, the moving speed of a feature can be obtained by comparing two or more captured images acquired from the imaging unit 3 at different times, and the probability that the user is moving can be obtained from that speed.
  • the body motion associated with the user walking (or running) is measured by an acceleration sensor.
  • the three-dimensional acceleration sensor is installed on the HMD so that when the HMD is mounted on the head and the user stands up and faces the front, the acceleration in three axial directions can be measured.
  • the user's up and down direction (Sz) is the first axis
  • the user's left and right direction (Sy) is the second axis
  • the user's front and rear direction (Sx) is the third axis.
  • FIG. 7 is a diagram illustrating a detection signal of the three-dimensional acceleration sensor with respect to the user's walking motion.
  • the user's walking (or running) operation is repeated with the following four states (W1) to (W4) as one cycle.
  • (W1) A state in which the right foot is on the front side in the traveling direction from the trunk and the left foot is on the rear side in the traveling direction from the trunk.
  • (W2) A state in which both the left and right feet are directly under the torso and the right foot is in contact with the ground while the left foot is not in contact with the ground.
  • (W3) A state in which the left foot is on the front side in the traveling direction from the trunk and the right foot is on the rear side in the traveling direction from the trunk.
  • (W4) A state in which both the left and right feet are directly under the torso and the left foot is in contact with the ground, while the right foot is not in contact with the ground.
  • during one walking cycle, the acceleration sensors in the first and third axis directions (Sz, Sx) output signals of two cycles, while the acceleration sensor in the second axis direction (Sy) outputs a signal of one cycle.
  • therefore, the output frequencies (fz, fx) of the acceleration sensors in the first and third axis directions are twice the output frequency (fy) of the acceleration sensor in the second axis direction, and the frequency fy is the reciprocal (1/T) of the time T required for one walking cycle.
  • accordingly, the frequency ratio (fz/fy) of the outputs of the first-axis and second-axis acceleration sensors and the frequency ratio (fx/fy) of the outputs of the third-axis and second-axis acceleration sensors both take the value 2, and this is used as the walking determination condition.
  • the actual output of the acceleration sensor contains noise of other frequency components. Therefore, when determining from the frequency ratio (fz/fy or fx/fy) whether the user is walking, the determination condition is widened: for example, the user is judged likely to be walking when the ratio is in the range of 1.7 to 2.3.
  • the time required for one walking (or running) cycle varies between users and with the walking environment. Therefore, when determining walking from the output frequency (fy) of the acceleration sensor (Sy) in the second axis direction, the determination condition is also widened: for example, the user is judged likely to be walking when the frequency is in the range of 0.8 Hz to 1.2 Hz, and likely to be running when the frequency is, for example, 2 Hz or more.
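The widened frequency conditions above can be sketched in Python; the function names are illustrative, and the thresholds are the example values from the text (ratio of 1.7 to 2.3, fy of 0.8 to 1.2 Hz for walking, 2 Hz or more for running).

```python
def is_walking(fz, fy):
    """Judge walking likelihood from the vertical-axis frequency fz
    and the lateral-axis frequency fy, using the widened conditions
    from the text. Returns True when both conditions hold."""
    ratio_ok = 1.7 <= fz / fy <= 2.3  # ideally fz / fy == 2
    freq_ok = 0.8 <= fy <= 1.2        # one walking cycle of roughly 1 s
    return ratio_ok and freq_ok

def is_running(fy):
    """Judge running likelihood: fy of 2 Hz or more (text example)."""
    return fy >= 2.0
```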
  • in the above description, both the frequency ratios (fz/fy, fx/fy) among the three-axis acceleration sensors (Sz, Sy, Sx) and the output frequency (fy) of the second-axis acceleration sensor (Sy) are used, but only one of them may be used.
  • likewise, although the output frequency (fy) of the second-axis acceleration sensor (Sy) is used here, the output frequency of the first-axis or third-axis acceleration sensor (Sz, Sx), or of several of the first- to third-axis acceleration sensors, may be used instead.
  • a one-dimensional acceleration sensor or a two-dimensional acceleration sensor may be used as the sensor 11.
  • the movement detection unit 4 determines the user's movement state based on the probability that the user is moving obtained from the captured image of the imaging unit 3 and the probability that the user is moving obtained from the measurement result of the sensor 11.
  • FIG. 8 is a diagram illustrating an example of determination of the movement state of the user.
  • the walking probability obtained from the captured image of the imaging unit 3 is P1
  • the walking probability obtained from the measurement result of the sensor 11 is P2.
  • the arithmetic average Pav of the two probabilities P1 and P2 is obtained. If Pav is equal to or greater than a threshold Pth, it is determined that the user is walking; if Pav is less than Pth, it is determined that the user is not walking.
  • the threshold value Pth is set to 60%.
  • for example, if both the walking probability P1 and the walking probability P2 are 80%, the arithmetic average Pav is also 80%, so it is determined that the user is walking.
  • if the walking probability P1 obtained from the imaging unit 3 is 80% and the walking probability P2 obtained from the sensor 11 is 20%, the arithmetic average Pav is 50%, which is below Pth, so it is determined that the user is not walking.
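The averaging rule above amounts to a one-line check; a minimal sketch in which the 60% threshold is taken from the example and the function name is an assumption.

```python
def fuse_walking(p1, p2, p_th=0.6):
    """Combine the walking probability P1 (from the captured image)
    and P2 (from the sensor) by arithmetic average and compare the
    result with the threshold Pth (60% in the text's example)."""
    p_av = (p1 + p2) / 2
    return p_av >= p_th
```

With P1 = P2 = 0.8 the average is 0.8 and the user is judged to be walking; with P1 = 0.8 and P2 = 0.2 the average is 0.5, below the threshold.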
  • vibration detection switch: a vibration detection switch is installed in the HMD to measure the user's movement in the first axis (vertical) direction. Installed in this way, the switch detects two vibrations during one walking cycle, and by detecting these vibrations the movement detection unit 4 determines whether or not the user is walking.
  • position measuring device: a position measuring device using, for example, GPS signals is installed in the HMD.
  • the movement detection unit 4 obtains the user's moving speed from the position information calculated by the position measuring device; if the speed is, for example, 3 to 6 km/h, it is determined that the user is likely to be walking.
  • imaging means for the user's eyes: separately from the imaging unit 3 that acquires surrounding images, imaging means (a camera) that detects the movement of the user's eyes is installed. While walking, the user's eye movement becomes more active in order to grasp the surrounding environment, so when the detected line of sight is moving actively, it is determined that the user is likely to be walking.
  • sweating sensor: a sweating sensor is attached to the user's body. Since sweating is more active while the user is walking, the larger the amount of sweating per unit time, the higher the possibility that the user is walking is judged to be.
  • pulse meter: a pulse meter is attached to the user's body. Since the heart rate increases while the user is walking, the higher the heart rate, the higher the possibility that the user is walking is judged to be.
  • in the descriptions of the sensors above, detection of the user's walking was assumed, but it goes without saying that running can also be detected. The sensor 11 was also assumed to be mounted on the HMD, but the sensor measurement result may instead be input from an external device.
  • FIG. 9 is a block diagram showing a modification of the video display device.
  • a sensor result input unit 13 for inputting a measurement result of a sensor mounted on the external device 12 is provided instead of the sensor 11.
  • the movement detection unit 4 determines the movement of the user using the sensor measurement result input to the sensor result input unit 13.
  • As the external device 12, for example, a smartphone can be used.
  • When the external device 12 transmits its measurement results via Bluetooth (registered trademark), the sensor result input unit 13 is a Bluetooth receiver.
  • By judging the user's movement using both the image acquired by the imaging unit 3 and the result measured by the sensor 11, the movement detection unit 4 can detect the movement state more accurately than when only the image acquired by the imaging unit 3 is used.
  • In the embodiments described above, the video receiving unit 2 receives video from the video providing unit 1, and the information amount changing unit 6 changes the amount of information per unit time of the received video according to the determination of the information amount determination unit 5.
  • In the fourth embodiment, in accordance with the determination by the information amount determination unit 5, the video providing unit 1 is requested to provide video having the determined amount of information per unit time.
  • FIG. 10 is a block diagram of a video display apparatus according to the fourth embodiment.
  • In the video display device 10d, the information amount changing unit 6 is removed and a video request unit 14 is provided instead.
  • The same elements as those in the first embodiment (FIG. 1) are denoted by the same reference numerals, and redundant description is omitted.
  • The information amount determination unit 5 determines the amount of information per unit time of the video to be displayed according to the user's movement state, sets a video specification (for example, a frame rate) for realizing it, and sends the specification to the video request unit 14.
  • Based on the video specification received from the information amount determination unit 5, the video request unit 14 sends a request signal for the video to be provided to the video providing unit 1.
  • The video providing unit 1 transmits video corresponding to the request to the video receiving unit 2.
  • As a result, the video output unit 7 and the video display unit 8 can display video having the amount of information per unit time determined by the information amount determination unit 5. That is, the request signal from the video request unit 14 to the video providing unit 1 reflects the user's movement state detected by the movement detection unit 4.
  • The video providing unit 1 is, for example, a storage device arranged in an external data center, and is a device that can convert and provide video in response to a request from the video request unit 14.
  • Alternatively, the video providing unit 1 may be built into the video display device 10d, for example as an imaging device.
  • In the above description, the movement detection unit 4 detects the user's movement based on the image acquired by the imaging unit 3, but a configuration in which the user's movement is detected using the result measured by the sensor 11 in addition to the imaging unit 3 may also be used.
  • The fourth embodiment has the effect of minimizing the amount of video data transmitted from the video providing unit 1 to the video receiving unit 2.
  • When the video providing unit 1 is a storage device arranged in an external data center, the amount of data transmitted over the network is reduced, and the load on the transmission path is lightened.
  • When the video providing unit 1 is an imaging device, the number of operations of the imaging element in the device is reduced, which has the secondary effects of saving power and extending the device's life.
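A minimal sketch of the request flow in the fourth embodiment. The message format, field names, and frame-rate caps are hypothetical (the caps echo the example values given for Fig. 3); the point it illustrates is that the cap determined from the movement state travels inside the request, so excess frames never cross the transmission path:

```python
# Hypothetical frame-rate caps per movement state (None = no limit),
# echoing the example values described for Fig. 3.
MAX_FPS = {"stationary": None, "walking": 10, "running": 1, "driving": 0}

def build_video_request(movement_state, source_fps):
    """Video request unit 14 (sketch): turn the determined video
    specification into a request for the video providing unit 1.
    The dict fields are illustrative, not from the patent."""
    cap = MAX_FPS[movement_state]
    fps = source_fps if cap is None else min(source_fps, cap)
    return {"type": "video_request", "max_fps": fps}

def provide_video(request, frames, source_fps):
    """Video providing unit 1 (sketch): convert the source so that only
    the requested number of distinct frames per second is transmitted."""
    fps = request["max_fps"]
    if fps == 0:                        # still image: first frame only
        return frames[:1]
    step = max(1, source_fps // fps)
    return frames[::step]
```

For example, one second of 30 fps source video requested while walking (cap 10 fps) would be delivered as 10 frames instead of 30, reducing the data on the link by the same factor.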
  • The present invention is not limited to the above-described embodiments and includes various modifications.
  • In the above embodiments, the HMD has been described as an example of a video display device used while the user is moving. However, the present invention is not limited to this; it can also be applied, for example, to a head-up display (hereinafter abbreviated as HUD) mounted on a vehicle. In that case, the traveling state of the vehicle is detected as the user's movement state, and the amount of information per unit time of the displayed video is changed accordingly. This eases the user's degree of gaze at the video while the vehicle is traveling, providing a HUD that ensures safety.
  • 1: video providing unit; 2: video receiving unit
  • 10a to 10d: video display device


Abstract

This video display device (10) is provided with an information-amount modification unit (6) that modifies the amount of information per unit time in video provided by a video provision unit (1), a video display unit (8) that displays the resulting video, an image-capturing unit (3) that captures images of objects near a user, a movement detection unit (4) that detects the movement state of the user on the basis of the captured images, and an information-amount setting unit (5) that sets the modified amount of information per unit time for the video in accordance with the detected movement state of the user. If the user is moving, the information-amount setting unit (5) sets a lower modified amount of information per unit time for the video than if the user is not moving. The result is a video display device that displays video so as not to cause discomfort to a moving user or reduce the user's attentiveness.

Description

Video display device
 The present invention relates to a video display device, and more particularly to a video display device suitable for use by a user while moving.
 As a video display device assumed to be used while the user moves, there is, for example, a head-mounted display (hereinafter abbreviated as HMD) that the user wears on his or her head. By visually recognizing the video displayed by the HMD, the user can acquire various information while moving.
 Conventionally, video display devices such as HMDs are premised on always displaying video, regardless of the user's environment or movement state. However, when a moving user watches the video displayed by such a device, a symptom resembling motion sickness (discomfort) may arise. In addition, because of the realism of the displayed video, the user may become absorbed in it and pay less attention to the surrounding environment.
 Regarding this problem, for example, Patent Document 1 discloses an HMD that secures the user's field of view by providing movement detection means for detecting the user's movement and stopping image output (display) when movement is detected.
JP 2004-233903 A
 In the method disclosed in Patent Document 1, image display on the HMD stops while the user is moving, so the user cannot obtain any information from the HMD at all, and convenience is reduced. The magnitude of the discomfort and the loss of attentiveness that a moving user experiences when watching video depend on the type of video displayed, for example on how much motion it contains. In other words, moving and still images feel very different to the user, and it is not always necessary to stop the video display.
 In view of the above problems, an object of the present invention is to provide a video display device that displays video without causing discomfort to a moving user and without reducing the user's attentiveness.
 The present invention is a video display device usable while the user moves, comprising: an information amount changing unit that changes the amount of information per unit time of video provided from a video providing unit; a video display unit that displays the video whose amount of information has been changed; an imaging unit that images subjects around the user; a movement detection unit that detects the user's movement state based on images captured by the imaging unit; and an information amount determination unit that determines, according to the detected movement state, the changed value of the amount of information per unit time to be applied by the information amount changing unit. According to the detection result of the movement detection unit, the information amount determination unit sets a smaller changed value of the amount of information per unit time when the user is moving than when the user is not moving.
 According to the present invention, it is possible to provide a video display device that displays video without causing discomfort to a moving user and without reducing the user's attentiveness.
FIG. 1 is a block diagram of a video display device according to Embodiment 1.
FIG. 2 shows an example of an image captured by the imaging unit 3.
FIG. 3 shows an example of the relationship between movement state and frame rate.
FIG. 4 illustrates a specific example of frame rate conversion.
FIG. 5 is a block diagram showing a modification of the video display device.
FIG. 6 is a block diagram of a video display device according to Embodiment 3.
FIG. 7 shows detection signals of a three-dimensional acceleration sensor during the user's walking motion.
FIG. 8 shows an example of judging the user's movement state.
FIG. 9 is a block diagram showing a modification of the video display device.
FIG. 10 is a block diagram of a video display device according to Embodiment 4.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the following embodiments, an HMD (head-mounted display) is described as an example of the video display device. The user wears the HMD on his or her head.
 FIG. 1 is a block diagram of the video display device according to Embodiment 1. The video display device (HMD) 10 includes a video receiving unit 2 that receives video provided from the video providing unit 1, an information amount changing unit 6 that changes the amount of information of the received video, a video output unit 7 that outputs the changed video, and a video display unit 8, such as a liquid crystal display, that displays the video output from the video output unit 7. The video display device 10 further includes an imaging unit 3 that images surrounding subjects, a movement detection unit 4 that detects the user's movement state based on the images acquired by the imaging unit 3, and an information amount determination unit 5 that determines, according to the detected movement state, the amount of information per unit time of the video displayed on the video display unit 8. The information amount changing unit 6 changes the video received by the video receiving unit 2 to the amount of information per unit time determined by the information amount determination unit 5. Although omitted from the figure, the HMD 10 also has a mounting part for wearing it on the user's head.
 The video providing unit 1 is a device that provides the video to be displayed; for example, it provides video from a storage device arranged in an external data center. Alternatively, the video providing unit 1 may be built into the HMD 10, for example as a storage element such as a flash memory or an imaging element such as a camera.
 The imaging unit 3 is attached to the HMD 10 and images subjects around the user. It is attached, for example, to the front of the HMD 10 so that it can image subjects in the forward direction, which is the user's direction of movement. The movement detection unit 4 detects the user's movement state based on the images acquired by the imaging unit 3: that is, it judges whether the user is moving and, if so, at what speed.
 FIG. 2 shows an example of images captured by the imaging unit 3: a captured image 21 at time T1 and a captured image 22 at time T2. The imaging unit 3 images subjects in front of the user at fixed time intervals, for example every second, and sends the captured images to the movement detection unit 4. From the captured image at each time, the movement detection unit 4 recognizes arbitrary subjects around the user, or parts of them, as features of the surroundings. In FIG. 2, a "tree" and a "building" are recognized as features 21a, 21b, 22a, and 22b. It is not necessary to recognize the type or name of a feature (for example, tree or building); a feature need only consist of distinctive graphic elements (for example, lines or polygons).
 The movement detection unit 4 tracks the on-screen positions of these features across a series of captured images. When the features move radially over time from near the center of the screen toward its four corners, the user is judged to be moving. In the example of FIG. 2, feature 21a moves toward the lower-left corner as shown by 22a, and feature 21b moves toward the lower-right corner as shown by 22b, so the user can be judged to be moving. Conversely, when the features are not moving toward the four corners, the user is judged to be stationary.
 In the above example, the imaging unit 3 was attached so as to image subjects in front of the user, but the attachment position is not limited to this. The imaging unit 3 may be attached so as to image the user's right side, left side, or both. In those cases, however, the judgment of the user's movement state differs: for example, when imaging the user's right side, the user is judged to be moving when features move from left to right across the screen, and when imaging the left side, when features move from right to left. Furthermore, the imaging unit 3 may be attached so as to image subjects at the user's feet; in that case, the movement detection unit 4 recognizes, for example, the user's limbs or the road surface as features and judges from their motion whether the user is moving.
 Next, when the movement detection unit 4 judges that the user is moving, it estimates the moving speed. That is, even while moving, the low-speed case of "walking" is distinguished from the high-speed case of "running". The user's moving speed is estimated from the moving speed of the features in the captured images; by tracking feature positions at multiple times, it can be estimated more accurately. As a classification of the user's movement state, for example, a moving speed below 6 km/h is judged to be "walking" and 6 km/h or above to be "running".
 Another movement state of the user is climbing or descending stairs. Detecting this uses image recognition in combination: when the movement detection unit 4 judges that the user is moving and also detects from the captured images that the user is on stairs, it judges that the user is going up or down stairs.
 As a further movement state, the user may be moving while driving a vehicle. Image recognition is used in combination here as well: when the movement detection unit 4 detects that the user is moving and also detects that the user is sitting in the driver's seat of a vehicle, it judges that the user is driving.
 In the above examples, walking, running, stair climbing, and driving were described as movement states of the user, but movement states are not limited to these; various movement states can be detected by combining image recognition.
 Note that even when the user is moving, cases such as riding in a passenger seat of a vehicle the user is not driving, or riding on a train, should be distinguished from driving. These distinctions can be made by image recognition.
 The information amount determination unit 5 receives the detection result of the user's movement state from the movement detection unit 4 and, according to that state, determines the amount of information per unit time of the video that the HMD 10 displays to the user. The information amount changing unit 6 changes the video received from the video receiving unit 2 so that it has the amount of information per unit time determined by the information amount determination unit 5.
 Here, the frame rate of the video is taken up as a specific example of the amount of information per unit time. The frame rate is the number of images (frames) per second (in fps), but here it is defined as the frequency at which the images composing the video are updated to different images. When the movement detection unit 4 detects that the user is moving, the information amount determination unit 5 decides that the amount of information per unit time, that is, the frame rate of the video, shall be at or below a predetermined frame rate. The information amount changing unit 6 then converts the frame rate of the video so that it does not exceed the frame rate determined by the information amount determination unit 5. For example, the determination and the conversion are performed as follows.
 FIG. 3 shows an example of the relationship between movement state and frame rate. This relationship is stored in advance as a lookup table, and the information amount determination unit 5 refers to the table to determine the frame rate. In the table, the user's movement state is classified into "stationary", "walking", "running", "driving", and so on, and a maximum output frame rate is defined for each state. For example, the frame rate is not limited while stationary, but is limited to 10 fps or less while walking and 1 fps or less while running; while driving it is set to 0 fps (a still image). In this way, the frame rate is lowered when the user is moving compared with when stationary, and the higher the user's moving speed, the lower the frame rate. In particular, while the user is driving, a still image that is never updated is shown.
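The lookup described above amounts to a small table mapping movement state to a frame-rate cap. A minimal sketch, using the example values given for Fig. 3 (with `None` meaning "no limit"):

```python
# Frame-rate caps per movement state, mirroring the example values
# described for Fig. 3 (None = no limit while stationary;
# 0 fps = still image, display is not stopped).
MAX_FPS = {
    "stationary": None,
    "walking": 10,
    "running": 1,
    "driving": 0,
}

def capped_frame_rate(source_fps, state):
    """Return the output frame rate after applying the cap for `state`."""
    cap = MAX_FPS[state]
    return source_fps if cap is None else min(source_fps, cap)
```

As the text notes, the table entries are only an example; finer-grained entries (e.g. different caps for 3 km/h and 4 km/h walking) or a continuous function of speed would fit the same structure.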
 The frame rate settings shown in FIG. 3 are merely an example and can be set as appropriate according to the movement state. For example, even within the walking state, different frame rates may be set for walking at 3 km/h and at 4 km/h, and the frame rate may also be varied continuously with speed.
 Even when the moving speed is high, if the user is riding in a vehicle driven by someone else or is on a train, the frame rate limit may be relaxed and set to a level that does not cause the user discomfort (a feeling of motion sickness).
 FIG. 4 illustrates a specific example of frame rate conversion: (a) is the video before conversion, and (b) and (c) are videos after conversion.
 The information amount changing unit 6 extracts, from the video frame sequence (a) received by the video receiving unit 2, the frames corresponding to the converted frame rate and generates a new video frame sequence (b) or (c). The extracted frames are preferably those at the matching times; when the times do not match, selecting the frame with the smallest time offset yields video with smooth motion.
 First, the case of converting the video 30 of (a), at a frame rate of 30 fps, into the video 31 of (b), at 10 fps, is described. The frame sequence before conversion is numbered s = 0, 1, …, 6, and the sequence after conversion t = 0, 1, 2. Frames t = 0, 1, 2 of the converted video 31 are extracted from frames s = 0, 3, 6 of the source video 30, the frames at the same times. As a result, the number of images (frames) per second of the output video 31 becomes the value determined by the information amount determination unit 5 (10 fps).
 In contrast, the conversion in (c) also produces video 32 at an effective frame rate of 10 fps, but without changing the number of images output per second; instead, the number of images updated per second is changed, and at timings when no update occurs, the immediately preceding image is output again. The frame sequence of the converted video 32 is numbered u = 0, 1, …, 6, equal to the number of images per second of the source video 30. Frames u = 0, 1, 2, 3, 4, 5, 6 of the converted video 32 equal frames s = 0, 0, 0, 3, 3, 3, 6 of the source video 30. In this case, frames s = 0, 3, 6 are extracted from the source video 30 and each is output three times, lowering the number of image updates. The conversion in (c), too, changes the effective frame rate of the received video to the frame rate determined by the information amount determination unit 5.
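Both variants of the 30 fps → 10 fps conversion can be sketched as simple frame extraction, treating one second of video as a list of frames (a toy model, assuming the source rate is an integer multiple of the target rate):

```python
def downconvert(frames, src_fps, dst_fps, repeat=False):
    """Frame-rate down-conversion by extraction, as in Fig. 4.

    frames: one second of source frames (e.g. s = 0..6 above).
    repeat=False -> variant (b): output only the extracted frames.
    repeat=True  -> variant (c): keep the source frame count, but
    repeat each extracted frame so fewer distinct updates occur.
    Assumes src_fps is an integer multiple of dst_fps.
    """
    step = src_fps // dst_fps
    kept = frames[::step]                     # e.g. s = 0, 3, 6
    if not repeat:
        return kept
    return [kept[i // step] for i in range(len(frames))]
```

For the example in the text, `downconvert(list(range(7)), 30, 10)` yields `[0, 3, 6]` (variant (b)), while passing `repeat=True` yields `[0, 0, 0, 3, 3, 3, 6]` (variant (c)), matching the u and s sequences above.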
 Conversion to a frame rate of 0 fps means outputting a still image. In that case, the information amount changing unit 6 stores the image (frame) once output and outputs the same image again at each subsequent output timing. Thus, a frame rate of 0 fps means that a fixed video frame is displayed every time; it does not mean stopping the video display.
 Note that switching the frame rate may feel jarring to the user watching the video at the moment of the switch. The information amount changing unit 6 therefore preferably switches the output frame rate gradually over a certain period rather than instantaneously.
 The video output unit 7 receives the video whose amount of information has been changed by the information amount changing unit 6 and generates a display signal for driving the video display unit 8. The video display unit 8 receives the display signal from the video output unit 7 and displays the video.
 When the frame rate of the video displayed on the video display unit 8 is lowered in this way, the amount of change in the video entering the user's field of view decreases. Because the displayed video moves less, the moving user's concentration on (sense of immersion in) the video is eased, reducing sickness-like discomfort and improving attention to the surrounding environment. In particular, while the user is driving, displaying a still image keeps the video from interfering with the driving operation. In every case, however, video display continues for the user, so the provision of information is never interrupted. This realizes a video display device that can continue to provide information while relieving the moving user's discomfort and ensuring safety.
 In Embodiment 1, lowering the frame rate of the video was described as an example of reducing the amount of information per unit time. Embodiment 2 describes other methods for reducing the amount of information per unit time.
 (1) Changing the displayed video to black-and-white or grayscale.
 To reduce the amount of information per unit time, the information amount determination unit 5 decides to display the video in black-and-white or grayscale, and the information amount changing unit 6 converts the color video received from the video receiving unit 2 accordingly. Grayscale conversion proceeds as follows. For the color video received from the video receiving unit 2, let (R, G, B) be the RGB values of each pixel in each frame. Using coefficients r, g, b satisfying r + g + b = 1, compute A = r×R + g×G + b×B, and set the pixel's RGB values to (A, A, A). For black-and-white conversion, a threshold Ath is set for the value A of each pixel: the pixel is made white when A ≥ Ath and black when A < Ath. Displaying the video in black-and-white or grayscale reduces the user's concentration on (sense of immersion in) the video.
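A per-pixel sketch of the two conversions above. The text leaves the coefficients open; the BT.601 luma weights used here are one common choice satisfying r + g + b = 1, and the threshold value is likewise illustrative:

```python
def to_grayscale(rgb, r=0.299, g=0.587, b=0.114):
    """A = r*R + g*G + b*B with r + g + b = 1; return (A, A, A).
    The default weights (BT.601 luma) are an assumed example."""
    R, G, B = rgb
    a = r * R + g * G + b * B
    return (a, a, a)

def to_black_and_white(rgb, a_th=128):
    """White if A >= Ath, black otherwise (Ath = 128 is illustrative)."""
    a = to_grayscale(rgb)[0]
    return (255, 255, 255) if a >= a_th else (0, 0, 0)
```

Applied to every pixel of every frame, these reproduce the grayscale and black-and-white displays described above.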
(2) A method of matching the hue of the displayed video to the surrounding hue.
  To reduce the amount of information per unit time, the information amount determination unit 5 decides to match the hue of the video to the surrounding hue, and the information amount changing unit 6 applies processing that matches the hue of the video received from the video receiving unit 2 to the surrounding hue. Specifically, based on the surrounding image captured by the imaging unit 3, the movement detection unit 4 detects the hue of the environment through which the user is moving and sends the detection result to the information amount determination unit 5. The information amount determination unit 5 sends whether to change the hue, together with the surrounding hue acquired from the movement detection unit 4, to the information amount changing unit 6. The information amount changing unit 6 changes the hue of each pixel of each frame of the video received from the video receiving unit 2 to the surrounding hue sent from the information amount determination unit 5. Matching the hue of the displayed video to the surrounding hue assimilates the video into the surroundings and reduces the user's degree of concentration on (sense of immersion in) the video.
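The per-pixel hue replacement can be sketched with Python's standard colorsys module (an illustrative sketch only; the HSV color model and the 0.0–1.0 hue scale are assumptions — the patent does not prescribe a particular color representation):

```python
import colorsys

def match_hue(pixel, surrounding_hue):
    """Replace the pixel's hue with the surrounding hue (0.0-1.0),
    keeping its saturation and value unchanged."""
    r, g, b = (c / 255.0 for c in pixel)
    _, s, v = colorsys.rgb_to_hsv(r, g, b)
    nr, ng, nb = colorsys.hsv_to_rgb(surrounding_hue, s, v)
    return tuple(round(c * 255) for c in (nr, ng, nb))

# A saturated red pixel assimilated into green surroundings (hue = 1/3).
print(match_hue((255, 0, 0), 1 / 3))  # (0, 255, 0)
```

A gray pixel has zero saturation, so the hue substitution leaves it visually unchanged, which is the desired behavior.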
(3) A method of displaying the video at a reduced size to narrow its angle of view.
  To reduce the amount of information per unit time, the information amount determination unit 5 decides to display the video at a reduced size, and the information amount changing unit 6 applies reduction processing to the video received from the video receiving unit 2. Specifically, the video received from the video receiving unit 2 is shrunk in the vertical direction, the horizontal direction, or both, and the shrunken video is placed in a part of the video display area output by the information amount changing unit 6. The display area not occupied by the shrunken video is preferably converted to a single color such as black. Reducing the video and thereby narrowing its angle of view suppresses changes in the user's field of vision and reduces the user's degree of concentration on (sense of immersion in) the video.
(4) A method of displaying only a part of the video to narrow its angle of view.
  To reduce the amount of information per unit time, the information amount determination unit 5 decides to display only a part of the video, and the information amount changing unit 6 applies processing that displays only that part of the video received from the video receiving unit 2. Specifically, a portion of the video received from the video receiving unit 2 is cut out and output. The remaining display area that is not cut out is preferably converted to a single color such as black. Displaying only a part of the video and thereby narrowing its angle of view suppresses changes in the user's field of vision and reduces the user's degree of concentration on (sense of immersion in) the video.
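The cut-out operation of method (4) can be sketched as follows, with a frame represented as a nested list of (R, G, B) pixels (a hypothetical representation chosen for illustration; an actual device would operate on its native frame buffer):

```python
def crop_to_window(frame, top, left, height, width, fill=(0, 0, 0)):
    """Keep only the [top:top+height, left:left+width] region of a frame
    (a list of rows of (R, G, B) pixels); paint everything else 'fill'."""
    out = []
    for y, row in enumerate(frame):
        new_row = []
        for x, px in enumerate(row):
            inside = top <= y < top + height and left <= x < left + width
            new_row.append(px if inside else fill)
        out.append(new_row)
    return out

# A 4x4 all-white frame; keep only the central 2x2 window.
white = (255, 255, 255)
frame = [[white] * 4 for _ in range(4)]
small = crop_to_window(frame, top=1, left=1, height=2, width=2)
print(small[0][0], small[1][1])  # (0, 0, 0) (255, 255, 255)
```

The untouched border becomes black, matching the text's suggestion of converting the unused display area to a single color.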
(5) A method of lowering the luminance of the displayed video.
  To reduce the amount of information per unit time, the information amount determination unit 5 decides to lower the luminance of the video, and the information amount changing unit 6 applies luminance-lowering processing to the video received from the video receiving unit 2. Specifically, the luminance, hue, and saturation of each pixel of each frame of the received video are obtained, the luminance is lowered while the hue and saturation are maintained, and the unchanged hue and saturation together with the lowered luminance become the color data of that pixel. Lowering the luminance of the video suppresses changes in the user's field of vision and reduces the user's degree of concentration on (sense of immersion in) the video.
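The luminance reduction of method (5) — dimming while holding hue and saturation fixed — can be sketched with the standard colorsys module (illustrative only; the HLS color model and the 50% dimming factor are assumptions, as the text does not fix either):

```python
import colorsys

def dim_pixel(pixel, factor=0.5):
    """Scale the pixel's lightness by 'factor' while keeping hue and saturation."""
    r, g, b = (c / 255.0 for c in pixel)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    nr, ng, nb = colorsys.hls_to_rgb(h, l * factor, s)
    return tuple(round(c * 255) for c in (nr, ng, nb))

# A fully saturated red stays red, only darker.
print(dim_pixel((255, 0, 0), 0.5))  # (128, 0, 0)
```

A factor of 1.0 leaves the pixel unchanged, so the same routine can serve as a pass-through when no reduction is required.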
(6) A method of lowering the luminance of the displayed video.
  In (5) above, the information amount changing unit 6 applied processing that lowers the luminance of the video received from the video receiving unit 2. As a variation, the luminance can instead be lowered by the video output unit 7 rather than the information amount changing unit 6.
 FIG. 5 is a block diagram showing a modification of the video display device. In the video display device 10a, the information amount changing unit 6 of FIG. 1 is removed; the video output unit 7 receives the video from the video receiving unit 2 and changes the luminance of the video according to a luminance control signal determined by the information amount determination unit 5. The video display unit 8 includes a light source (such as a backlight, not shown) for displaying the video, and the luminance is changed by varying the intensity of the light emitted from the light source. When the information amount determination unit 5 decides to lower the luminance, the video output unit 7 sends the video display unit 8 a drive signal to reduce the intensity of the light emitted from the light source. The luminance of the video displayed on the video display unit 8 is thereby lowered. This configuration has the secondary effect of reducing the power consumed by the light source.
 In the second embodiment, several methods of reducing the amount of information per unit time have been described. These methods need not be used alone: several of them may be combined, or they may be combined with the method of the first embodiment. For example, to reduce the amount of information per unit time, lowering the frame rate of the video (first embodiment) can be combined with displaying only a part of the video (method (4) above) to yield a video that causes little sense of incongruity.
 In the first and second embodiments, the image acquired by the imaging unit 3 is used to detect the movement of the user. In the third embodiment, the movement state of the user is detected with higher accuracy by using, in addition to the image acquired by the imaging unit 3, the measurement result of a separate sensor. Here the sensor is, for example, a three-dimensional acceleration sensor, which is used to detect the user's walking.
 FIG. 6 is a block diagram of the video display device according to the third embodiment. The video display device 10b newly adds a sensor 11. Elements identical to those of the first embodiment (FIG. 1) are given the same reference numerals, and duplicate description is omitted.
 The sensor 11 is, for example, a three-dimensional acceleration sensor, and the movement detection unit 4 detects the movement state of the user using both the image acquired by the imaging unit 3 and the measurement result of the sensor 11. The movement detection unit 4 uses the two pieces of information (the captured image and the sensor measurement) to obtain a separate probability that the user is moving from each, and makes the final judgment of the user's movement state statistically from the two probability values. Alternatively, the movement judgment criterion based on one piece of information can be varied according to the other. The operation of this embodiment is described below.
 Detection of the user's movement using the image from the imaging unit 3 is basically the same as in the first embodiment. However, instead of making a binary judgment of whether the user is moving by tracking the movement of the aforementioned feature in the image, the probability that the user is moving is obtained. For example, two or more captured images acquired from the imaging unit 3 at different times can be compared to obtain the moving speed of the feature, and the probability that the user is moving can be obtained from that moving speed.
 Next, detection of the user's movement using the sensor (three-dimensional acceleration sensor) 11 is described. The body motion accompanying the user's walking (or running) is measured by the acceleration sensor. The three-dimensional acceleration sensor is mounted on the HMD so that it can measure acceleration along three axes when the user wears the HMD on the head, stands upright, and faces forward. In the following, the user's vertical direction (Sz) is the first axis, the user's lateral direction (Sy) is the second axis, and the user's front-rear direction (Sx) is the third axis.
 FIG. 7 is a diagram showing the detection signals of the three-dimensional acceleration sensor during the user's walking motion. The user's walking (or running) motion repeats the following four states (W1) to (W4) as one cycle.
(W1) The right foot is forward of the torso in the direction of travel, and the left foot is behind the torso.
(W2) Both feet are directly below the torso; the right foot is on the ground while the left foot is not.
(W3) The left foot is forward of the torso in the direction of travel, and the right foot is behind the torso.
(W4) Both feet are directly below the torso; the left foot is on the ground while the right foot is not.
 During one walking cycle (T), the acceleration sensor outputs along the first and third axes (Sz, Sx) complete two signal periods, while the output along the second axis (Sy) completes one. Accordingly, the frequencies of the first-axis and third-axis outputs (fz, fx) are twice the frequency of the second-axis output (fy), and the frequency of the second-axis output (fy) equals the reciprocal of the time required for one walking cycle (1/T).
 Walking detection therefore exploits the fact that the first-axis and third-axis outputs (Sz, Sx) contain a frequency component at twice the reciprocal of the walking-cycle time (that is, fz = fx = 2/T), and likewise that the second-axis output (Sy) contains a frequency component at the reciprocal of the walking-cycle time (that is, fy = 1/T). The walking judgment condition is thus that the frequency ratio of the first-axis to the second-axis output (fz/fy), or of the third-axis to the second-axis output (fx/fy), equals 2. In practice, however, the sensor outputs contain noise at other frequencies. When judging from the frequency ratio (fz/fy or fx/fy) whether the user is walking, the judgment condition should therefore be given some width: for example, when the ratio lies in the range 1.7 to 2.3, the user is judged to be walking with high probability.
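The 1.7–2.3 judgment band can be exercised with synthetic accelerometer traces and a crude zero-crossing frequency estimate (a sketch under the assumption of clean sinusoidal outputs; a real implementation would apply spectral analysis to noisy sensor data):

```python
import math

def dominant_freq(samples, sample_rate):
    """Crude frequency estimate: zero crossings / 2 / signal duration."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    return crossings / 2 / (len(samples) / sample_rate)

# Synthetic gait with one cycle per second: fy = 1/T = 1 Hz, fz = 2/T = 2 Hz.
rate, secs = 100, 10
t = [i / rate for i in range(rate * secs)]
sz = [math.sin(2 * math.pi * 2.0 * x) for x in t]  # first axis (vertical)
sy = [math.sin(2 * math.pi * 1.0 * x) for x in t]  # second axis (lateral)

ratio = dominant_freq(sz, rate) / dominant_freq(sy, rate)
walking = 1.7 <= ratio <= 2.3  # the widened judgment band from the text
print(round(ratio, 2), walking)
```

The crude estimate is slightly biased by edge effects, which illustrates exactly why the judgment condition is widened around the ideal ratio of 2 rather than testing for equality.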
 The time required for one walking (running) cycle also differs between individuals and varies with the walking (running) environment. Therefore, when judging the user's walking from the frequency (fy) of the second-axis output (Sy), the judgment condition is likewise given some width: when the frequency lies, for example, in the range 0.8 Hz to 1.2 Hz, the user is judged to be walking with high probability, and when the frequency is, for example, 2 Hz or more, the user is judged to be running with high probability.
 In the above, both the frequency ratios between the three axial outputs (fz/fy, fx/fy) and the frequency (fy) of the second-axis output (Sy) were used to judge the user's walking, but only one of them may be used. Also, although the frequency (fy) of the second-axis output was used above, the frequency of the first-axis or third-axis output (Sz, Sx), or of several of the first to third axes, may be used instead. When acceleration along only one axis, or along only two axes, is used, a one-dimensional or two-dimensional acceleration sensor may be employed as the sensor 11.
 The movement detection unit 4 judges the movement state of the user on the basis of the probability that the user is moving obtained from the captured image of the imaging unit 3 and the probability that the user is moving obtained from the measurement result of the sensor 11.
 FIG. 8 is a diagram showing an example of judging the user's movement state; here the movement state is "walking". Let P1 be the walking probability obtained from the captured image of the imaging unit 3 and P2 the walking probability obtained from the measurement result of the sensor 11. The arithmetic mean Pav of the two probabilities P1 and P2 is computed. If Pav is at or above a threshold Pth, the user is judged to be walking; if it is below Pth, the user is judged not to be walking. Here the threshold is set to Pth = 60%.
 For example, when the walking probability P1 from the imaging unit 3 and the walking probability P2 from the sensor 11 are both 80%, the arithmetic mean Pav is also 80%, so the user is judged to be walking. On the other hand, when P1 is 80% and P2 is 20%, Pav is 50%, so the user is judged not to be walking.
 When averaging the two probabilities P1 and P2, a weighted average may be used instead of the arithmetic mean. The same judgment can be made when the user's movement state is "running".
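The threshold judgment of FIG. 8, including the weighted-average variant, can be sketched as follows (the 50/50 default weights are an assumption for illustration; Pth = 60% is the value given in the text):

```python
def judge_walking(p_image, p_sensor, weights=(0.5, 0.5), p_th=0.60):
    """Fuse the camera-based (P1) and sensor-based (P2) walking probabilities
    by a weighted average Pav and compare against the threshold Pth."""
    w1, w2 = weights
    p_av = (w1 * p_image + w2 * p_sensor) / (w1 + w2)
    return p_av >= p_th

print(judge_walking(0.80, 0.80))  # True:  Pav = 80% >= 60%
print(judge_walking(0.80, 0.20))  # False: Pav = 50% < 60%
print(judge_walking(0.80, 0.20, weights=(0.9, 0.1)))  # True: Pav = 74%
```

The third call shows the weighted variant: when the camera evidence is trusted more heavily, the same pair of probabilities can cross the threshold.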
 In the above example, a three-dimensional acceleration sensor was used as the sensor 11 for detecting the user's movement, but it can be replaced with another sensor. Other applicable sensors are listed below.
  (1) Vibration detection switch
  A vibration detection switch is mounted on the HMD to measure the user's motion along the first axis (vertical direction). Mounted this way, the vibration detection switch detects two vibrations per walking cycle. By detecting these vibrations, the movement detection unit 4 judges whether the user is walking.
  (2) Position measuring device
  A position measuring device using, for example, GPS signals is mounted on the HMD. The movement detection unit 4 obtains the user's moving speed from the position information calculated by the position measuring device. If that speed is, for example, 3 km/h to 6 km/h, the user is judged to be walking with high probability.
  (3) Imaging means for the user's gaze
  Separately from the imaging unit 3, which captures the surrounding image, an imaging means (camera) that detects the movement of the user's gaze is installed. While walking, the user's gaze moves actively in order to grasp the surrounding environment. The gaze movement is detected, and when the user's gaze is moving actively, the user is judged to be walking with high probability.
  (4) Perspiration sensor
  A perspiration sensor is attached to the user's body. Since perspiration becomes active while the user is walking, the greater the amount of perspiration per unit time, the higher the probability that the user is walking is judged to be.
  (5) Pulse meter
  A pulse meter is attached to the user's body. Since the heart rate rises while the user is walking, the higher the heart rate, the higher the probability that the user is walking is judged to be.
 In the description of each sensor above, the user's walking was detected, but running can of course be detected in the same way.
  Also, in the above description the sensor 11 is mounted on the HMD, but a configuration in which the sensor measurement result is input from an external device is also possible.
 FIG. 9 is a block diagram showing a modification of the video display device. In the video display device (HMD) 10c, a sensor result input unit 13 that receives the measurement result of a sensor mounted on an external device 12 is provided in place of the sensor 11. The movement detection unit 4 judges the user's movement using the sensor measurement result received by the sensor result input unit 13. A smartphone, for example, can be used as the external device 12, and Bluetooth (registered trademark), for example, can be used as the communication means between the external device 12 and the HMD 10c. When Bluetooth is used, the sensor result input unit 13 is a Bluetooth receiver.
 According to the third embodiment, the movement detection unit 4 judges the user's movement using both the image acquired by the imaging unit 3 and the measurement result of the sensor 11, so the detection accuracy of the movement state can be improved compared with using only the image acquired by the imaging unit 3.
 In the first to third embodiments, the video receiving unit 2 receives the video from the video providing unit 1, and the information amount changing unit 6 changes the amount of information per unit time of the received video according to the decision of the information amount determination unit 5. In the fourth embodiment, by contrast, the video providing unit 1 is requested, according to the decision of the information amount determination unit 5, to provide a video having the decided amount of information per unit time.
 FIG. 10 is a block diagram of the video display device according to the fourth embodiment. The video display device 10d omits the information amount changing unit 6 and newly includes a video request unit 14. Elements identical to those of the first embodiment (FIG. 1) are given the same reference numerals, and duplicate description is omitted.
 When the information amount determination unit 5 has decided the amount of information per unit time of the video to be displayed according to the user's movement state, it sets a video specification (for example, a frame rate) for realizing it and sends the specification to the video request unit 14. Based on the video specification received from the information amount determination unit 5, the video request unit 14 sends the video providing unit 1 a request signal concerning the video to be provided. The video providing unit 1 transmits a video corresponding to the request to the video receiving unit 2. The video output unit 7 and the video display unit 8 can thereby display a video having the amount of information per unit time decided by the information amount determination unit 5. In other words, the request signal from the video request unit 14 to the video providing unit 1 reflects the user's movement state detected by the movement detection unit 4.
 Here the video providing unit 1 is, for example, a storage device located in an external data center, capable of converting and providing video in response to a request from the video request unit 14. The video providing unit 1 may also be built into the video display device 10d, for example as an imaging device.
 In the above example, the movement detection unit 4 detects the user's movement on the basis of the image acquired by the imaging unit 3, but as in the third embodiment (FIG. 6), a configuration that detects the user's movement using the measurement result of the sensor 11 in addition to the imaging unit 3 is also possible.
 According to the fourth embodiment, the amount of video data transmitted from the video providing unit 1 to the video receiving unit 2 is kept to a minimum. In particular, when the video providing unit 1 is a storage device located in an external data center, the amount of data carried over the network is reduced and the load on the transmission path is lightened. When the video providing unit 1 is an imaging device, the number of operations of the imaging element in the device is reduced, with the secondary effects of saving power and extending the life of the device.
 The present invention is not limited to the embodiments described above and includes various modifications. In the above, an HMD was described as an example of a video display device used while the user is moving, but the invention is not limited to this. For example, it can likewise be applied to a head-up display (hereinafter abbreviated HUD), which is a transmissive video display. In a HUD mounted on a vehicle, the traveling state of the vehicle is detected as the user's movement state, and the amount of information per unit time of the displayed video is changed accordingly. This relaxes the user's gaze on the video while the vehicle is traveling and provides a HUD that ensures safety.
 In the embodiments above, the configuration of each part has been described in detail in order to explain the present invention clearly, and the invention is not necessarily limited to configurations having all of the described parts. Part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of one embodiment may be added to the configuration of another.
 1: video providing unit,
 2: video receiving unit,
 3: imaging unit,
 4: movement detection unit,
 5: information amount determination unit,
 6: information amount changing unit,
 7: video output unit,
 8: video display unit,
 10, 10a to 10d: video display device,
 11: sensor,
 12: external device,
 13: sensor result input unit,
 14: video request unit.

Claims (13)

  1.  A video display device usable while a user is moving, comprising:
     an information amount changing unit that changes the amount of information per unit time of a video provided from a video providing unit;
     a video display unit that displays the video whose amount of information has been changed by the information amount changing unit;
     an imaging unit that captures images of subjects around the user;
     a movement detection unit that detects the movement state of the user based on the image captured by the imaging unit; and
     an information amount determination unit that determines, according to the movement state of the user detected by the movement detection unit, a changed value of the amount of information of the video per unit time for the information amount changing unit.
  2.  A video display device usable while a user is moving, comprising:
     a video request unit that sends a video providing unit a request signal concerning the video to be provided;
     a video display unit that displays the video provided from the video providing unit in accordance with the request signal;
     an imaging unit that captures images of subjects around the user;
     a movement detection unit that detects the movement state of the user based on the image captured by the imaging unit; and
     an information amount determination unit that determines, according to the movement state of the user detected by the movement detection unit, a changed value of the amount of information per unit time of the video provided by the video providing unit in response to the request signal sent by the video request unit.
  3.  The video display device according to claim 1 or 2, further comprising:
     a sensor that measures the state of the user,
     wherein the movement detection unit detects the movement state of the user based on the measurement result of the sensor together with the image acquired from the imaging unit.
  4.  The video display device according to claim 1 or 2, further comprising a sensor result input unit that receives measurement results of the user's state measured by a sensor of an external device,
     wherein the movement detecting unit detects the movement state of the user based on the sensor measurement results received by the sensor result input unit together with the images acquired from the imaging unit.
  5.  The video display device according to claim 1 or 2, further comprising a mounting portion for mounting the video display device on the user's head.
  6.  The video display device according to any one of claims 1 to 5, wherein, according to the detection result of the movement detecting unit, the information amount determining unit sets the change value of the amount of video information per unit time smaller when the user is moving than when the user is not moving.
  7.  The video display device according to claim 6, wherein, when the movement detecting unit detects that the user is moving, the information amount determining unit sets the change value of the amount of video information per unit time smaller when the user's movement speed is high than when it is low.
  8.  The video display device according to claim 7, wherein, when the movement detecting unit detects that the user is moving and is driving a vehicle, the information amount determining unit sets the change value of the amount of video information per unit time smaller than when the user is not driving a vehicle.
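Claims 6 through 8 together define a monotonic policy: the faster or riskier the detected movement state, the smaller the change value of the information amount per unit time. A minimal sketch of that ordering, where the state names and the frame-rate figures are illustrative assumptions not taken from the patent:

```python
# Hypothetical mapping from detected movement state to a target frame rate,
# following the ordering of claims 6-8. All names and values are illustrative.

FRAME_RATE_BY_STATE = {
    "stationary": 60,  # user not moving: full frame rate
    "walking": 30,     # moving: smaller change value than stationary (claim 6)
    "running": 15,     # higher movement speed: smaller still (claim 7)
    "driving": 5,      # driving a vehicle: smallest of all (claim 8)
}

def determine_frame_rate(movement_state: str) -> int:
    """Return the target frame rate for the detected movement state."""
    return FRAME_RATE_BY_STATE[movement_state]
```

The invariant that matters is the ordering, not the particular numbers: driving < running < walking < stationary.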
  9.  The video display device according to any one of claims 1 to 5, wherein the amount of video information per unit time is the frame rate of the video displayed on the video display unit.
  10.  The video display device according to any one of claims 1 to 5, wherein reducing the amount of video information per unit time means changing the video displayed on the video display unit from color display to monochrome or grayscale display.
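Claim 10's color-to-grayscale change reduces three color channels to one luma channel, cutting the per-frame data to roughly one third. The Rec. 601 luma weights below are a standard choice for the conversion; the patent itself does not specify a formula:

```python
# Sketch of the grayscale reduction of claim 10, assuming frames are rows of
# (R, G, B) tuples with 8-bit channels. The Rec. 601 weights are a common
# convention, not something the patent prescribes.

def to_grayscale(frame):
    """Convert a frame of (R, G, B) pixel tuples to single luma values."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
        for row in frame
    ]
```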
  11.  The video display device according to any one of claims 1 to 5, wherein the amount of video information per unit time is the size of the angle of view of the video displayed on the video display unit.
  12.  The video display device according to any one of claims 1 to 5, wherein reducing the amount of video information per unit time means lowering the luminance of the video displayed on the video display unit.
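Claim 12's luminance reduction can be sketched as scaling every pixel channel by a factor between 0 and 1; a dimmer display intrudes less on the user's view of the real surroundings. The per-pixel scaling below is one simple way to realize it, not the patent's own method:

```python
# Sketch of the luminance reduction of claim 12, assuming frames are rows of
# (R, G, B) tuples with 8-bit channels. The scaling approach is illustrative.

def dim_frame(frame, scale):
    """Scale every channel of every pixel by `scale` (0.0 to 1.0)."""
    return [
        [tuple(min(255, round(c * scale)) for c in px) for px in row]
        for row in frame
    ]
```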
  13.  The video display device according to claim 3 or 4, wherein the sensor includes any one of a three-dimensional acceleration sensor, a vibration detection switch, and a position measuring device.
PCT/JP2014/058074 2014-03-24 2014-03-24 Video display device WO2015145541A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2014/058074 WO2015145541A1 (en) 2014-03-24 2014-03-24 Video display device


Publications (1)

Publication Number Publication Date
WO2015145541A1 true WO2015145541A1 (en) 2015-10-01

Family

ID=54194143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/058074 WO2015145541A1 (en) 2014-03-24 2014-03-24 Video display device

Country Status (1)

Country Link
WO (1) WO2015145541A1 (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10206787A (en) * 1997-01-20 1998-08-07 Honda Motor Co Ltd Head-mounted display device for vehicle
JP2006217520A (en) * 2005-02-07 2006-08-17 Konica Minolta Photo Imaging Inc Video display device and glasses type video display device
JP2007101618A (en) * 2005-09-30 2007-04-19 Konica Minolta Photo Imaging Inc Display device
JP2007336211A (en) * 2006-06-14 2007-12-27 Mitsubishi Electric Corp On-board broadcast receiver
JP2011091789A (en) * 2009-09-24 2011-05-06 Brother Industries Ltd Head-mounted display
JP2012222628A (en) * 2011-04-08 2012-11-12 Brother Ind Ltd Image display device
WO2013111185A1 (en) * 2012-01-25 2013-08-01 三菱電機株式会社 Mobile body information apparatus


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10951309B2 (en) 2015-11-12 2021-03-16 Panasonic Intellectual Property Corporation Of America Display method, non-transitory recording medium, and display device
CN107343392B (en) * 2015-12-17 2020-10-30 松下电器(美国)知识产权公司 Display method and display device
CN107343392A (en) * 2015-12-17 2017-11-10 松下电器(美国)知识产权公司 Display methods and display device
JP2017125883A (en) * 2016-01-12 2017-07-20 株式会社デンソー Eyeglass information display device
JPWO2017145753A1 (en) * 2016-02-22 2018-08-02 シャープ株式会社 Display control apparatus, display control method, and program
WO2017145753A1 (en) * 2016-02-22 2017-08-31 シャープ株式会社 Display control device, display control method and program
US10819428B2 (en) 2016-11-10 2020-10-27 Panasonic Intellectual Property Corporation Of America Transmitting method, transmitting apparatus, and program
WO2020044916A1 (en) * 2018-08-29 2020-03-05 ソニー株式会社 Information processing device, information processing method, and program
JPWO2020044916A1 (en) * 2018-08-29 2021-09-24 ソニーグループ株式会社 Information processing equipment, information processing methods and programs
US11726320B2 (en) 2018-08-29 2023-08-15 Sony Corporation Information processing apparatus, information processing method, and program
JP7400721B2 (en) 2018-08-29 2023-12-19 ソニーグループ株式会社 Information processing device, information processing method and program
CN112650212A (en) * 2019-10-11 2021-04-13 丰田自动车株式会社 Remote automatic driving vehicle and vehicle remote indicating system
JP2021064118A (en) * 2019-10-11 2021-04-22 トヨタ自動車株式会社 Remote autonomous vehicle and vehicle remote command system
JP7310524B2 (en) 2019-10-11 2023-07-19 トヨタ自動車株式会社 Remote self-driving vehicle and vehicle remote command system

Similar Documents

Publication Publication Date Title
WO2015145541A1 (en) Video display device
US10559065B2 (en) Information processing apparatus and information processing method
US10984756B2 (en) Adaptive parameters in image regions based on eye tracking information
US10310595B2 (en) Information processing apparatus, information processing method, computer program, and image processing system
US11037532B2 (en) Information processing apparatus and information processing method
JP7173126B2 (en) Information processing device, information processing method, and recording medium
US20140152530A1 (en) Multimedia near to eye display system
US20170123747A1 (en) System and Method for Alerting VR Headset User to Real-World Objects
JP2015114757A (en) Information processing apparatus, information processing method, and program
KR102346386B1 (en) A mobile sensor device for a head-worn visual output device usable in a vehicle, and a method for operating a display system
JP6571767B2 (en) Video display device and control method
JP2010050645A (en) Image processor, image processing method, and image processing program
CN110998666A (en) Information processing apparatus, information processing method, and program
CN111630852A (en) Information processing apparatus, information processing method, and program
KR20180038175A (en) Server, device and method for providing virtual reality service
US20200258316A1 (en) Method of garbling real-world image for see-through head mount display and see-through head mount display with realworld image garbling function
US10602116B2 (en) Information processing apparatus, information processing method, and program for performing display control
JP7078568B2 (en) Display device, display control method, and display system
US11589001B2 (en) Information processing apparatus, information processing method, and program
KR101331055B1 (en) Visual aid system based on the analysis of visual attention and visual aiding method for using the analysis of visual attention
JPH11237581A (en) Head-mount display device
JPWO2017199495A1 (en) Image processing system, image processing apparatus, and program
WO2022004130A1 (en) Information processing device, information processing method, and storage medium
KR102360557B1 (en) See-through head mount display with real-world image garbling function
KR102185519B1 (en) Method of garbling real-world image for direct encoding type see-through head mount display and direct encoding type see-through head mount display with real-world image garbling function

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14887312; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: JP)
122 Ep: pct application non-entry in european phase (Ref document number: 14887312; Country of ref document: EP; Kind code of ref document: A1)