JP6421543B2 - Head-mounted display device, method for controlling head-mounted display device, computer program - Google Patents

Head-mounted display device, method for controlling head-mounted display device, computer program

Publication number: JP6421543B2
Authority: JP (Japan)
Application number: JP2014212728A
Other versions: JP2016081339A (Japanese)
Legal status: Active
Prior art keywords: virtual object, augmented reality, display mode, display, processing unit
Inventors: 辰典 高橋, 薫 山口, 高野 正秀
Original assignee: セイコーエプソン株式会社 (Seiko Epson Corporation)
Priority to: JP2014212728A
Priority claimed from: US14/870,659 (US10140768B2)
Publication of JP2016081339A
Application granted; publication of JP6421543B2

Description

  The present invention relates to a head-mounted display device.

  A technique called augmented reality (AR) is known in which a computer adds and presents information to a real object, that is, an object that exists in the real world. In augmented reality, the information displayed in addition to the real object is also called a "virtual object". Augmented reality functions are implemented in, for example, head mounted displays (hereinafter also referred to as "HMDs" or "head-mounted display devices").

  The HMD captures an outside scene with a camera, performs image recognition on the captured image, and generates or acquires a virtual object. In a non-transmissive HMD, in which the user's field of view is blocked while the HMD is worn, the captured image and the virtual object are superimposed and presented to the user. In a transmissive HMD, in which the user's field of view is not blocked while the HMD is worn, only the virtual object is displayed as a virtual image to the user. A user wearing a transmissive HMD can experience augmented reality by viewing both the real objects in the real world and the virtual object. Patent Document 1 describes a technique for realizing augmented reality in a transmissive HMD.

[Patent Document 1] JP 2010-67083 A
[Patent Document 2] JP 2005-38008 A

  In many cases, the virtual object described above is arranged so as to be superimposed on, or in the vicinity of, the real object. For this reason, there has been a problem that the display of a virtual object in a non-transmissive or transmissive HMD may hinder the user from visually recognizing the real object. The techniques described in Patent Documents 1 and 2 give no consideration to this problem. There has also been the problem that a virtual object may be displayed even when its display is unnecessary, which may hinder the user from visually recognizing the real object and may make the user feel annoyed.

  For this reason, a head-mounted display device has been desired in which the display of the virtual object does not easily hinder the visual recognition of the real object.

SUMMARY: An advantage of some aspects of the invention is to solve at least a part of the problems described above, and the invention can be implemented as the following forms. According to one aspect of the present invention, a head-mounted display device that allows a user to visually recognize a virtual image is provided. This head-mounted display device includes: an image display unit that allows the user to visually recognize the virtual image; and an augmented reality processing unit that causes the image display unit to form the virtual image including a virtual object, the virtual object being displayed in addition to a real object that exists in the real world. In response to the continuation of an attention motion directed at the real object over a predetermined reference time, the augmented reality processing unit forms the virtual image including, in a first display mode, at least a virtual object associated with the real object on which the attention motion was performed. Further, in the case where the augmented reality processing unit has formed the virtual image including the virtual object in a second display mode prior to forming the virtual image including the virtual object in the first display mode, the augmented reality processing unit forms, in response to the continuation of the attention motion over the reference time directed at either the virtual object in the second display mode or the real object, the virtual image including, in the first display mode, at least the virtual object on which the attention motion was performed or a virtual object associated with the real object. The visibility inhibition degree, with respect to the real object, of the virtual object in the second display mode is lower than the visibility inhibition degree, with respect to the real object, of the virtual object in the first display mode, and the virtual object in the second display mode may include any of a character, a figure, a picture, a symbol, or a combination thereof that suggests the content of the virtual object in the first display mode.

(1) According to one aspect of the present invention, a head-mounted display device that allows a user to visually recognize a virtual image is provided. This head-mounted display device includes: an image display unit that allows the user to visually recognize the virtual image; and an augmented reality processing unit that causes the image display unit to form the virtual image including a virtual object, the virtual object being displayed in addition to a real object that exists in the real world. After forming the virtual image including the virtual object in a first display mode, the augmented reality processing unit forms, after the elapse of a predetermined maintenance time, the virtual image including the virtual object in a second display mode. The visibility inhibition degree, with respect to the real object, of the virtual object in the second display mode is lower than the visibility inhibition degree, with respect to the real object, of the virtual object in the first display mode.
According to the head-mounted display device of this aspect, the augmented reality processing unit causes the image display unit to form the virtual image including the virtual object in the first display mode and then, after the maintenance time has elapsed, causes the image display unit to form a virtual image including the virtual object in the second display mode, which has a lower visibility inhibition degree than the first display mode. Because the visibility inhibition degree of the virtual object occupying the displayed virtual image is automatically reduced after the maintenance time has elapsed, the user can easily view real objects that exist in the real world. As a result, it is possible to provide a head-mounted display device in which the display of a virtual object does not easily hinder the visual recognition of a real object or its background.
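The behavior of aspect (1) can be pictured with a minimal sketch, assuming a hypothetical rendering hook show_virtual_object() and a fixed maintenance time; none of these names come from the patent.

    import time

    MAINTENANCE_TIME_S = 5.0  # how long the first display mode is kept (assumed value)

    def present_virtual_object(obj):
        # First display mode: detailed rendering (high visibility inhibition degree).
        show_virtual_object(obj, mode="first")
        time.sleep(MAINTENANCE_TIME_S)  # let the maintenance time elapse
        # Second display mode: reduced rendering (low visibility inhibition degree),
        # so the real object behind the virtual image becomes easier to see.
        show_virtual_object(obj, mode="second")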

(2) In the head-mounted display device of the above aspect, the maintenance time may have a variable length.
According to the head-mounted display device of this aspect, the maintenance time for transitioning the display mode from the first display mode, which has a high visibility inhibition degree, to the second display mode, which has a low visibility inhibition degree, can be changed according to various conditions, for example.

(3) The head-mounted display device of the above aspect may further include a maintenance time acquisition unit that acquires the maintenance times used in the past by the augmented reality processing unit. The augmented reality processing unit may obtain a statistical value of the past maintenance times and change the maintenance time used in the current process based on the obtained statistical value.
According to the head-mounted display device of this aspect, the augmented reality processing unit can automatically change the maintenance time used in the current process (the time until the visibility inhibition degree is automatically lowered) based on a statistical value of the maintenance times used in past processes, that is, on the tendency of those past maintenance times.
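As a minimal sketch of aspect (3), the statistical value could be the median of the logged maintenance times; the default and clamping bounds are illustrative assumptions.

    from statistics import median

    def next_maintenance_time(past_times, default=5.0, lo=1.0, hi=30.0):
        if not past_times:
            return default
        # Median of the history as the "statistical value"; a mean would also fit.
        return min(hi, max(lo, median(past_times)))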

(4) In the head-mounted display device according to the above aspect, the augmented reality processing unit may obtain the information amount of the virtual object in the first display mode and change the maintenance time used in the current process based on the obtained information amount.
According to the head-mounted display device of this aspect, the augmented reality processing unit can change the maintenance time used in the current process (the time until the visibility inhibition degree is automatically lowered) based on the information amount of the virtual object in the first display mode, which has a high visibility inhibition degree. In this way, when the information amount of the virtual object in the first display mode is large, in other words, when the user is estimated to need much time to confirm the contents of the virtual object, the augmented reality processing unit can make the maintenance time longer than when the information amount is small, which improves convenience for the user.

(5) In the head-mounted display device according to the above aspect, the augmented reality processing unit may change the method for obtaining the information amount according to the type of the virtual object in the first display mode.
According to the head-mounted display device of this aspect, the augmented reality processing unit can obtain the information amount of the virtual object by a method suited to the type of the virtual object in the first display mode, and can therefore grasp the information amount of the virtual object more accurately.
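Aspects (4) and (5) together could look like the following minimal sketch; the object types, the per-type information-amount formulas, and the rate constant are all illustrative assumptions.

    def information_amount(obj):
        # Choose the estimation method by virtual-object type (aspect (5)).
        if obj["type"] == "text":
            return len(obj["text"])          # character count
        if obj["type"] == "image":
            w, h = obj["size"]
            return (w * h) / 10_000          # area-based estimate
        if obj["type"] == "video":
            return obj["duration_s"] * 10    # duration-based estimate
        return 0

    def maintenance_time_for(obj, info_per_second=10.0, minimum=2.0):
        # Larger information amount -> longer maintenance time (aspect (4)).
        return max(minimum, information_amount(obj) / info_per_second)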

(6) The head-mounted display device according to the above aspect may further include a maintenance time acquisition unit that acquires the user's setting for the maintenance time. The augmented reality processing unit may change the maintenance time used in the current process based on the acquired user setting.
According to the head-mounted display device of this aspect, the augmented reality processing unit can change the maintenance time used in the current process (the time until the visibility inhibition degree is automatically lowered) according to the user's preference.

(7) The head-mounted display device of the above aspect may further include a maintenance time acquisition unit that acquires maintenance time information in which the maintenance times used in the past by the augmented reality processing unit, the information amount of the virtual object in the first display mode at that time, and identification information identifying the user at that time are associated with one another. The augmented reality processing unit may change the maintenance time used in the current process based on the maintenance time information and the information amount of the virtual object in the first display mode.
According to this form of the head-mounted display device, the augmented reality processing unit can determine, from the maintenance time information, the amount of information that the user can recognize per unit time. The augmented reality processing unit can then change the maintenance time used in the current process (the time until the visibility inhibition degree is automatically lowered) based on the obtained information amount per unit time and the information amount of the virtual object in the first display mode. In this way, when the amount of information that the user can recognize per unit time is small, in other words, when the user needs much time to confirm the contents of the virtual object, the maintenance time can be made longer than when that amount is large. Since the augmented reality processing unit can thus adjust the maintenance time to individual differences among users, convenience for the user can be improved.
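A minimal sketch of aspect (7), assuming the maintenance time information is a list of records with maintenance_s, info_amount, and user_id keys; the record format and fallback rate are illustrative assumptions.

    from statistics import median

    def user_recognition_rate(records, user_id, fallback=10.0):
        # Information recognized per second, estimated from this user's history.
        rates = [r["info_amount"] / r["maintenance_s"]
                 for r in records
                 if r["user_id"] == user_id and r["maintenance_s"] > 0]
        return median(rates) if rates else fallback

    def personalized_maintenance_time(records, user_id, info_amount, minimum=2.0):
        return max(minimum, info_amount / user_recognition_rate(records, user_id))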

(8) In the head-mounted display device of the above aspect, the virtual object in the second display mode may include any of a character, a figure, a picture, a symbol, or a combination thereof that suggests the contents of the virtual object in the first display mode.
According to the head-mounted display device of this form, the contents of the virtual object in the first display mode can be suggested using the virtual object in the second display mode with a low degree of visibility inhibition.

(9) In the head-mounted display device according to the above aspect, when a first request from the user is acquired while waiting for the maintenance time to elapse, the augmented reality processing unit may stop the transition from the first display mode to the second display mode.
According to the head-mounted display device of this form, the augmented reality processing unit stops the transition from the first display mode to the second display mode in response to the first request from the user. Therefore, convenience for the user can be improved.

(10) In the head-mounted display device according to the above aspect, when a second request from the user is acquired while waiting for the maintenance time to elapse, the augmented reality processing unit may perform the transition from the first display mode to the second display mode even before the maintenance time elapses.
According to the head-mounted display device of this form, the augmented reality processing unit can forcibly shift from the first display mode to the second display mode in response to the second request from the user, even if the maintenance time has not yet elapsed, so convenience for the user can be improved.
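Aspects (9) and (10) amount to a cancellable, forceable wait. A minimal sketch, assuming a hypothetical input hook poll_user_request() that reports which request, if any, the user has made:

    import time

    def wait_for_transition(maintenance_s):
        # Returns True if the display should transition to the second display mode.
        deadline = time.monotonic() + maintenance_s
        while time.monotonic() < deadline:
            request = poll_user_request()  # "first", "second", or None (assumed API)
            if request == "first":
                return False               # aspect (9): keep the first display mode
            if request == "second":
                return True                # aspect (10): transition immediately
            time.sleep(0.05)
        return True                        # maintenance time elapsed: transition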

(11) The head-mounted display device of the above aspect may further include a request acquisition unit that acquires, as the first request or the second request, a request made by at least one of the user's hand, foot, voice, or head, or a combination thereof.
According to the head-mounted display device of this aspect, the user can make the first request or the second request by at least one of the hand, the foot, the voice, or the head, or a combination thereof.

(12) In the head-mounted display device of the above aspect, the augmented reality processing unit may make the transition from the first display mode to the second display mode in a stepwise manner.
According to the head-mounted display device of this aspect, because the augmented reality processing unit makes the transition from the first display mode to the second display mode in a stepwise manner, the discomfort that the change of display mode gives to the user can be reduced.

(13) According to an aspect of the present invention, a head-mounted display device that allows a user to visually recognize a virtual image and an outside scene is provided. This head-mounted display device includes: an image display unit that allows the user to visually recognize the virtual image; and an augmented reality processing unit that causes the image display unit to form the virtual image including a virtual object, the virtual object being displayed in addition to a real object that exists in the real world. In response to the continuation of an attention motion directed at the real object over a predetermined reference time, the augmented reality processing unit forms the virtual image including, in a first display mode, at least a virtual object associated with the real object on which the attention motion was performed.
According to the head-mounted display device of this aspect, in response to the continuation of the attention motion over the predetermined reference time, the augmented reality processing unit causes the image display unit to form a virtual image including, in the first display mode, at least a virtual object associated with the real object on which the attention motion was performed. Because the augmented reality processing unit displays the virtual object in accordance with the user's intention expressed by continuing the attention motion, a state in which real objects existing in the real world are easy to see is maintained as long as the user does not continue the attention motion. As a result, it is possible to provide a head-mounted display device in which the display of a virtual object does not easily hinder the visual recognition of a real object or its background.
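A minimal sketch of aspect (13) as a dwell timer, assuming a hypothetical gaze hook gaze_target(), a virtual_object attribute on the real object, and the same show_virtual_object() hook as above; the names and the reference time value are illustrative.

    import time

    REFERENCE_TIME_S = 1.5  # assumed reference time

    def attention_loop(real_object):
        dwell_start = None
        while True:
            if gaze_target() == real_object:
                if dwell_start is None:
                    dwell_start = time.monotonic()
                if time.monotonic() - dwell_start >= REFERENCE_TIME_S:
                    # Attention continued over the reference time: first display mode.
                    show_virtual_object(real_object.virtual_object, mode="first")
                    return
            else:
                dwell_start = None  # attention interrupted: reset the timer
            time.sleep(0.05)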

(14) In the head-mounted display device according to the above aspect, in the case where the augmented reality processing unit has formed the virtual image including the virtual object in a second display mode prior to forming the virtual image including the virtual object in the first display mode, the augmented reality processing unit may form, in response to the continuation of the attention motion over the reference time directed at either the virtual object in the second display mode or the real object, the virtual image including, in the first display mode, at least the virtual object on which the attention motion was performed or a virtual object associated with the real object. The visibility inhibition degree, with respect to the real object, of the virtual object in the second display mode may be lower than the visibility inhibition degree, with respect to the real object, of the virtual object in the first display mode.
According to the head-mounted display device of this aspect, after the virtual image including the virtual object in the second display mode has been formed on the image display unit, the augmented reality processing unit can cause the image display unit to form a virtual image including the virtual object in the first display mode, which has a higher visibility inhibition degree than the second display mode, in response to the continuation of the attention motion over the predetermined reference time directed at the real object or at the virtual object in the second display mode. Because the display mode of the virtual object transitions from the second display mode to the first display mode, with the accompanying increase in the visibility inhibition degree, only through the user's intention expressed by continuing the attention motion, a state in which real objects existing in the real world are easy to see is maintained unless the user continues the attention motion. In other words, the user can control the visibility inhibition degree of the virtual object by his or her own intention. As a result, it is possible to provide a head-mounted display device in which the display of a virtual object does not easily hinder the visual recognition of a real object or its background.

(15) The head-mounted display device according to the above aspect may further include a line-of-sight acquisition unit that acquires the movement of the user's line of sight as the attention motion.
According to this form of the head-mounted display device, the user can perform the attention motion using the movement of the line of sight, without moving a hand or foot. The user can therefore easily perform the attention motion even in situations, such as during work, where it is difficult to free a hand.

(16) The head-mounted display device of the above aspect may further include a movement acquisition unit that acquires the movement of the user's hand as the attention motion.
According to this form of the head-mounted display device, the user can easily perform the attention motion using hand movements familiar from everyday actions.

(17) In the head-mounted display device of the above aspect, the reference time may have a variable length.
According to this form of the head-mounted display device, the reference time for changing the display mode to the first display mode can be changed according to various conditions, for example.

(18) The head-mounted display device according to the above aspect may further include a reference time acquisition unit that acquires the reference times used in the past by the augmented reality processing unit. The augmented reality processing unit may obtain a statistical value of the past reference times and change the reference time used in the current process based on the obtained statistical value.
According to this form of the head-mounted display device, the augmented reality processing unit can automatically change the reference time used in the current process based on a statistical value of the reference times used in past processes, that is, on the tendency of those past reference times.

(19) In the head-mounted display device according to the above aspect, the augmented reality processing unit may obtain the information amount of the virtual object in the first display mode and change the reference time used in the current process based on the obtained information amount.
According to the head-mounted display device of this aspect, the augmented reality processing unit can change the reference time used in the current process based on the information amount of the virtual object in the first display mode, which has a high visibility inhibition degree. In this way, when the information amount of the virtual object in the first display mode is large, in other words, when the display of the virtual object accompanying the transition to the first display mode is likely to hinder the visual recognition of the real object, the augmented reality processing unit can make the reference time longer than when the information amount is small, which improves convenience for the user.

(20) In the head-mounted display device according to the above aspect, the augmented reality processing unit may change the method for obtaining the information amount according to the type of the virtual object in the first display mode.
According to the head-mounted display device of this aspect, the augmented reality processing unit can obtain the information amount of the virtual object by a method suited to the type of the virtual object in the first display mode, and can therefore grasp the information amount of the virtual object more accurately.

(21) The head-mounted display device according to the above aspect may further include a reference time acquisition unit that acquires the user's setting for the reference time. The augmented reality processing unit may change the reference time used in the current process based on the acquired user setting.
According to this form of the head-mounted display device, the augmented reality processing unit can change the reference time used in the current processing according to the user's preference.

(22) The head-mounted display device according to the above aspect may further include a reference time acquisition unit that acquires reference time information in which the reference times used in the past by the augmented reality processing unit, the information amount of the virtual object in the first display mode at that time, and identification information identifying the user at that time are associated with one another. The augmented reality processing unit may change the reference time used in the current process based on the reference time information and the information amount of the virtual object in the first display mode.
According to the head-mounted display device of this aspect, the augmented reality processing unit can obtain, from the reference time information, the amount of information that the user can attend to per unit time. The augmented reality processing unit can therefore change the reference time used in the current process based on the obtained information amount per unit time and the information amount of the virtual object in the first display mode. In this way, when the amount of information that the user can attend to per unit time is small, in other words, when the display of the virtual object accompanying the transition to the first display mode is likely to hinder the visual recognition of the real object, the reference time can be set longer than when that amount is large. Since the augmented reality processing unit can thus adjust the reference time to individual differences among users, convenience for the user can be improved.

(23) In the head-mounted display device of the above aspect, the virtual object in the second display mode may include any of a character, a figure, a picture, a symbol, or a combination thereof that suggests the content of the virtual object in the first display mode.
According to the head-mounted display device of this form, the contents of the virtual object in the first display mode can be suggested using the virtual object in the second display mode with a low degree of visibility inhibition.

(24) In the head-mounted display device according to the above aspect, when the first request from the user is acquired while waiting for the reference time to elapse, the augmented reality processing unit may stop the transition to the first display mode.
According to the head-mounted display device of this aspect, the augmented reality processing unit can cancel the transition to the first display mode in response to the first request from the user, so convenience for the user can be improved.

(25) In the head-mounted display device according to the above aspect, when the second request from the user is acquired while waiting for the reference time to elapse, the augmented reality processing unit may form the virtual image including the virtual object in the first display mode even before the reference time elapses.
According to the head-mounted display device of this form, the augmented reality processing unit can display the virtual object in the first display mode even before the reference time elapses, in response to the second request from the user, so convenience for the user can be improved.

(26) The head-mounted display device of the above form may further include a request acquisition unit that acquires, as the first request or the second request, a request made by at least one of the user's hand, foot, voice, or head, or a combination thereof.
According to the head-mounted display device of this aspect, the user can make the first request or the second request by at least one of the hand, the foot, the voice, or the head, or a combination thereof.

(27) According to one aspect of the present invention, a head-mounted display device that allows a user to visually recognize a virtual image is provided. This head-mounted display device includes: an image display unit that allows the user to visually recognize the virtual image; and an augmented reality processing unit that causes the image display unit to form the virtual image including a virtual object, the virtual object being displayed in addition to a real object that exists in the real world. When a predetermined action by the user is not started within a predetermined reference time, the augmented reality processing unit forms the virtual image including, in the first display mode, at least a virtual object associated with the real object.
According to the head-mounted display device of this aspect, the augmented reality processing unit causes the image display unit to form the virtual image including the virtual object in the first display mode when the predetermined action by the user is not started within the predetermined reference time. In other words, the augmented reality processing unit does not display the virtual object in the first display mode when the predetermined action by the user is started within the reference time. Therefore, when the user is performing the predetermined action (for example, some work), the possibility that the virtual object in the first display mode is displayed and blocks the user's view can be reduced. As a result, it is possible to provide a head-mounted display device in which the display of a virtual object does not easily hinder the visual recognition of a real object or its background.

  Not all of the plurality of constituent elements of each aspect of the present invention described above are essential. In order to solve some or all of the problems described above, or to achieve some or all of the effects described in this specification, some of the plurality of constituent elements may be appropriately changed, deleted, or replaced with new constituent elements, or part of their limiting content may be deleted. In addition, in order to solve some or all of the problems described above, or to achieve some or all of the effects described in this specification, some or all of the technical features included in one aspect of the present invention described above may be combined with some or all of the technical features included in another aspect of the present invention described above to form an independent aspect of the present invention.

  For example, one aspect of the present invention can be realized as an apparatus including some or all of the two elements of the image display unit and the augmented reality processing unit. That is, this apparatus may or may not have an image display unit, and may or may not have an augmented reality processing unit. Such an apparatus can be realized, for example, as a head-mounted display device, but can also be realized as a device other than a head-mounted display device. Some or all of the technical features of each form of the head-mounted display device described above can be applied to this apparatus. For example, an apparatus according to one aspect of the present invention addresses the problem of making the display of a virtual object less likely to hinder the visual recognition of a real object. Such an apparatus is, however, also desired to be smaller and more convenient, to cost less to manufacture, to save resources, and to be easy to manufacture.

  The present invention can be realized in various modes, for example, in the form of a head-mounted display device, a method for controlling a head-mounted display device, a system including a head-mounted display device, a computer program for realizing the functions of these methods, devices, and systems, and a storage medium storing the computer program.

FIG. 1 is an explanatory diagram showing the schematic configuration of a head-mounted display device in one embodiment of the invention.
FIG. 2 is a block diagram functionally showing the configuration of the head-mounted display device.
FIG. 3 is an explanatory diagram showing an example of a virtual image visually recognized by the user.
FIG. 4 is a state transition diagram of the augmented reality processing.
FIG. 5 is a flowchart showing the procedure of the normal display process.
FIG. 6 is an explanatory diagram showing an example of a normal display image visually recognized by the user.
FIG. 7 is a flowchart showing the procedure of the simple display process.
FIG. 8 is an explanatory diagram showing an example of non-display.
FIG. 9 is an explanatory diagram showing an example of an end icon display.
FIG. 10 is an explanatory diagram showing an example of a neighborhood icon display.
FIG. 11 is an explanatory diagram showing an example of a highlight display.
FIG. 12 is a flowchart showing the procedure for monitoring the establishment of transition condition 1.
FIG. 13 is a flowchart showing the procedure for monitoring the establishment of transition condition 2.
FIG. 14 is an explanatory diagram showing the external configuration of a head-mounted display device in a modification.

A. Embodiment:
A-1. Configuration of head mounted display device:
FIG. 1 is an explanatory diagram showing a schematic configuration of a head-mounted display device according to an embodiment of the present invention. The head-mounted display device 100 of the present embodiment is a display device that is mounted on the head, and is also called a head mounted display (HMD). The HMD 100 is an optically transmissive head mounted display that allows a user to visually recognize a virtual image and at the same time also visually recognize an outside scene.

  The HMD 100 of the present embodiment can perform augmented reality (AR) processing that, using the CPU of the HMD 100, adds information to a "real object", that is, an object that exists in the real world. Here, an object means an arbitrary person, an arbitrary animal or plant, or an arbitrary thing (including artificial objects, natural objects, and the like). In the augmented reality processing, the information displayed in addition to the real object is referred to as a "virtual object". The HMD 100 according to the present embodiment switches the display mode of the virtual object presented in the augmented reality processing between a "first display mode" and a "second display mode", and can thereby realize augmented reality processing in which the display of the virtual object does not easily obstruct the visual recognition of the real object. Here, the virtual object in the first display mode has a higher visibility inhibition degree than the virtual object in the second display mode. In other words, the virtual object in the second display mode has a lower visibility inhibition degree than the virtual object in the first display mode.

  The visibility inhibition degree means the degree to which the user's visibility is hindered when the user views the real world through a virtual image including a virtual object. In other words, the visibility inhibition degree can also be expressed as a visibility suppression degree.

In the present embodiment, the visibility inhibition degree can be raised or lowered between the virtual object in the first display mode and the virtual object in the second display mode by adopting any of the modes listed below.
(A) Increasing or decreasing the area occupied by the virtual object in the virtual image. Here, the "area occupied by the virtual object in the virtual image" means the area occupied by the virtual object relative to the range in which the virtual image can appear in front of the user's eyes. In this case, the virtual object in the second display mode includes a virtual object whose occupied area in the virtual image is "0".
(B) Decreasing or increasing the transmittance of at least a part of the virtual object. In this case, the virtual object in the second display mode includes a mode in which the overall transmittance of the virtual object in the first display mode is increased, a mode in which only the shadow of the virtual object in the first display mode is displayed, and a mode in which only the contour of the virtual object in the first display mode is displayed (the transmittance of the portions other than the contour is increased).
(C) Displaying the virtual image including the virtual object to both eyes or to one eye. Here, binocular display means emitting image light from the left and right image light generation units toward both of the user's eyes, and monocular display means emitting image light from the right or left image light generation unit toward one of the user's eyes.

  In the following examples, the case where mode (A) is adopted as the method for raising or lowering the visibility inhibition degree between the virtual object in the first display mode and the virtual object in the second display mode will be described; a sketch of the three options follows. Details of the augmented reality processing and of each display mode will be described later.
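As a rough illustration (not from the patent text), modes (A) to (C) could be expressed as parameters of a renderer; the class and field names below are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class DisplayMode:
        area_scale: float    # (A) 1.0 = full size, 0.0 = not drawn at all
        transparency: float  # (B) 0.0 = opaque, 1.0 = fully transparent
        binocular: bool      # (C) True = both eyes, False = one eye only

    FIRST_DISPLAY_MODE = DisplayMode(area_scale=1.0, transparency=0.0, binocular=True)
    # Mode (A), as used in the examples that follow: shrink the occupied area.
    SECOND_DISPLAY_MODE = DisplayMode(area_scale=0.2, transparency=0.0, binocular=True)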

  The HMD 100 includes an image display unit 20 that allows a user to visually recognize a virtual image while being mounted on the user's head, and a control unit (controller) 10 that controls the image display unit 20. In the following description, a virtual image visually recognized by the user with the HMD 100 is also referred to as a “display image” for convenience. In addition, the HMD 100 emitting the image light generated based on the image data is also referred to as “displaying an image”.

A-1-1. Configuration of image display:
FIG. 2 is a block diagram functionally showing the configuration of the HMD 100. The image display unit 20 is a wearing body that is worn on the user's head, and has a glasses shape in the present embodiment (FIG. 1). The image display unit 20 includes a right holding unit 21, a right display driving unit 22, a left holding unit 23, a left display driving unit 24, a right optical image display unit 26, a left optical image display unit 28, and a camera 61. And a line-of-sight detection unit 62 and a nine-axis sensor 66. Hereinafter, the positional relationship and function of each unit of the image display unit 20 in a state where the user wears the image display unit 20 will be described.

  As shown in FIG. 1, the right optical image display unit 26 and the left optical image display unit 28 are disposed so as to be positioned in front of the user's right eye and in front of the left eye, respectively. One end of the right optical image display unit 26 and one end of the left optical image display unit 28 are connected at a position corresponding to the eyebrow of the user. As shown in FIG. 2, the right optical image display unit 26 includes a right light guide plate 261 and a light control plate (not shown). The right light guide plate 261 is formed of a light transmissive resin material or the like, and guides the image light output from the right display driving unit 22 to the right eye RE of the user while reflecting the image light along a predetermined optical path. The light control plate is a thin plate-like optical element, and is disposed so as to cover the front side of the image display unit 20 (the side opposite to the user's eye side). The light control plate protects the light guide plate 261 and suppresses damage to the light guide plate 261 and adhesion of dirt. Further, by adjusting the light transmittance of the light control plate, it is possible to adjust the external light quantity entering the user's eyes and adjust the ease of visual recognition of the virtual image. The light control plate can be omitted.

  The left optical image display unit 28 includes a left light guide plate 262 and a light control plate (not shown). Their details are the same as those of the right optical image display unit 26. The right optical image display unit 26 and the left optical image display unit 28 are also collectively referred to simply as the "optical image display units". The optical image display units can use any method as long as they form a virtual image in front of the user's eyes using image light; for example, they may be realized using a diffraction grating or using a transflective film.

  As shown in FIG. 1, the right holding unit 21 extends from the other end ER of the right optical image display unit 26 to a position corresponding to the user's temporal region. The left holding unit 23 is provided to extend from the other end EL of the left optical image display unit 28 to a position corresponding to the user's temporal region. The right holding unit 21 and the left holding unit 23 hold the image display unit 20 on the user's head like a temple of glasses. The right holding unit 21 and the left holding unit 23 are also collectively referred to simply as “holding unit”.

  As shown in FIG. 1, the right display driving unit 22 is disposed inside the right holding unit 21 (on the side facing the user's head). The left display driving unit 24 is disposed inside the left holding unit 23. As shown in FIG. 2, the right display driving unit 22 includes a receiving unit (Rx) 53; a right backlight (BL) control unit 201 and a right backlight (BL) 221 that function as a light source; a right LCD (Liquid Crystal Display) control unit 211 and a right LCD 241 that function as a display element; and a right projection optical system 251. The right backlight control unit 201, the right LCD control unit 211, the right backlight 221, and the right LCD 241 are also collectively referred to as the "image light generation unit". The receiving unit 53 functions as a receiver for serial transmission between the control unit 10 and the image display unit 20. The right backlight control unit 201 drives the right backlight 221 based on an input control signal. The right backlight 221 is a light emitter such as an LED (Light Emitting Diode) or an electroluminescence (EL) element. The right LCD control unit 211 drives the right LCD 241 based on the clock signal PCLK, the vertical synchronization signal VSync, the horizontal synchronization signal HSync, and the right-eye image data Data1 input via the receiving unit 53. The right LCD 241 is a transmissive liquid crystal panel in which a plurality of pixels are arranged in a matrix. The right projection optical system 251 is a collimating lens that converts the image light emitted from the right LCD 241 into parallel light beams.

  The left display driving unit 24 includes a receiving unit (Rx) 54, a left backlight (BL) control unit 202 and a left backlight (BL) 222 that function as a light source, a left LCD control unit 212 and a left that function as a display element. An LCD 242 and a left projection optical system 252 are provided. These details are the same as those of the right display drive unit 22. Note that the right display drive unit 22 and the left display drive unit 24 are collectively referred to simply as a “display drive unit”.

  As shown in FIG. 1, the camera 61 is a stereo camera whose elements are disposed at positions corresponding to the upper parts of the outer corners of the user's left and right eyes. The left and right cameras 61 each capture an outside scene (external scenery) in the front direction of the image display unit 20, in other words, in the user's viewing direction while the HMD 100 is worn, and acquire two outside scene images corresponding to the left and right. The camera 61 is a so-called visible light camera, and the outside scene images acquired by the camera 61 are images in which the shape of an object is represented by the visible light radiated from the object. By performing image recognition on the two outside scene images obtained by the camera 61, the CPU 140 of the control unit 10 can detect and acquire the movement of the user's hand. To improve the accuracy of the image recognition, the CPU 140 can use, as marks for detection, the user's fingertip, a ring worn on the user's hand, a specific tool held by the user, and the like. In this case, the camera 61 and the CPU 140 function as a "movement acquisition unit" that acquires the movement of the user's hand. Although the camera 61 in this embodiment is a stereo camera, it may also be a monocular camera.

  As shown in FIG. 1, the line-of-sight detection units 62 are disposed at positions corresponding to the lower parts of the outer corners of the user's left and right eyes. The left and right line-of-sight detection units 62 each include an infrared light emitting unit and an infrared light receiving unit (not shown). The right line-of-sight detection unit 62 receives the infrared light that is emitted from its infrared light emitting unit and reflected by the user's right eye. The CPU 140 of the control unit 10 acquires the movement of the line of sight of the user's right eye based on the intensity of the infrared light received by the right line-of-sight detection unit 62. Similarly, the left line-of-sight detection unit 62 receives the infrared light that is emitted from its infrared light emitting unit and reflected by the user's left eye, and the CPU 140 detects the line of sight of the user's left eye based on the intensity of the infrared light received by the left line-of-sight detection unit 62. The reflectance of the infrared light differs depending on whether it strikes the iris (the dark part of the eye), the eyelid, or the white of the eye. Specifically, the reflectance is lowest when the infrared light strikes the iris, and increases in the order of the eyelid and then the white of the eye. Therefore, the CPU 140 can acquire the movement of the user's line of sight based on the intensity of the infrared light received by the line-of-sight detection units 62. In this case, the line-of-sight detection units 62 and the CPU 140 function as a "line-of-sight acquisition unit" that acquires the movement of the user's line of sight. Although line-of-sight detection units 62 are provided on both the left and right sides in the present embodiment, only one of them may be provided.
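The reflectance ordering (iris < eyelid < white of the eye) suggests a simple classification of the received intensity. A minimal sketch; the thresholds are illustrative assumptions, not values from the patent.

    IRIS_MAX = 0.3    # below this, the beam is assumed to be on the iris
    EYELID_MAX = 0.6  # below this (and above IRIS_MAX), on the eyelid

    def classify_reflection(intensity):
        # intensity: normalized received infrared intensity in [0, 1].
        if intensity < IRIS_MAX:
            return "iris"
        if intensity < EYELID_MAX:
            return "eyelid"
        return "white"  # white of the eye (sclera)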

  As shown in FIG. 1, the 9-axis sensor 66 is disposed at a position corresponding to the user's right temple. The 9-axis sensor 66 is a motion sensor that detects acceleration (3 axes), angular velocity (3 axes), and geomagnetism (3 axes). Since the 9-axis sensor 66 is provided in the image display unit 20, it can detect and acquire the movement of the user's head while the image display unit 20 is worn on the head. Here, the movement of the head includes the speed, acceleration, angular velocity, and orientation of the head, and changes in that orientation.

  As shown in FIG. 1, the image display unit 20 includes a connection unit 40 for connecting the image display unit 20 and the control unit 10. The connection unit 40 includes a main body cord 48 connected to the control unit 10, a right cord 42 and a left cord 44 branched from the main body cord 48, and a connecting member 46 provided at the branch point. The right cord 42 is connected to the right display driving unit 22, and the left cord 44 is connected to the left display driving unit 24. The connecting member 46 is provided with a jack for connecting the earphone plug 30. A right earphone 32 and a left earphone 34 extend from the earphone plug 30. A connector (not shown) is provided at the end of the main body cord 48 opposite to the connecting member 46. This connector realizes connection / disconnection between the control unit 10 and the image display unit 20 by fitting / releasing with a connector (not shown) provided in the control unit 10. The image display unit 20 and the control unit 10 transmit various signals via the connection unit 40. For the right cord 42, the left cord 44, and the main body cord 48, for example, a metal cable or an optical fiber can be adopted.

A-1-2. Configuration of control unit:
The control unit 10 is a device for controlling the HMD 100. As shown in FIG. 1, the control unit 10 includes a determination key 11, a lighting unit 12, a display switching key 13, a track pad 14, a luminance switching key 15, a direction key 16, a menu key 17, and a power switch 18. The determination key 11 detects a pressing operation and outputs a signal for determining the content operated in the control unit 10. The lighting unit 12 is realized by, for example, an LED, and indicates the operation state of the HMD 100 (for example, power ON/OFF) by its light emission state. The display switching key 13 detects a pressing operation and outputs, for example, a signal for switching the display mode of content video between 3D and 2D. The track pad 14 detects the operation of the user's finger on its operation surface and outputs a signal corresponding to the detected content; various methods such as electrostatic, pressure-detection, and optical methods can be adopted for the track pad 14. The luminance switching key 15 detects a pressing operation and outputs a signal for increasing or decreasing the luminance of the image display unit 20. The direction key 16 detects a pressing operation on the keys corresponding to the up, down, left, and right directions and outputs a signal corresponding to the detected content. The power switch 18 switches the power-on state of the HMD 100 by detecting a slide operation of the switch.

  As shown in FIG. 2, the control unit 10 includes an input information acquisition unit 110, a storage unit 120, a power supply 130, a wireless communication unit 132, a GPS module 134, a CPU 140, an interface 180, and transmission units (Tx) 51 and 52, and these parts are connected to one another by a bus (not shown).

  The input information acquisition unit 110 acquires signals corresponding to operation inputs to the enter key 11, the display switching key 13, the track pad 14, the luminance switching key 15, the direction key 16, the menu key 17, and the power switch 18. The input information acquisition unit 110 can acquire operation inputs using various methods other than those described above. For example, an operation input using a foot switch (a switch operated by a user's foot) may be acquired. If it is possible to acquire an operation input using a foot switch, the input information acquisition unit 110 can acquire an operation input from the user even in a task in which it is difficult for the user to release his / her hand.

  The storage unit 120 includes a ROM, a RAM, a DRAM, a hard disk, and the like. The storage unit 120 stores various computer programs including an operating system (OS). The storage unit 120 stores in advance a display state 121, a past maintenance time 122, a maintenance time setting 123, a past reference time 124, a reference time setting 125, and a simple display mode 126.

  The display state 121 stores information for indicating whether the display mode of the virtual object in the current augmented reality process is the first display mode or the second display mode. In the display state 121, for example, the type of display mode may be stored using a flag, or the type of display mode may be stored using a number or a character string.

  The past maintenance time 122 stores a history of the maintenance times used in past augmented reality processing. Here, the "maintenance time" means the time until the display mode of the virtual object is changed from the first display mode to the second display mode in the augmented reality processing. The past maintenance time 122 can store the maintenance time used in past augmented reality processing, the information amount of the virtual object in the first display mode at that time, and the identifier of the user at that time, in association with one another.

  The maintenance time setting 123 stores the maintenance time set by the user. As the content of the maintenance time setting 123, some initial value may be stored when the HMD 100 is manufactured. The contents of the maintenance time setting 123 may be appropriately changed by the user.

  The past reference time 124 stores a history of the reference times used in past augmented reality processing. Here, the "reference time" means the time until the display mode of the virtual object is changed from the second display mode to the first display mode in the augmented reality processing. The past reference time 124 can store the reference time used in past augmented reality processing, the information amount of the virtual object in the first display mode at that time, and the identifier of the user at that time, in association with one another.

  The reference time setting 125 stores a reference time set by the user. As the content of the reference time setting 125, some initial value may be stored when the HMD 100 is manufactured. The contents of the reference time setting 125 may be appropriately changed by the user.

  The simple display mode 126 stores information representing the specific display mode that is employed as the second display mode of the virtual object. In the present embodiment, the specific display modes that can be employed as the second display mode are (Mode 1) end icon display, (Mode 2) neighborhood icon display, (Mode 3) highlight display, and (Mode 4) non-display. Each mode will be described in detail later. The simple display mode 126 stores information representing any one of Modes 1 to 4 described above; for example, the specific display mode may be stored using a flag, a number, or a character string.
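As a rough sketch (with illustrative names and defaults, not the patent's actual data layout), the values held by the storage unit 120 for the augmented reality processing could be grouped as follows.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ARSettings:
        display_state: str = "second"          # current mode: "first" or "second"
        past_maintenance_times: List[float] = field(default_factory=list)
        maintenance_time_setting: float = 5.0  # user-set default, in seconds
        past_reference_times: List[float] = field(default_factory=list)
        reference_time_setting: float = 1.5    # user-set default, in seconds
        simple_display_mode: int = 2           # Mode 1-4 (e.g., 2 = neighborhood icon)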

  The power supply 130 supplies power to each part of the HMD 100. As the power supply 130, for example, a secondary battery can be used.

  The wireless communication unit 132 performs wireless communication with an external device in accordance with a predetermined wireless communication standard. The predetermined wireless communication standard is, for example, infrared, short-range wireless communication exemplified by Bluetooth (registered trademark), wireless LAN exemplified by IEEE 802.11, or the like.

  The GPS module 134 detects the current position of the user of the HMD 100 by receiving signals from GPS satellites and generates current position information representing the user's current position. The current position information can be realized, for example, by coordinates representing latitude and longitude.

  The CPU 140 functions as the augmented reality processing unit 142, the OS 150, the image processing unit 160, the sound processing unit 170, and the display control unit 190 by reading out and executing the computer program stored in the storage unit 120.

  The augmented reality processing unit 142 executes augmented reality processing. The augmented reality processing unit 142 further includes a normal display processing unit 144 and a simple display processing unit 146. The normal display processing unit 144 causes the image display unit 20 to form a virtual image including the virtual object in the first display mode by executing a normal display process described later. The simple display processing unit 146 causes the image display unit 20 to form a virtual image including a virtual object in the second display mode by executing a simple display process described later. The augmented reality processing unit 142 switches between normal display processing by the normal display processing unit 144 and simple display processing by the simple display processing unit 146 based on transition conditions described later. That is, in the present embodiment, the normal display process and the simple display process are executed as a subroutine of the augmented reality process.
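The alternation between the two subroutines can be pictured with the following minimal sketch, using the ARSettings sketched earlier; the transition-condition hooks and process functions are illustrative assumptions standing in for the transition conditions described later.

    import time

    def augmented_reality_process(settings):
        state = "simple"                      # start in the second display mode
        while True:
            if state == "simple":
                run_simple_display_process()  # second display mode (assumed hook)
                # Example transition condition: attention continued over the reference time.
                if attention_continued_for(settings.reference_time_setting):
                    state = "normal"
            else:
                run_normal_display_process()  # first display mode (assumed hook)
                # Example transition condition: maintenance time elapsed.
                if maintenance_time_elapsed(settings.maintenance_time_setting):
                    state = "simple"
            time.sleep(0.05)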

  The image processing unit 160 generates signals based on content (video) input via the interface 180 or the wireless communication unit 132. For example, when the content is in a digital format, the image processing unit 160 generates a clock signal PCLK and image data Data. In the case of the digital format, the clock signal PCLK is output in synchronization with the image signal, so generation of the vertical synchronization signal VSync and the horizontal synchronization signal HSync and A/D conversion of an analog image signal are unnecessary. The image processing unit 160 transmits the generated clock signal PCLK, the vertical synchronization signal VSync, the horizontal synchronization signal HSync, and the image data Data stored in the DRAM in the storage unit 120 to the image display unit 20 via the transmission units 51 and 52. The image data Data transmitted via the transmission unit 51 is also referred to as "right-eye image data Data1", and the image data Data transmitted via the transmission unit 52 is also referred to as "left-eye image data Data2". The image processing unit 160 may also perform image processing on the image data Data stored in the storage unit 120, such as resolution conversion processing, various tone correction processing (for example, adjustment of luminance and saturation), and keystone correction processing.

  The display control unit 190 generates control signals for controlling the right display drive unit 22 and the left display drive unit 24. Specifically, the display control unit 190 individually controls, by the control signals, the drive ON/OFF of the left and right LCDs 241 and 242 by the left and right LCD control units 211 and 212 and the drive ON/OFF of the left and right backlights 221 and 222 by the left and right backlight control units 201 and 202, thereby controlling the generation and emission of image light by the right display drive unit 22 and the left display drive unit 24. The display control unit 190 transmits these control signals to the image display unit 20 via the transmission units 51 and 52.

  The audio processing unit 170 acquires an audio signal included in the content, amplifies it, and supplies it to a speaker (not shown) of the right earphone 32 and a speaker (not shown) of the left earphone 34.

  The interface 180 communicates with the external device OA in accordance with a predetermined wired communication standard. Examples of the predetermined wired communication standard include MicroUSB (Universal Serial Bus), USB, HDMI (High Definition Multimedia Interface; HDMI is a registered trademark), DVI (Digital Visual Interface), VGA (Video Graphics Array), Composite, RS-232C (Recommended Standard 232), and wired LAN exemplified by IEEE 802.3. As the external device OA, for example, a personal computer PC, a mobile phone terminal, a game terminal, or the like can be used.

  FIG. 3 is an explanatory diagram illustrating an example of a virtual image visually recognized by the user. FIG. 3A illustrates the user's visual field VR when the augmented reality processing is not executed. As described above, the image light guided to both eyes of the user of the HMD 100 forms an image on the retina of the user, so that the user visually recognizes the virtual image VI. In the example of FIG. 3A, the virtual image VI is a standby screen of the OS 150 of the HMD 100. The user views the outside scene SC through the right optical image display unit 26 and the left optical image display unit 28. In this way, for the portion of the visual field VR where the virtual image VI is displayed, the user of the HMD 100 of the present embodiment can see the virtual image VI and the outside scene SC behind the virtual image VI. For the portion of the visual field VR where the virtual image VI is not displayed, the user can see the outside scene SC directly through the optical image display units.

  FIG. 3B illustrates the visual field VR of the user when the augmented reality processing is executed. By the execution of the augmented reality processing described later, the user visually recognizes the virtual image VI including the virtual objects VO1 to VO3. The virtual object VO1 is balloon-shaped information displayed in the vicinity of a real-world mountain (real object) in the outside scene SC. The virtual objects VO2 and VO3 are notebook-shaped information displayed so as to be superimposed on real-world trees (real objects) in the outside scene SC. In this way, the user can experience augmented reality by viewing both the virtual objects VO1 to VO3 included in the virtual image VI and the real objects in the outside scene SC that can be seen behind the virtual image VI.

A-2. Augmented reality processing:
The augmented reality processing is processing for additionally presenting information (a virtual object) to a real object that actually exists in the real world. The augmented reality processing starts when the augmented reality processing unit 142 receives an instruction to start the augmented reality processing from the OS 150 or another application, or when the augmented reality processing unit 142 receives information that the power of the HMD 100 has been turned on.

A-2-1. State transition of augmented reality processing:
FIG. 4 is a state transition diagram of the augmented reality process. The augmented reality processing of the present embodiment can take a normal display state ST1 and a simple display state ST2. In the normal display state ST1, the augmented reality processing unit 142 causes the normal display processing unit 144 to perform normal display processing. As a result, a virtual image including the virtual object in the first display mode is formed on the image display unit 20. On the other hand, in the simple display state ST2, the augmented reality processing unit 142 causes the simple display processing unit 146 to execute the simple display process. As a result, a virtual image including the virtual object in the second display mode is formed on the image display unit 20.

  After the augmented reality processing is started, the augmented reality processing unit 142 monitors whether the transition condition 1 is satisfied. The transition condition 1 is a condition for transitioning the state of the augmented reality processing from the state after the start to the normal display state ST1, in other words, a condition for displaying the virtual object in the first display mode. Since the augmented reality processing unit 142 can use a plurality of conditions as the transition condition 1, details of the transition condition 1 will be described later.

  In the normal display state ST1, the augmented reality processing unit 142 monitors the establishment of the transition condition 2 and the invalidation action. The transition condition 2 is a condition for transitioning the state of the augmented reality processing from the normal display state ST1 to the simple display state ST2, in other words, a condition for displaying the virtual object in the second display mode. Since the augmented reality processing unit 142 can use a plurality of conditions as the transition condition 2, details of the transition condition 2 will be described later.

  The invalidation action is a predetermined operation performed by the user in order to cancel a state transition of the augmented reality processing. In the present embodiment, "waving a hand" is adopted as the invalidation action. The augmented reality processing unit 142 determines whether or not the movement of the user's hand acquired by the motion detection unit (the camera 61 and the augmented reality processing unit 142 of the CPU 140) matches a pre-stored hand movement pattern. If they match, the augmented reality processing unit 142 determines that the invalidation action has been performed; if they do not match, it determines that the invalidation action has not been performed. In this case, the augmented reality processing unit 142 functions as a "request acquisition unit", and the invalidation action functions as a "first request".

  Note that, as the invalidation action, another operation performed by at least one of the user's hand, foot, voice, or head, or a combination thereof, may be employed. For example, an operation of forming the hand into a predetermined shape, a cancel input operation to the control unit 10, a cancel input operation by voice through a microphone (not shown), or the like may be employed as the invalidation action.
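  The following is a minimal sketch of the pattern-matching idea for the invalidation action, assuming the hand trajectory from the motion detection unit and the pre-stored pattern are resampled to the same length; the trajectory representation and the tolerance value are assumptions for illustration.

```python
import math

def matches_invalidation_action(trajectory, pattern, tolerance=0.2):
    """Return True if the acquired hand movement matches a pre-stored pattern.

    trajectory, pattern: lists of (x, y) hand positions sampled at a fixed
    rate and normalized; tolerance: assumed maximum mean point distance.
    """
    if not pattern or len(trajectory) != len(pattern):
        return False
    mean_dist = sum(math.dist(a, b) for a, b in zip(trajectory, pattern)) / len(pattern)
    return mean_dist <= tolerance
```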

  In the normal display state ST1, when the transition condition 2 is satisfied and no invalidation action is detected, the augmented reality processing unit 142 transitions the state of the augmented reality processing from the normal display state ST1 to the simple display state ST2. On the other hand, when the transition condition 2 is satisfied but the invalidation action is detected in the normal display state ST1, the augmented reality processing unit 142 keeps the state of the augmented reality processing in the normal display state ST1.

  In this way, the augmented reality processing unit 142 can stop the transition from the first display mode to the second display mode, in other words, the transition from the normal display state ST1 to the simple display state ST2, in response to the first request (invalidation action) from the user, so that the convenience for the user can be improved. Furthermore, the augmented reality processing unit 142 functioning as a request acquisition unit can acquire, as the first request, a request made by at least one of the user's hand, foot, voice, or head, or a combination thereof.

  In the simple display state ST2, the augmented reality processing unit 142 monitors the establishment of the transition condition 3 and the invalidation action. The transition condition 3 is a condition for transitioning the state of the augmented reality processing from the simple display state ST2 to the normal display state ST1, in other words, a condition for displaying the virtual object in the first display mode. Since the augmented reality processing unit 142 can use a plurality of conditions as the transition condition 3, details of the transition condition 3 will be described later. The invalidation action is the same as the invalidation action in the normal display state ST1.

  When the transition condition 3 is satisfied and no invalidation action is detected in the simple display state ST2, the augmented reality processing unit 142 transitions the state of the augmented reality processing from the simple display state ST2 to the normal display state ST1. On the other hand, when the transition condition 3 is satisfied but the invalidation action is detected in the simple display state ST2, the augmented reality processing unit 142 keeps the state of the augmented reality processing in the simple display state ST2.

  In this way, the augmented reality processing unit 142 can stop the transition from the second display mode to the first display mode, in other words, the transition from the simple display state ST2 to the normal display state ST1, in response to a request (invalidation action) from the user, so that the convenience for the user can be improved. Furthermore, the augmented reality processing unit 142 functioning as a request acquisition unit can acquire a request made by at least one of the user's hand, foot, voice, or head, or a combination thereof, and stop the transition from the second display mode to the first display mode accordingly.
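  The state transitions described above can be summarized in the following sketch; the class and variable names are illustrative, and the evaluation of the transition conditions 1 to 3 and of the invalidation action is assumed to happen elsewhere.

```python
class ARStateMachine:
    """Sketch of the state transitions of FIG. 4."""

    NORMAL = "ST1"  # normal display state: virtual object in the first display mode
    SIMPLE = "ST2"  # simple display state: virtual object in the second display mode

    def __init__(self):
        self.state = None  # state immediately after the processing starts

    def update(self, cond1, cond2, cond3, invalidation_action):
        if self.state is None and cond1:
            self.state = self.NORMAL          # transition condition 1
        elif self.state == self.NORMAL and cond2 and not invalidation_action:
            self.state = self.SIMPLE          # transition condition 2
        elif self.state == self.SIMPLE and cond3 and not invalidation_action:
            self.state = self.NORMAL          # transition condition 3
        return self.state
```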

A-2-2. Normal display processing:
FIG. 5 is a flowchart showing a procedure of normal display processing. The normal display process is a process for causing the image display unit 20 to form a virtual image representing a virtual object in the first display mode. The normal display process is started based on an instruction from the augmented reality processing unit 142 in the normal display state ST1 (FIG. 4), and is executed by the normal display processing unit 144.

  In step S100, the normal display processing unit 144 sets “0” to a variable i used in the processing. In step S102, the normal display processing unit 144 causes the camera 61 to acquire an outside scene image.

  In step S104, the normal display processing unit 144 extracts the features of the target object from the acquired outside scene image. Here, the "target object" means, among the plurality of real objects included in the outside scene image, the real object that had entered the user's field of view, or the real object that was the target of the attention action, at the time the transition condition 1 or the transition condition 3 was satisfied. The "attention action" means an action in which the user pays attention to a specific point. The attention action can be specified by using the user's line of sight acquired by the line-of-sight acquisition unit (FIG. 2) or the movement of the user's hand acquired by the movement acquisition unit (FIG. 2). A method for acquiring the attention action will be described later.

Specifically, in step S104, the normal display processing unit 144 extracts the features of the target object included in the acquired outside scene image using an image recognition method such as the methods a1 and a2 illustrated below (a sketch of the idea follows the list). Note that the method a1 and the method a2 may be combined.
(A1) The edge (characteristic part) of the target object is detected.
(A2) A marker (feature part) attached in advance to the target object is detected. Various types of markers can be used as the marker attached to the target object; for example, a tape, a sticker, a marker pen, a laser marker, Magic Tape (registered trademark), or the like can be used. The number of markers attached to the target object is arbitrary.
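As a sketch of the method a1, edge detection over the outside scene image might look as follows; OpenCV is used here only as one possible means, and the thresholds are assumptions.

```python
import cv2

def extract_target_features(outside_scene_bgr):
    """Method a1 sketch: detect the edges (feature parts) of the target object."""
    gray = cv2.cvtColor(outside_scene_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # illustrative thresholds
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```

Marker detection (method a2) would instead search the image for the known marker appearance, and the two results may be combined as noted above.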

  In step S106, the augmented reality processing unit 142 acquires the position of the target object in the field of view of the user of the HMD 100 and the distance between the HMD 100 and the target object. Specifically, the augmented reality processing unit 142 sets the position of the features extracted in step S104 as the position of the target object in the field of view. Further, the augmented reality processing unit 142 identifies, from the features extracted in step S104, what the target object is and the size of the target object in the entire outside scene image. The augmented reality processing unit 142 estimates how far the target object is from the HMD 100 (the distance between the two) based on the identified object and size. When the HMD 100 includes a depth sensor or a distance measurement sensor, the augmented reality processing unit 142 may acquire the distance between the HMD 100 and the target object using the measurement values of these sensors in step S106. In that case, the augmented reality processing unit 142 can acquire a more accurate distance.
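  A minimal sketch of the size-based distance estimation of step S106, assuming a pinhole camera model and that the real-world size of the identified object is known; all parameter names are assumptions.

```python
def estimate_distance(real_size_m, apparent_size_px, focal_length_px):
    """Estimate the distance between the HMD 100 and the target object from
    the identified real-world size of the object and its size in the image."""
    return focal_length_px * real_size_m / apparent_size_px

# Example: an object known to be about 0.6 m wide that appears 300 px wide
# through a camera with an 800 px focal length is roughly 1.6 m away.
d = estimate_distance(0.6, 300, 800)
```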

  In step S108, the augmented reality processing unit 142 acquires one or more virtual objects corresponding to the target object. The augmented reality processing unit 142 may acquire the virtual objects from a database (not shown) in the HMD 100, or may acquire them from a database (not shown) in another device (such as a server) connected to the HMD 100 via a network.

  In step S110, the augmented reality processing unit 142 arranges the virtual objects according to the position and distance of the target object, and generates a normal display image. Specifically, the augmented reality processing unit 142 processes each virtual object (character or image) acquired in step S108 into a size that matches the distance of the target object acquired in step S106, and places it at a position that matches the position of the target object acquired in step S106. In addition, the augmented reality processing unit 142 arranges black data in the area of the normal display image in which no virtual object is arranged, in order to improve the visibility of the outside scene SC when the image is displayed.
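  A minimal sketch of the image generation of step S110, assuming the normal display image is an RGB array in which black pixels are rendered as transparent by the optical system; the scaling method and array layout are assumptions.

```python
import numpy as np

def compose_normal_display_image(frame_shape, virtual_obj, position, scale):
    """Place a scaled virtual object at the target object's position and
    fill the remaining area with black data so that the outside scene SC
    stays visible there. Nearest-neighbour scaling keeps the sketch short."""
    h, w = frame_shape
    image = np.zeros((h, w, 3), dtype=np.uint8)  # black = outside scene visible
    oh, ow = virtual_obj.shape[:2]
    sh, sw = max(1, int(oh * scale)), max(1, int(ow * scale))
    ys = np.arange(sh) * oh // sh                # nearest-neighbour row indices
    xs = np.arange(sw) * ow // sw                # nearest-neighbour column indices
    scaled = virtual_obj[ys][:, xs]
    y, x = position                              # top-left corner, assumed >= 0
    image[y:y + sh, x:x + sw] = scaled[:h - y, :w - x]
    return image
```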

  FIG. 6 is an explanatory diagram illustrating an example of a normal display image visually recognized by the user. In step S112 of FIG. 5, the augmented reality processing unit 142 displays a normal display image. Specifically, the augmented reality processing unit 142 transmits the normal display image generated in step S110 to the image processing unit 160. The image processing unit 160 that has received the image executes the display process described with reference to FIG. As a result, as shown in FIG. 6, the user can visually recognize the virtual image VI representing the normal display image NI including the virtual objects VO1 to VO3 in the visual field VR. Further, the user can visually recognize the target object (real object) decorated by the virtual objects VO1 to VO3 in the outside scene SC behind the virtual image VI.

  In the example of FIG. 6, the target object is a table cutter placed at the work site. The virtual object VO1 is a map image representing the location of a part, the virtual object VO2 is a character string representing a work content instruction to the user, and the virtual object VO3 is an arrow image that assists the work content instruction. The virtual objects VO1 and VO2 are arranged in the vicinity of the table cutter, and the virtual object VO3 is arranged so as to be superimposed on the table cutter. As described above, the augmented reality processing unit 142 may display a virtual object in the vicinity of the target object, or may display a virtual object superimposed on the target object. The example of FIG. 6 shows three virtual objects associated with one target object; however, the number of virtual objects associated with one target object is arbitrary, and may be one or more.

  In step S114 of FIG. 5, the augmented reality processing unit 142 starts measuring the display time of the normal display image NI when the variable i is “0”. In step S116, the augmented reality processing unit 142 sets “1” to the variable i. Thereafter, the augmented reality processing unit 142 shifts the processing to step S102 and repeats the above-described processing.

  As described above, in the normal display process (FIG. 5), the normal display processing unit 144 can cause the image display unit 20 to display the virtual image VI including the virtual objects VO1 to VO3 for giving augmented reality to the user of the HMD 100. The virtual objects VO1 to VO3 displayed in the normal display process are in the "first display mode", which gives priority to the visibility of the virtual objects for the user, in other words, has a high degree of visibility inhibition.

A-2-3. Simple display processing:
FIG. 7 is a flowchart showing the procedure of the simple display process. The simple display process is a process for causing the image display unit 20 to form a virtual image representing a virtual object in the second display mode. The simple display process is started based on an instruction from the augmented reality processing unit 142 in the simple display state ST2 (FIG. 4), and is executed by the simple display processing unit 146.

  In step S200, the simple display processing unit 146 ends the measurement of the display time of the normal display image NI started in step S114 of the normal display process (FIG. 5). The simple display processing unit 146 stores the measured display time in the past maintenance time 122 in a manner distinguishable from the existing data.

  In step S202, the simple display processing unit 146 acquires the simple display mode 126 (specific display mode employed in the second display mode). In steps S202 and S206, the simple display processing unit 146 refers to the acquired value of the simple display mode 126.

  FIG. 8 is an explanatory diagram showing an example of non-display. In FIG. 7, when the simple display mode 126 indicates "non-display" (step S202: non-display), in step S204 the simple display processing unit 146 hides the normal display image NI (FIG. 6). Specifically, the simple display processing unit 146 can hide the normal display image NI by using any of the methods b1 to b4 exemplified below.

(B1) Erase the virtual object suddenly:
The simple display processing unit 146 hides the normal display image NI in such a manner that the virtual objects VO1 to VO3 are suddenly erased. Specifically, the simple display processing unit 146 stops transmission of the normal display image NI to the image processing unit 160. Alternatively, the simple display processing unit 146 transmits a request to the display control unit 190 to stop driving the display driving unit (LCD or backlight). Thereby, the display of the virtual image VI representing the normal display image NI by the image display unit 20 is stopped.

(B2) Fade out the virtual object out of the frame:
The simple display processing unit 146 hides the normal display image NI in such a manner that the virtual objects VO1 to VO3 fade out of the frame of the image. Specifically, the simple display processing unit 146 may repeatedly generate a normal display image in which each of the virtual objects VO1 to VO3 is moved toward the outside of the frame of the normal display image NI, and transmit the generated image to the image processing unit 160. Thereby, as shown in FIG. 8, each time the process is repeated, the position of each virtual object gradually moves toward the outside of the frame of the image (the direction indicated by the arrows in FIG. 8). As a result, the virtual objects appear to the user to disappear step by step toward the outside of the frame, so that it is possible to reduce the uncomfortable feeling given to the user as the display mode changes.

(B3) Increase the transparency of the virtual object to fade out:
The simple display processing unit 146 hides the normal display image NI in a manner of fading out by gradually increasing the transmittance of each of the virtual objects VO1 to VO3. Specifically, the simple display processing unit 146 may repeatedly generate a normal display image obtained by removing n dots (n is an arbitrary integer) from each of the virtual objects VO1 to VO3, and transmit the generated image to the image processing unit 160. Thereby, each time the process is repeated, the dots of the virtual objects are reduced by n. As a result, the virtual objects appear to the user to become more transparent and disappear in stages, so that it is possible to reduce the uncomfortable feeling given to the user as the display mode changes. Instead of removing the dots of the virtual objects, the simple display processing unit 146 may replace the dots of the virtual objects with black dots, may replace the virtual objects with virtual objects that display only the outline, may increase the α value of the normal display image NI, or may decrease the saturation of the normal display image NI.

(B4) Fading out of the frame while increasing the transparency of the virtual object:
This method is a combination of the method b2 and the method b3. Each time the process is repeated, the dots of the virtual objects are reduced by n, and the position of each virtual object gradually moves toward the outside of the frame of the image (the direction indicated by the arrows in FIG. 8). As a result, the virtual objects appear to the user to become more transparent while disappearing step by step toward the outside of the frame, so that it is possible to reduce the uncomfortable feeling given to the user as the display mode changes.
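The following sketch shows one iteration of the gradual hiding of the methods b2 to b4; the step sizes are illustrative assumptions, and repeating the step per generated image yields the staged disappearance described above.

```python
def fade_out_step(position, alpha, exit_direction, move_px=4, alpha_step=0.05):
    """One iteration of fading a virtual object out (methods b2 to b4).

    method b2: move the object toward the outside of the frame;
    method b3: raise its transparency step by step;
    method b4: do both at once, as below.
    """
    x, y = position
    dx, dy = exit_direction                   # e.g. (1, 0) for the right edge
    new_position = (x + dx * move_px, y + dy * move_px)
    new_alpha = max(0.0, alpha - alpha_step)  # 0.0 means fully transparent
    return new_position, new_alpha
```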

  FIG. 9 is an explanatory diagram illustrating an example of the end icon display. In FIG. 7, when the simple display mode 126 indicates "end icon display" (step S202: other, step S206: end icon display), in step S210 the simple display processing unit 146 acquires one or more icon images corresponding to the respective virtual objects VO1 to VO3 displayed in the normal display image NI (FIG. 6). The simple display processing unit 146 may acquire the icon images from a database (not shown) in the HMD 100, or may acquire them from a database (not shown) in another device (such as a server) connected to the HMD 100 via a network. An icon image may be associated with a virtual object in a one-to-one relationship, or in a one-to-many or many-to-one relationship.

  In step S212, the simple display processing unit 146 arranges all the icon images acquired in step S210 at the end of the image, and generates a simple display image. Here, the "end" may be any of the upper, lower, left, and right ends. However, it is preferable to avoid the effective field of view, which has excellent information receiving ability and lies within a range of about 30° horizontally and about 20° vertically, and the stable fixation field, in which the gazing point can be stabilized quickly and which lies within a range of 60° to 90° horizontally and 45° to 70° vertically. In addition, the simple display processing unit 146 arranges black data in the area of the simple display image where no icon image is arranged, in order to improve the visibility of the outside scene SC when the image is displayed.
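  A minimal sketch of the placement rule, assuming icon positions are expressed as angles from the display center and that checking the effective field of view (about ±15° horizontally and ±10° vertically) is sufficient because the stable fixation field is wider than typical display fields of view; the numbers are taken from the ranges above.

```python
def placement_ok(x_deg, y_deg):
    """Return True if an icon position avoids the effective field of view."""
    return abs(x_deg) > 15.0 or abs(y_deg) > 10.0

# Example: the lower right corner of a display with a 40 deg x 22 deg field
# of view lies at roughly (+20, -11) deg and passes the check.
assert placement_ok(20.0, -11.0)
```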

  In step S240, the simple display processing unit 146 displays the generated simple display image instead of the normal display image NI (FIG. 6). As a result, as shown in FIG. 9, the user can visually recognize the virtual image VI representing the simple display image SI including the virtual objects (icon images) VO4 and VO5 in the visual field VR. Further, the user can visually recognize the target object (real object) decorated by the virtual objects VO4 and VO5 in the outside scene SC behind the virtual image VI.

  In the example of FIG. 9, the target object is the table cutter placed at the work site, as in FIG. 6. The virtual object VO4 is a map icon image, and the virtual object VO5 is a manual icon image. The virtual object VO4 is associated with the virtual object VO1. The virtual object VO5 is associated with the virtual objects VO2 and VO3. The virtual objects VO4 and VO5 are both arranged at the lower right corner of the simple display image SI.

  FIG. 10 is an explanatory diagram illustrating an example of the neighborhood icon display. In FIG. 7, when the simple display mode 126 indicates "neighborhood icon display" (step S202: other, step S206: neighborhood icon display), in step S220 the simple display processing unit 146 acquires one or more icon images corresponding to the virtual objects VO1 to VO3 displayed in the normal display image NI (FIG. 6). Details are the same as in step S210.

  In step S222, the simple display processing unit 146 causes the camera 61 to acquire an outside scene image. In step S224, the simple display processing unit 146 extracts the features of the target object from the acquired outside scene image. Details are the same as in step S104 of FIG. 5. In step S226, the simple display processing unit 146 acquires the position and distance of the target object. Details are the same as in step S106 of FIG. 5.

  In step S228, the simple display processing unit 146 arranges the icon images according to the position and distance of the target object, and generates a simple display image. Specifically, the simple display processing unit 146 processes each icon image acquired in step S220 into a size that matches the distance of the target object acquired in step S226, and places it in the vicinity of the position of the target object acquired in step S226. In addition, the simple display processing unit 146 arranges black data in the area of the simple display image where no icon image is arranged, in order to improve the visibility of the outside scene SC when the image is displayed.

  In step S240, the simple display processing unit 146 displays the generated simple display image instead of the normal display image NI (FIG. 6). As a result, as shown in FIG. 10, the user can visually recognize the virtual image VI representing the simple display image SI including the virtual objects (icon images) VO4 and VO5 in the visual field VR. In the example of FIG. 10, the target object and the virtual objects VO4 and VO5 are the same as those in FIG. 9. FIG. 10 differs from FIG. 9 in that the virtual objects are arranged in the vicinity of the target object (real object), not at the end of the simple display image.

  FIG. 11 is an explanatory diagram illustrating an example of the highlight display. In FIG. 7, when the simple display mode 126 indicates "highlight display" (step S202: other, step S206: highlight display), in step S230 the simple display processing unit 146 causes the camera 61 to acquire an outside scene image. In step S232, the simple display processing unit 146 extracts the features of the target object from the acquired outside scene image. Details are the same as in step S104 of FIG. 5. In step S234, the simple display processing unit 146 acquires the position and distance of the target object. Details are the same as in step S106 of FIG. 5.

  In step S236, the simple display processing unit 146 generates a decoration image for the target object. Specifically, the simple display processing unit 146 generates an image for decorating at least a part of the target object according to the features of the target object specified in step S232 and the position and distance of the target object acquired in step S234. Here, "decoration" means emphasis. For this reason, the "decoration image" means an image for making at least a part of the target object appear to emit light (including lighting and blinking), an image for outlining at least a part of the target object, an image for making at least a part of the target object appear raised, or the like.

  In step S238, the simple display processing unit 146 arranges the decoration image generated in step S236 according to the position and distance of the target object, and generates a simple display image. In addition, the simple display processing unit 146 arranges black data in the area of the simple display image where the decoration image is not arranged, in order to improve the visibility of the outside scene SC when the image is displayed.

  In step S240, the simple display processing unit 146 displays the generated simple display image instead of the normal display image NI (FIG. 6). As a result, as shown in FIG. 11, the user can visually recognize the virtual image VI representing the simple display image SI including the virtual object (decorative image) VO6 in the visual field VR.

  In the example of FIG. 11, the target object (real object) is the table cutter placed at the work site, as in FIG. 6. The virtual object VO6 is a decoration image for emphasizing a part of the table cutter, namely the blade part.

  After step S240 in FIG. 7 ends, the simple display processing unit 146 shifts the process to step S202 and repeats the above-described process.

  As described above, in the simple display process (FIG. 7), when the simple display mode 126 is any one of "end icon display", "neighborhood icon display", and "highlight display", the simple display processing unit 146 can cause the image display unit 20 to display a virtual image VI including the virtual objects VO4 to VO6 for giving augmented reality to the user of the HMD 100. The virtual objects VO4 to VO6 displayed in the simple display process are in the "second display mode", which gives priority to the visibility of the user's outside scene, in other words, has a low degree of visibility inhibition.

  As described above, according to the simple display process (FIG. 7), the contents of the virtual objects in the first display mode can be suggested using the virtual objects in the second display mode (FIGS. 8, 9, 10, and 11, VO4 to VO6), which occupy a smaller area in the virtual image VI (have a lower degree of visibility inhibition) than the virtual objects in the first display mode (FIG. 6, VO1 to VO3).

  In each of the descriptions of the end icon display, the neighborhood icon display, and the highlight display described above, the transition from the normal display image NI (FIG. 6) to the simple display image SI (FIGS. 9, 10, and 11) was assumed to occur suddenly. However, the transition from the normal display image NI to the simple display image SI may be performed gradually. Specifically, methods similar to the methods b2 to b4 of step S204 can be used. In this way, the normal display image NI appears to the user to change gradually into the simple display image SI, so that it is possible to reduce the uncomfortable feeling given to the user as the display mode changes.

  Further, in the descriptions of the end icon display and the neighborhood icon display described above, the virtual objects VO4 and VO5 were assumed to be icon images, and in the description of the highlight display, the virtual object VO6 was assumed to be a graphic image. However, the virtual object in the second display mode is not necessarily an icon image or a graphic image. Any aspect can be employed for the virtual object in the second display mode as long as it consists of a character, a figure, a pattern, a symbol, or a combination thereof that suggests the contents of the virtual objects in the first display mode (VO1 to VO3). For example, the virtual object in the second display mode may be a simple character string, a simple symbol, or a combination of a picture and a character string.

  Hereinafter, a procedure for monitoring the establishment of transition conditions 1 to 3 performed by the augmented reality processing unit 142 in the augmented reality processing will be described.

A-2-4. Monitoring the establishment of transition condition 1:
FIG. 12 is a flowchart showing a procedure for monitoring whether the transition condition 1 is satisfied. At least one (or more) of the conditions 1-1 to 1-5 listed below is preset in the augmented reality processing unit 142 as the transition condition 1. The augmented reality processing unit 142 determines that the transition condition 1 is satisfied when at least one of the set conditions (1-1 to 1-5) is satisfied.

(1-1) When a real object that is a display target of a virtual object enters the user's field of view
(1-2) When the attention action by the user continues for at least the statistical value of the reference times in past augmented reality processing
(1-3) When the attention action by the user continues for at least the time obtained from the information amount of the virtual object
(1-4) When the attention action by the user continues for at least the set value set by the user
(1-5) When the attention action by the user continues for at least the time determined in consideration of individual differences among users and the information amount of the virtual object

  Hereinafter, a specific procedure for the augmented reality processing unit 142 to determine whether the conditions 1-1 to 1-5 are satisfied will be described with reference to FIG.

(1-1) CASE (in view): When a real object that is a display target of a virtual object enters the user's field of view

  In step S310, the augmented reality processing unit 142 acquires an outside scene image using the camera 61. In step S312, the augmented reality processing unit 142 recognizes the acquired outside scene image, and determines whether the outside scene image includes a real object that is a display target of a virtual object. The "real object that is the display target of the virtual object" is the "target object" of the normal display process (FIG. 5).

  When the target object is included (step S312: YES), in step S314, the augmented reality processing unit 142 determines that the transition condition 1 is satisfied. When the target object is not included, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 1-1 to 1-5.

  As described above, when the condition 1-1 is used, the augmented reality processing unit 142 determines that the transition condition 1 is satisfied when the real object (target object) that is the display target of the virtual object enters the user's field of view, and can transition the state of the augmented reality processing (FIG. 4) from the initial state to the normal display state ST1. As a result, the normal display process (FIG. 5) by the normal display processing unit 144 is executed, and the HMD 100 can make the user visually recognize the virtual image VI (FIG. 6) including the virtual objects in the first display mode.

(1-2) CASE (past reference time): When the attention action by the user continues for at least the statistical value of the reference times in past augmented reality processing

  In step S320, the augmented reality processing unit 142 detects the start of the attention action (an action in which the user pays attention to a specific point) by the user. In the present embodiment, when the user's line of sight acquired by the line-of-sight acquisition unit (the line-of-sight detection unit 62 and the augmented reality processing unit 142 of the CPU 140) does not move from a certain point for a predetermined time or more, it can be determined that the attention action has started. The predetermined time can be determined arbitrarily. In determining whether the line of sight has moved from a certain point, it is preferable to allow blurring within a predetermined range in consideration of the blurring of the line of sight caused by tremors of the eyeball. Note that it may instead be determined that the attention action has started when the movement of the user's hand acquired by the motion detection unit (the camera 61 and the augmented reality processing unit 142 of the CPU 140) does not move from a certain point for a predetermined time or more. Also in this case, it is preferable to allow blurring within a predetermined range in consideration of hand shake in the determination of whether the hand has moved from a certain point.
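  A minimal sketch of the start-of-attention judgment, assuming gaze positions in display pixels sampled at a fixed rate; the dwell time and the allowed jitter radius (which absorbs eyeball tremor) are assumptions.

```python
import math

def attention_started(gaze_samples, dwell_time_s, sample_rate_hz, jitter_px=20):
    """Judge that the attention action started when the line of sight stays
    within an allowed jitter radius for the predetermined time."""
    needed = int(dwell_time_s * sample_rate_hz)
    if len(gaze_samples) < needed:
        return False
    recent = gaze_samples[-needed:]
    cx = sum(p[0] for p in recent) / needed
    cy = sum(p[1] for p in recent) / needed
    return all(math.hypot(px - cx, py - cy) <= jitter_px for px, py in recent)
```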

  In step S322, the augmented reality processing unit 142 acquires all of the history of the reference times used in past augmented reality processing (hereinafter also referred to as "past reference times") stored in the past reference time 124, including the histories of other users. In step S322, the augmented reality processing unit 142 functions as a "reference time acquisition unit", and the past reference time 124 functions as "reference time information".

  In step S324, the augmented reality processing unit 142 obtains a statistical value of the past reference times using the acquired history. The statistical value can be obtained by any statistical method, and can be, for example, an average value, a mode value, or a median value. The augmented reality processing unit 142 sets the obtained statistical value as the "reference time used in the current processing".

  In step S326, the augmented reality processing unit 142 determines whether or not the duration of the attention action of the user, whose detection started in step S320, is equal to or longer than the reference time used in the current processing (the statistical value of step S324).

  If the duration is equal to or longer than the statistical value (step S326: YES), the augmented reality processing unit 142 determines in step S328 that the transition condition 1 is satisfied. The augmented reality processing unit 142 then stores, in the past reference time 124, the actual duration of the attention action of the user whose detection started in step S320, the information amount of the virtual object in the first display mode, and the identifier of the user. A method for obtaining the information amount of the virtual object in the first display mode will be described in step S332.

  Further, in step S328, the augmented reality processing unit 142 specifies the real object that is the target of the attention action. Specifically, the augmented reality processing unit 142 can specify the real object by collating the direction of the user's line of sight detected in step S320 with the outside scene image obtained by the camera 61. The "real object that is the target of the attention action" is the "target object" of the normal display process (FIG. 5) and the simple display process (FIG. 7). When the movement of the user's hand is used in step S320, the augmented reality processing unit 142 may analyze the outside scene image obtained by the camera 61 and set the object indicated by the user's hand (for example, a fingertip) as the real object that is the target of the attention action.

  Note that, when the attention action ends before the duration reaches the statistical value, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 1-1 to 1-5.

  As described above, when the condition 1-2 is used, the augmented reality processing unit 142 can automatically change the reference time used in the current augmented reality processing based on the statistical value of the reference times used in past augmented reality processing (the past reference times), that is, the tendency of the past reference times. The augmented reality processing unit 142 determines that the transition condition 1 is satisfied when the duration of the user's attention action is equal to or longer than the reference time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the initial state to the normal display state ST1.
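  The following sketch summarizes steps S322 to S326 for the condition 1-2; the median is used here as the statistical value, though the text equally allows the mean or the mode.

```python
import statistics

def reference_time_from_history(past_reference_times):
    """Steps S322-S324: derive the current reference time from the statistics
    of the past reference times (histories of other users included)."""
    return statistics.median(past_reference_times)

def condition_1_2_satisfied(attention_duration_s, past_reference_times):
    """Step S326: the attention action must last at least the statistic."""
    return attention_duration_s >= reference_time_from_history(past_reference_times)
```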

(1-3) CASE (amount of information): When the attention action by the user continues for at least the time obtained from the information amount of the virtual object

  In step S330, the augmented reality processing unit 142 detects the start of the attention motion by the user. Details are the same as in step S320.

  In step S332, the augmented reality processing unit 142 acquires the information amount of the virtual object. Specifically, the augmented reality processing unit 142 specifies the real object (that is, the target object) that is the target of the attention action whose detection started in step S330; the details are the same as in step S328. The augmented reality processing unit 142 acquires one or more virtual objects (virtual objects in the first display mode) corresponding to the specified target object; the details are the same as in step S108 of FIG. 5. The augmented reality processing unit 142 then obtains the information amount of the acquired virtual object or objects. The augmented reality processing unit 142 can obtain the information amount of a virtual object using, for example, any of the methods c1 to c3 exemplified below. When a plurality of virtual objects are acquired, the average of their information amounts or the total of their information amounts may be used as the information amount.

(C1) File size of the virtual object: The augmented reality processing unit 142 preferably adopts the method c1 when the virtual object is a combination of characters and an image, when the virtual object is a video, or when the type of the virtual object is unknown.
(C2) Number of characters included in the virtual object: The augmented reality processing unit 142 preferably adopts the method c2 when the virtual object consists of characters.
(C3) Ratio of black dots when the virtual object is binarized: The augmented reality processing unit 142 preferably adopts the method c3 when the virtual object is an image.

  By properly using the methods c1 to c3 described above, the augmented reality processing unit 142 can obtain the information amount of the virtual object by a method suitable for the type of the virtual object in the first display mode, and can thus accurately grasp the information amount of the virtual object.

  In step S334, the augmented reality processing unit 142 obtains a threshold of the reference time used in the current processing from the information amount of the virtual object acquired in step S332. The threshold can be obtained by an arbitrary method: for example, the threshold may be obtained by multiplying the information amount by a predetermined coefficient, or may be obtained using a table in which information amount candidates and threshold candidates are associated with each other. The augmented reality processing unit 142 sets the obtained threshold as the "reference time used in the current processing".
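  A minimal sketch of steps S332 and S334 under the methods c1 to c3, assuming virtual objects are described by a small dictionary and that the threshold is derived by the coefficient method; the dictionary keys and the coefficient are assumptions.

```python
import os

def virtual_object_info_amount(obj):
    """Step S332 sketch: pick the method suited to the virtual object's type.

    c1: file size (mixed content, video, or unknown type),
    c2: number of characters (text),
    c3: ratio of black dots in the binarized image times the dot count (image).
    """
    if obj["type"] == "text":
        return len(obj["text"])                      # method c2
    if obj["type"] == "image":
        return obj["black_dot_ratio"] * obj["dots"]  # method c3
    return os.path.getsize(obj["path"])              # method c1

def reference_time_from_info(info_amount, coefficient=0.01):
    """Step S334 sketch: multiply the information amount by a predetermined
    coefficient (an assumed value) to obtain the reference time threshold."""
    return info_amount * coefficient
```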

  In step S336, the augmented reality processing unit 142 determines whether or not the duration of the attention action of the user, whose detection started in step S330, is equal to or longer than the reference time used in the current processing (the threshold of step S334).

  If the duration is equal to or longer than the threshold (step S336: YES), the augmented reality processing unit 142 determines in step S338 that the transition condition 1 is satisfied. The processing after the transition condition 1 is satisfied (the storage in the past reference time 124 and the specification of the real object that is the target of the attention action) is the same as in step S328. Note that, when the attention action ends before the duration reaches the threshold, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 1-1 to 1-5.

  As described above, when the condition 1-3 is used, the augmented reality processing unit 142 can change the reference time used in the current augmented reality processing based on the information amount of the virtual objects in the first display mode (FIG. 6, VO1 to VO3), which occupy a large area in the virtual image VI (have a high degree of visibility inhibition). In this way, when the information amount of the virtual object in the first display mode is large, in other words, when the display of the virtual object accompanying the transition from the initial state to the first display mode tends to hinder the visual recognition of the real object, the augmented reality processing unit 142 can make the reference time longer than when the amount of information is small, so that the convenience for the user can be improved. The augmented reality processing unit 142 determines that the transition condition 1 is satisfied when the duration of the user's attention action is equal to or longer than the reference time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) to the normal display state ST1.

(1-4) CASE (user setting): When the attention action by the user continues for at least the set value set by the user

  In step S340, the augmented reality processing unit 142 detects the start of the attention motion by the user. Details are the same as in step S320.

  In step S342, the augmented reality processing unit 142 acquires the set value of the reference time set by the user and stored in the reference time setting 125. In step S342, the augmented reality processing unit 142 functions as a “reference time acquisition unit”. The augmented reality processing unit 142 sets the acquired setting value as “a reference time used in the current process”.

  In step S344, the augmented reality processing unit 142 determines whether or not the duration of the attention action of the user, whose detection started in step S340, is equal to or longer than the reference time used in the current processing (the set value of step S342).

  If the duration is equal to or longer than the set value (step S344: YES), the augmented reality processing unit 142 determines in step S346 that the transition condition 1 is satisfied. The processing after the transition condition 1 is satisfied (the storage in the past reference time 124 and the specification of the real object that is the target of the attention action) is the same as in step S328. Note that, when the attention action ends before the duration reaches the set value, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 1-1 to 1-5.

  As described above, when the condition 1-4 is used, the augmented reality processing unit 142 can change the reference time used in the current augmented reality processing according to the user's preferred setting value stored in the reference time setting 125. The augmented reality processing unit 142 determines that the transition condition 1 is satisfied when the duration of the user's attention action is equal to or longer than the reference time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the initial state to the normal display state ST1.

(1-5) CASE (individual differences among users): When the attention action by the user continues for at least the time determined in consideration of individual differences among users and the information amount of the virtual object

  In step S350, the augmented reality processing unit 142 detects the start of the attention motion by the user. Details are the same as in step S320.

  In step S352, the augmented reality processing unit 142 acquires the history of the current user of the HMD 100 from the history of the reference times used in past augmented reality processing stored in the past reference time 124. The augmented reality processing unit 142 may search the past reference time 124 using the user identifier as a key. In step S352, the augmented reality processing unit 142 functions as a "reference time acquisition unit".

  In step S354, the augmented reality processing unit 142 obtains the information amount to which the user of the HMD 100 can pay attention per unit time by dividing the "information amount" in the acquired history by the "duration of the attention action". Next, the augmented reality processing unit 142 obtains an ideal reference time by dividing the information amount of the virtual object in the first display mode by the obtained information amount (the information amount attendable per unit time). The augmented reality processing unit 142 sets the obtained ideal reference time as the "reference time used in the current processing". The method for obtaining the information amount of the virtual object in the first display mode is the same as in step S332.
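  A minimal sketch of step S354, assuming the user's history is a list of (information amount, attention duration) pairs drawn from the past reference time 124:

```python
def ideal_reference_time(user_history, current_info_amount):
    """Divide the historical information amounts by the attention durations to
    get the information amount the user can attend to per unit time, then
    divide the current virtual object's information amount by that rate."""
    total_info = sum(info for info, _ in user_history)
    total_time = sum(duration for _, duration in user_history)
    rate = total_info / total_time  # information amount attendable per second
    return current_info_amount / rate

# Example: a user who handled 1200 units of information over 60 s of attention
# (20 units/s) gets a 5 s reference time for a 100-unit virtual object.
t = ideal_reference_time([(1200, 60.0)], 100)  # -> 5.0
```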

  In step S356, the augmented reality processing unit 142 determines whether or not the duration of the attention action of the user, whose detection started in step S350, is equal to or longer than the reference time used in the current processing (the ideal reference time of step S354).

  If the duration is equal to or longer than the ideal reference time (step S356: YES), the augmented reality processing unit 142 determines in step S358 that the transition condition 1 is satisfied. The processing after the transition condition 1 is satisfied (the storage in the past reference time 124 and the specification of the real object that is the target of the attention action) is the same as in step S328. Note that, when the attention action ends before the duration reaches the ideal reference time, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 1-1 to 1-5.

  As described above, when the condition 1-5 is used, the augmented reality processing unit 142 can obtain, using the reference time information (past reference time 124), the information amount to which the user can pay attention per unit time. The augmented reality processing unit 142 can therefore change the reference time used in the current augmented reality processing based on the obtained information amount (the information amount to which the user can pay attention per unit time) and the information amount of the virtual objects in the first display mode (FIG. 6, VO1 to VO3). In this way, when the amount of information to which the user can pay attention per unit time is small, in other words, when the display of the virtual object accompanying the transition from the initial state to the first display mode tends to hinder the visual recognition of the real object, the augmented reality processing unit 142 can make the reference time longer than when that amount of information is large. As a result, the augmented reality processing unit 142 can change the reference time according to individual differences among users, so that the convenience for the user can be improved. The augmented reality processing unit 142 determines that the transition condition 1 is satisfied when the duration of the user's attention action is equal to or longer than the reference time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the initial state to the normal display state ST1.

  As described above, according to the augmented reality processing (the transition from the initial state to the normal display state ST1 when the transition condition 1 is established), the augmented reality processing unit 142 causes the image display unit 20 to form the virtual image VI (NI) including the virtual objects in the first display mode (FIG. 6, VO1 to VO3), at least for the real object (target object) on which the attention action was performed, in response to the attention action continuing for a predetermined reference time or longer. In this way, the virtual objects are displayed according to the user's intention expressed by continuing the attention action, so that the user can easily see the real objects in the real world as long as the attention action is not continued. As a result, it is possible to provide a head-mounted display device (HMD 100) in which the display of virtual objects is unlikely to hinder the visual recognition of real objects and their background.

  Furthermore, according to the augmented reality processing (the monitoring of the establishment of the transition condition 1), after the augmented reality processing is started, the augmented reality processing unit 142 can change the reference time for transitioning the display mode to the first display mode (FIG. 6), in which the virtual objects VO1 to VO3 occupy a large area in the virtual image VI (have a high degree of visibility inhibition), according to various conditions such as those listed in the conditions 1-2 to 1-5.

  Furthermore, in the augmented reality processing (the monitoring of the establishment of the transition condition 1), if the user's line of sight acquired by the line-of-sight acquisition unit (the line-of-sight detection unit 62 and the augmented reality processing unit 142 of the CPU 140) is used, the user can perform the attention action using the line of sight without moving a hand or a foot. For this reason, the user can easily perform the attention action even in a scene, such as during work, where it is difficult for the user to free a hand. Further, in the augmented reality processing (the monitoring of the establishment of the transition condition 1), if the movement of the user's hand acquired by the motion detection unit (the camera 61 and the augmented reality processing unit 142 of the CPU 140) is used, the user can easily perform the attention action using hand movements familiar from everyday operation.

A-2-5. Monitoring the establishment of transition condition 3:
The procedure for monitoring the establishment of the transition condition 3 is almost the same as that for the transition condition 1 shown in FIG. 12. Hereinafter, the differences will be described. At least one (or more) of the conditions 3-1 to 3-5 listed below is preset in the augmented reality processing unit 142 as the transition condition 3. The augmented reality processing unit 142 determines that the transition condition 3 is satisfied when at least one of the set conditions (3-1 to 3-5) is satisfied.

(3-1) When a real object that is a display target of a virtual object enters the user's field of view and the user performs a predetermined operation
(3-2) When the attention action by the user continues for at least the statistical value of the reference times in past augmented reality processing
(3-3) When the attention action by the user continues for at least the time obtained from the information amount of the virtual object
(3-4) When the attention action by the user continues for at least the set value set by the user
(3-5) When the attention action by the user continues for at least the time determined in consideration of individual differences among users and the information amount of the virtual object

(3-1) When a real object that is a display target of a virtual object enters the user's field of view and the user performs a predetermined operation

  Steps S310 and S312 are the same as in FIG. 12. If the target object is included in step S312 (step S312: YES), the augmented reality processing unit 142 monitors whether or not a predetermined operation has been performed by the user. As the predetermined operation, an arbitrary operation can be adopted as long as it differs from the invalidation action described with reference to FIG. 4; for example, a specific "gesture" can be adopted. The method for acquiring the gesture is the same as the method for acquiring the invalidation action. When the predetermined operation is performed, the augmented reality processing unit 142 determines in step S314 that the transition condition 3 is satisfied.

  As described above, when the condition 3-1 is used, the augmented reality processing unit 142 determines that the transition condition 3 is satisfied when the real object (target object) that is the display target of the virtual object enters the user's field of view and the user performs a predetermined operation, and can transition the state of the augmented reality processing (FIG. 4) from the simple display state ST2 to the normal display state ST1.

(3-2) When the attention action by the user continues for at least the statistical value of the reference time in past augmented reality processing

Steps S320 to S326 are the same as those in FIG. 12. In step S328, after determining that the transition condition 3 is satisfied, the augmented reality processing unit 142 stores a record in the past reference time 124 (as in step S328 of FIG. 12). Thereafter, the augmented reality processing unit 142 identifies the real object of either d1 or d2 below; the identification method is the same as in step S328.
(d1) The real object that is the target of the attention action
(d2) The real object associated with the virtual object in the second display mode that is the target of the attention action

  As described above, when the condition 3-2 is used, the augmented reality processing unit 142 can, as with the condition 1-2, automatically change the reference time used in the current augmented reality processing based on the tendency of the statistical value of the past reference times. Further, the augmented reality processing unit 142 determines that the transition condition 3 is satisfied when the duration of the user's attention action is equal to or longer than the reference time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the simple display state ST2 to the normal display state ST1.

(3-3) When the attention action by the user continues for at least the time obtained from the information amount of the virtual object

  Steps S330, S334, and S336 are the same as those in FIG. 12. In steps S332 and S338, the description "the real object that is the target of the attention action" may be read as "the real object of either d1 or d2 described in condition 3-2".

  As described above, when the condition 3-3 is used, the augmented reality processing unit 142 can, as with the condition 1-3, change the reference time used in the current augmented reality processing based on the information amount of the virtual object (VO1 to VO3 in FIG. 6) in the first display mode, in which the area occupied in the virtual image VI is large (the degree of visibility inhibition is high). In this way, when the information amount of the virtual object in the first display mode is large, in other words, when the display of the virtual object accompanying the transition from the second display mode (FIGS. 8, 9, 10, and 11) to the first display mode is likely to hinder the visual recognition of the real object, the augmented reality processing unit 142 can make the reference time longer than when the amount of information is small. Therefore, the convenience for the user can be improved. Further, the augmented reality processing unit 142 determines that the transition condition 3 is satisfied when the duration of the user's attention action is equal to or longer than the reference time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the simple display state ST2 to the normal display state ST1.

(3-4) When the attention action by the user continues for at least the set value set by the user

  Steps S340 to S344 are the same as those in FIG. 12. In step S346, the description "the real object that is the target of the attention action" may be read as "the real object of either d1 or d2 described in condition 3-2".

  As described above, when the condition 3-4 is used, the augmented reality processing unit 142 can, as with the condition 1-4, change the reference time used in the current augmented reality processing according to the user's preferred set value stored in the reference time setting 125. Further, the augmented reality processing unit 142 determines that the transition condition 3 is satisfied when the duration of the user's attention action is equal to or longer than the reference time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the simple display state ST2 to the normal display state ST1.

(3-5) When the attention action by the user continues for at least the time determined in consideration of the individual differences of the user and the information amount of the virtual object

  Steps S350 to S356 are the same as those in FIG. 12. In step S358, the description "the real object that is the target of the attention action" may be read as "the real object of either d1 or d2 described in condition 3-2".

  As described above, when the condition 3-5 is used, the augmented reality processing unit 142 can, as with the condition 1-5, change the reference time used in the current augmented reality processing based on the amount of information to which the user can pay attention per unit time and the information amount of the virtual object (VO1 to VO3 in FIG. 6) in the first display mode. In this way, when the amount of information to which the user can pay attention per unit time is small, in other words, when the display of the virtual object accompanying the transition from the second display mode (FIGS. 8, 9, 10, and 11) to the first display mode is likely to hinder the visual recognition of the real object, the augmented reality processing unit 142 can make the reference time longer than when the amount of information is large. As a result, the augmented reality processing unit 142 can change the reference time according to the individual differences of users, and can thus improve user convenience. Further, the augmented reality processing unit 142 determines that the transition condition 3 is satisfied when the duration of the user's attention action is equal to or longer than the reference time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the simple display state ST2 to the normal display state ST1.

  As described above, according to the augmented reality processing (transition from the simple display state ST2 to the normal display state ST1 upon establishment of the transition condition 3), after forming on the image display unit 20 the virtual image VI (SI) including the virtual objects in the second display mode (VO4 to VO6 in FIGS. 8, 9, 10, and 11), the augmented reality processing unit 142 can form on the image display unit 20 the virtual image VI (NI) including the virtual objects in the first display mode (VO1 to VO3 in FIG. 6), in which the area of the virtual object occupying the virtual image is larger than in the second display mode (the degree of visibility inhibition is higher), in accordance with the continuation of the attention action over the predetermined reference time directed not only at the real object (d1 above) but also at the virtual object in the second display mode (d2 above). Since the display state of the virtual object transitions from the second display mode to the first display mode, and the area of the virtual object occupying the displayed virtual image increases (the degree of visibility inhibition rises), only through the user's deliberate continuation of the attention action, the user can maintain a state in which real objects existing in the real world are easy to see simply by not continuing the attention action. In other words, the user can control the degree of visibility inhibition of the virtual object at will. As a result, it is possible to provide a head-mounted display device (HMD 100) in which the display of a virtual object is unlikely to hinder the visual recognition of a real object and its background.

  Furthermore, according to the augmented reality processing (monitoring of the establishment of the transition condition 3), the augmented reality processing unit 142 can change, according to various conditions such as those listed in conditions 3-2 to 3-5, the reference time for changing the display mode from the second display mode (FIGS. 8, 9, 10, and 11), in which the areas of the virtual objects VO4 to VO6 occupying the virtual image VI are small (the degree of visibility inhibition is low), to the first display mode (FIG. 6), in which the areas of the virtual objects VO1 to VO3 occupying the virtual image VI are large (the degree of visibility inhibition is high).

A-2-6. Monitoring the establishment of transition condition 2:
FIG. 13 is a flowchart showing the procedure for monitoring whether the transition condition 2 is satisfied. At least one (or more) of the conditions 2-1 to 2-5 listed below may be set in advance in the augmented reality processing unit 142 as the transition condition 2. The augmented reality processing unit 142 determines that the transition condition 2 is satisfied when at least one of the set conditions (2-1 to 2-5) is satisfied.

(2-1) When the real object that is the display target of a virtual object leaves the user's field of view
(2-2) When, after the virtual object is displayed in the first display mode, a time equal to or longer than the statistical value of the maintenance time in past augmented reality processing has elapsed
(2-3) When, after the virtual object is displayed in the first display mode, a time equal to or longer than the time obtained from the information amount of the virtual object has elapsed
(2-4) When, after the virtual object is displayed in the first display mode, a time equal to or longer than the set value set by the user has elapsed
(2-5) When, after the virtual object is displayed in the first display mode, a time equal to or longer than the time determined in consideration of the individual differences of the user and the information amount of the virtual object has elapsed

  Hereinafter, the specific procedure by which the augmented reality processing unit 142 determines whether the conditions 2-1 to 2-5 are satisfied will be described with reference to FIG. 13.

(2-1) CASE (out of sight): When the real object that is the display target of a virtual object leaves the user's field of view

  In step S410, the augmented reality processing unit 142 acquires an outside scene image using the camera 61. In step S412, the augmented reality processing unit 142 performs image recognition on the acquired outside scene image to determine whether the outside scene includes the real object that is the display target of the virtual object displayed in the first display mode. The "real object that is the display target of the virtual object" is the "target object" of the simple display process (FIG. 7).

  When the target object is not included (step S412: NO), the augmented reality processing unit 142 determines in step S414 that the transition condition 2 is satisfied. When the target object is included, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 2-1 to 2-5.

  As described above, when the condition 2-1 is used, the augmented reality processing unit 142 determines that the transition condition 2 is satisfied when the real object (target object) that is the display target of the virtual object displayed in the first display mode leaves the user's field of view, and can transition the state of the augmented reality processing (FIG. 4) from the normal display state ST1 to the simple display state ST2. As a result, the simple display process (FIG. 7) is executed by the simple display processing unit 146, and the HMD 100 can make the user visually recognize the virtual image VI (FIGS. 8, 9, 10, and 11) including the virtual object in the second display mode.
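  A minimal sketch of the condition 2-1 determination (steps S410 to S414) follows. The camera.capture() and recognize_objects() interfaces are illustrative assumptions; the embodiment only requires that image recognition decide whether the target object is still present in the outside scene image.

    def condition_2_1_satisfied(camera, recognize_objects, target_object):
        """True when the target object has left the user's field of view."""
        outside_scene_image = camera.capture()                      # step S410
        detected_objects = recognize_objects(outside_scene_image)   # step S412
        # Step S414: transition condition 2 holds only if the target
        # object is no longer detected in the outside scene.
        return target_object not in detected_objects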

(2-2) CASE (past maintenance time): When a time equal to or longer than the statistical value of the maintenance time in past augmented reality processing has elapsed after the virtual object is displayed in the first display mode

  In step S420, the augmented reality processing unit 142 acquires the entire history of the maintenance times used in past augmented reality processing and stored in the past maintenance time 122 (hereinafter also referred to as the "past maintenance times"), including the histories of other users. In step S420, the augmented reality processing unit 142 functions as a "maintenance time acquisition unit", and the past maintenance time 122 functions as "maintenance time information".

  In step S422, the augmented reality processing unit 142 obtains a statistical value of the past maintenance times using the acquired history. The statistical value can be obtained by any statistical method and can be, for example, an average value, a mode, or a median. The augmented reality processing unit 142 sets the obtained statistical value as the "maintenance time used in the current process".

  In step S424, the augmented reality processing unit 142 determines whether the display time of the normal display image NI (FIG. 6), whose measurement is started in step S114 of FIG. 5, is equal to or longer than the maintenance time used in the current process (the statistical value of step S422). Note that the display time of the normal display image NI (FIG. 6) is synonymous with the display time of the virtual object in the first display mode.

  If the display time is equal to or longer than the statistical value (step S424: YES), the augmented reality processing unit 142 determines in step S426 that the transition condition 2 is satisfied. When the display time is shorter than the statistical value, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 2-1 to 2-5.

  In this way, when the condition 2-2 is used, the augmented reality processing unit 142 can automatically change the maintenance time used in the current augmented reality processing based on the statistical value of the maintenance times used in past augmented reality processing (the past maintenance times), that is, based on the tendency of the past maintenance times. Further, the augmented reality processing unit 142 determines that the transition condition 2 is satisfied when the display time of the normal display image NI (the display time of the virtual objects VO1 to VO3 in the first display mode) is equal to or longer than the maintenance time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the normal display state ST1 to the simple display state ST2.
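  Under the assumption that the past maintenance time 122 yields a plain list of durations in seconds, steps S420 to S424 reduce to the following sketch. Python's statistics module covers the average, mode, and median mentioned in step S422, and the display timer stands in for the measurement started in step S114 of FIG. 5; the sample values are illustrative.

    import statistics
    import time

    def maintenance_time_statistic(past_times, method="mean"):
        """Steps S420-S422: statistic over the past maintenance times."""
        pick = {"mean": statistics.mean,
                "mode": statistics.mode,
                "median": statistics.median}
        return pick[method](past_times)

    display_started_at = time.monotonic()  # started at step S114 (FIG. 5)
    threshold = maintenance_time_statistic([12.0, 15.0, 9.5], "median")
    # Step S424: condition 2-2 holds once the display time of the normal
    # display image NI reaches the statistic.
    condition_2_2 = (time.monotonic() - display_started_at) >= threshold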

(2-3) CASE (information amount): When a time equal to or longer than the time obtained from the information amount of the virtual object has elapsed after the virtual object is displayed in the first display mode

  In step S430, the augmented reality processing unit 142 acquires the information amount of the virtual object displayed in the first display mode. The method for acquiring the information amount of the virtual object is the same as the methods c1 to c3 in step S332 of FIG. 12.

  In step S432, the augmented reality processing unit 142 obtains a threshold for the maintenance time used in the current process from the information amount of the virtual object acquired in step S430. The threshold can be obtained by any method. For example, the threshold may be obtained by multiplying the information amount by a predetermined coefficient (a coefficient different from that in step S334 of FIG. 12), or may be obtained using a table that associates information amount candidates with threshold candidates (a table different from that in step S334 of FIG. 12). The augmented reality processing unit 142 sets the obtained threshold as the "maintenance time used in the current process".

  In step S434, the augmented reality processing unit 142 determines whether the display time of the normal display image NI (FIG. 6), whose measurement is started in step S114 of FIG. 5, is equal to or longer than the maintenance time used in the current process (the threshold of step S432).

  If the display time is equal to or longer than the threshold (step S434: YES), the augmented reality processing unit 142 determines in step S436 that the transition condition 2 is satisfied. When the display time is shorter than the threshold, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 2-1 to 2-5.

  As described above, when the condition 2-3 is used, the augmented reality processing unit 142 can change the maintenance time used in the current augmented reality processing based on the information amount of the virtual object (VO1 to VO3 in FIG. 6) in the first display mode, in which the area occupied in the virtual image VI is large (the degree of visibility inhibition is high). In this way, when the information amount of the virtual object in the first display mode is large, in other words, when it is estimated that the user needs a lot of time to check the contents of the virtual object, the augmented reality processing unit 142 can make the maintenance time longer than when the amount of information is small, so the convenience for the user can be improved. Further, the augmented reality processing unit 142 determines that the transition condition 2 is satisfied when the display time of the virtual object in the first display mode is equal to or longer than the maintenance time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the normal display state ST1 to the simple display state ST2.
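  Both derivation methods named in step S432 can be pictured as below; the coefficient value and the table entries are illustrative assumptions, since the embodiment fixes neither.

    def threshold_by_coefficient(info_amount, coefficient=0.05):
        """Maintenance time proportional to the information amount."""
        return info_amount * coefficient  # e.g. 200 characters -> 10 s

    def threshold_by_table(info_amount):
        """Maintenance time via a table associating information-amount
        candidates with threshold candidates."""
        table = [(100, 5.0), (300, 10.0), (1000, 20.0)]  # (upper bound, s)
        for upper_bound, threshold in table:
            if info_amount <= upper_bound:
                return threshold
        return 30.0  # fallback for very large virtual objects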

(2-4) CASE (user setting): When a time equal to or longer than the set value set by the user has elapsed after the virtual object is displayed in the first display mode

  In step S440, the augmented reality processing unit 142 acquires the set value of the maintenance time set by the user and stored in the maintenance time setting 123. In step S440, the augmented reality processing unit 142 functions as a "maintenance time acquisition unit". The augmented reality processing unit 142 sets the acquired set value as the "maintenance time used in the current process".

  In step S442, the augmented reality processing unit 142 determines whether the display time of the normal display image NI (FIG. 6), whose measurement is started in step S114 of FIG. 5, is equal to or longer than the maintenance time used in the current process (the set value of step S440).

  If the display time is equal to or longer than the set value (step S442: YES), the augmented reality processing unit 142 determines in step S444 that the transition condition 2 is satisfied. When the display time is shorter than the set value, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 2-1 to 2-5.

  In this way, when the condition 2-4 is used, the augmented reality processing unit 142 can change the maintenance time used in the current augmented reality processing according to the user's preferred set value stored in the maintenance time setting 123. Further, the augmented reality processing unit 142 determines that the transition condition 2 is satisfied when the display time of the virtual object in the first display mode is equal to or longer than the maintenance time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the normal display state ST1 to the simple display state ST2.

(2-5) CASE (individual differences of the user): When a time equal to or longer than the time determined in consideration of the individual differences of the user and the information amount of the virtual object has elapsed after the virtual object is displayed in the first display mode

  In step S450, the augmented reality processing unit 142 acquires the history of the current user of the HMD 100 from the history of the maintenance times used in past augmented reality processing and stored in the past maintenance time 122. The augmented reality processing unit 142 may search the past maintenance time 122 using the user's identifier as a key. In step S450, the augmented reality processing unit 142 functions as a "maintenance time acquisition unit", and the past maintenance time 122 functions as "maintenance time information".

  In step S452, the augmented reality processing unit 142 divides the "information amount" in the acquired history by the "maintenance time" to obtain the amount of information that the user of the HMD 100 can recognize per unit time. Next, the augmented reality processing unit 142 obtains an ideal maintenance time by dividing the information amount of the virtual object in the first display mode by the obtained information amount (the amount of information recognizable per unit time). The augmented reality processing unit 142 sets the calculated ideal maintenance time as the "maintenance time used in the current process". The method for acquiring the information amount of the virtual object is the same as the methods c1 to c3 in step S332 of FIG. 12.
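  The two divisions of step S452 amount to the following sketch, under the assumption that the user's history from the past maintenance time 122 is a list of (information amount, maintenance time) pairs; averaging the per-entry rates is one reasonable reading, since the description does not fix how multiple history entries are combined.

    def ideal_maintenance_time(user_history, current_info_amount):
        """Step S452: per-user maintenance time for the current process."""
        # Information recognizable per unit time, averaged over the
        # user's history ("information amount" / "maintenance time").
        rates = [info / t for info, t in user_history if t > 0]
        recognizable_per_second = sum(rates) / len(rates)
        # Time the current first-display-mode virtual object would take.
        return current_info_amount / recognizable_per_second

    # A user who recognizes 9 units/s needs 20 s for 180 units of content.
    print(ideal_maintenance_time([(120, 12.0), (200, 25.0)], 180))  # 20.0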

  In step S454, the augmented reality processing unit 142 determines whether the display time of the normal display image NI (FIG. 6), whose measurement is started in step S114 of FIG. 5, is equal to or longer than the maintenance time used in the current process (the ideal maintenance time of step S452).

  If the display time is equal to or longer than the ideal maintenance time (step S454: YES), the augmented reality processing unit 142 determines in step S456 that the transition condition 2 is satisfied. When the display time is shorter than the ideal maintenance time, the augmented reality processing unit 142 continues to monitor the establishment of the conditions 2-1 to 2-5.

  In this way, when the condition 2-5 is used, the augmented reality processing unit 142 can use the maintenance time information (the past maintenance time 122) to obtain the amount of information that the user can recognize per unit time. Therefore, the augmented reality processing unit 142 can change the maintenance time used in the current augmented reality processing based on, for example, the obtained information amount (the amount of information the user can recognize per unit time) and the information amount of the virtual object (VO1 to VO3 in FIG. 6) in the first display mode. In this way, when the amount of information that the user can recognize per unit time is small, in other words, when it is estimated that the user needs a lot of time to check the contents of the virtual object in the first display mode, the augmented reality processing unit 142 can make the maintenance time longer than when the amount of information is large. As a result, the augmented reality processing unit 142 can change the maintenance time according to the individual differences of users, so the convenience of the user can be improved. Further, the augmented reality processing unit 142 determines that the transition condition 2 is satisfied when the display time of the virtual object in the first display mode is equal to or longer than the maintenance time used in the current augmented reality processing, and can transition the state of the augmented reality processing (FIG. 4) from the normal display state ST1 to the simple display state ST2.

  As described above, according to the augmented reality processing (transition from the normal display state ST1 to the simple display state ST2 when the transition condition 2 is satisfied), after forming on the image display unit 20 the virtual image VI (NI) including the virtual objects in the first display mode (VO1 to VO3 in FIG. 6), the augmented reality processing unit 142 forms on the image display unit 20, after the elapse of a predetermined maintenance time, the virtual image VI (SI) including the virtual objects in the second display mode (VO4 to VO6 in FIGS. 8, 9, 10, and 11), in which the area of the virtual object occupying the virtual image is smaller than in the first display mode (the degree of visibility inhibition is low). In this way, since the area of the virtual object occupying the displayed virtual image is automatically reduced after the maintenance time has elapsed (the degree of visibility inhibition is lowered), the user can easily see the real objects existing in the real world. As a result, it is possible to provide a head-mounted display device (HMD 100) in which the display of a virtual object is unlikely to hinder the visual recognition of a real object and its background.

  Furthermore, according to the augmented reality processing (monitoring of the establishment of the transition condition 2), the augmented reality processing unit 142 can change, according to various conditions such as those listed in conditions 2-2 to 2-5, the maintenance time for changing the display mode from the first display mode (FIG. 6), in which the areas of the virtual objects VO1 to VO3 occupying the virtual image VI are large (the degree of visibility inhibition is high), to the second display mode (FIGS. 8, 9, 10, and 11), in which the areas of the virtual objects VO4 to VO6 occupying the virtual image VI are small (the degree of visibility inhibition is low).

B. Modifications:
In the above embodiment, part of the configuration realized by hardware may be replaced by software, and conversely, part of the configuration realized by software may be replaced by hardware. In addition, the following modifications are possible.

・ Modification 1:
In the above embodiment, the configuration of the HMD was illustrated. However, the configuration of the HMD can be determined arbitrarily without departing from the gist of the present invention; for example, components can be added, deleted, or converted.

  The allocation of components to the control unit and the image display unit in the above embodiment is merely an example, and various aspects can be adopted. For example: (i) a mode in which processing functions such as a CPU and memory are mounted on the control unit and only a display function is mounted on the image display unit; (ii) a mode in which processing functions such as a CPU and memory are mounted on both the control unit and the image display unit; (iii) a mode in which the control unit and the image display unit are integrated (for example, a mode in which the control unit is included in the image display unit and functions as a glasses-type wearable computer); (iv) a mode in which a smartphone or a portable game machine is used instead of the control unit; (v) a mode in which the control unit and the image display unit are connected via a wireless signal transmission path such as a wireless LAN, infrared communication, or Bluetooth, and the connecting unit (cord) is eliminated. In this case, power may be supplied to the control unit and the image display unit wirelessly.

  For example, the configurations of the control unit and the image display unit exemplified in the above embodiment can be changed arbitrarily. Specifically, in the above embodiment, the control unit includes the transmission units and the image display unit includes the reception units; however, the transmission units and the reception units may each have a bidirectional communication function and function as transmission/reception units. Further, part of the operation interfaces provided in the control unit (such as the various keys and the trackpad) may be omitted, or another operation interface such as an operation stick may be provided. A keyboard, a mouse, or another device may be made connectable to the control unit so that input is received from it. Although a secondary battery is used as the power source, the power source is not limited to a secondary battery, and various batteries such as a primary battery, a fuel cell, a solar cell, or a thermal cell may be used.

  FIG. 14 is an explanatory diagram showing the external configuration of HMDs in modifications. In the example of FIG. 14(A), the image display unit 20x includes a right optical image display unit 26x instead of the right optical image display unit 26 and a left optical image display unit 28x instead of the left optical image display unit 28. The right optical image display unit 26x and the left optical image display unit 28x are formed smaller than the optical members of the above embodiment and are disposed obliquely above the user's right and left eyes, respectively, when the HMD is worn. In the example of FIG. 14(B), the image display unit 20y includes a right optical image display unit 26y instead of the right optical image display unit 26 and a left optical image display unit 28y instead of the left optical image display unit 28. The right optical image display unit 26y and the left optical image display unit 28y are formed smaller than the optical members of the above embodiment and are disposed obliquely below the user's right and left eyes, respectively, when the HMD is worn. It is thus sufficient that the optical image display units are disposed in the vicinity of the user's eyes. The size of the optical members forming the optical image display units is also arbitrary, and the HMD can be realized in a form in which the optical image display units cover only part of the user's eyes, in other words, a form in which the optical image display units do not completely cover the user's eyes.

  For example, in the above embodiment, each processing unit included in the control unit (for example, the image processing unit, the display control unit, and the augmented reality processing unit) was described as being realized by the CPU loading a computer program stored in the ROM or the hard disk onto the RAM and executing it. However, these functional units may be configured using an ASIC (Application Specific Integrated Circuit) designed to realize the corresponding functions. Each processing unit may also be arranged in the image display unit instead of the control unit.

  For example, in the above embodiment the HMD is a binocular transmissive HMD, but it may be a monocular HMD. It may also be configured as a non-transmissive HMD that blocks the outside scene when the user wears it, or as a video see-through HMD in which a camera is mounted on a non-transmissive HMD. Further, instead of the image display unit worn like glasses, an ordinary flat display device (a liquid crystal display device, a plasma display device, an organic EL display device, or the like) may be adopted as the image display unit. In this case as well, the connection between the control unit and the image display unit may be via a wired or wireless signal transmission path; the control unit can then also be used as a remote control for the ordinary flat display device. Instead of the image display unit worn like glasses, an image display unit of another shape, such as one worn like a hat, may be adopted. The earphones may be of the ear-hook type or the headband type, or may be omitted. The HMD may also be configured as a head-up display (HUD) mounted on a vehicle such as an automobile or an airplane, or on other means of transportation, or as an HMD built into body protection gear such as a helmet.

  For example, in the above embodiment, the image light generation unit is configured using a backlight, a backlight control unit, an LCD, and an LCD control unit. However, this is merely an example. Together with or instead of these components, the image light generation unit may include components for realizing another method. For example, the image light generation unit may include an organic EL (Organic Electro-Luminescence) display and an organic EL control unit, or may use a digital micromirror device or the like instead of the LCD. The present invention can also be applied to a laser retinal projection type head-mounted display device.

・Modification 2:
In the above embodiment, an example of the augmented reality processing was described. However, the processing procedure shown in the above embodiment is merely an example, and various modifications are possible. For example, some steps may be omitted, other steps may be added, and the order of the executed steps may be changed.

  For example, in the above embodiment the augmented reality processing starts from the normal display state ST1 (that is, transitions from the initial state to the normal display state ST1), but it may instead start from the simple display state ST2 (that is, transition from the initial state to the simple display state ST2).

  For example, the augmented reality processing unit may monitor the invalidation action and cancel the state transition while monitoring the transition condition 1 as well, as it does for the transition conditions 2 and 3. Conversely, the augmented reality processing unit may omit the monitoring of the invalidation action during the monitoring of the transition conditions 2 and 3.

  For example, during the monitoring of the transition conditions 1 and 3, the augmented reality processing unit may monitor an "attention action" realized by the movement of the user's head acquired by the 9-axis sensor, instead of or together with the line-of-sight acquisition unit (acquisition of line-of-sight movement) and the motion acquisition unit (acquisition of hand movement) described in the above embodiment.

  For example, in the normal display process (FIG. 5), when the transition condition 1 or the transition condition 3 is satisfied, the normal display processing unit may also display virtual objects for real objects that are not the target of the user's attention action (hereinafter also referred to as "other real objects") among the plurality of real objects included in the outside scene image. In this case, the normal display processing unit may differentiate the display mode of the virtual objects added to other real objects from the display mode of the virtual object added to the real object that is the target of the attention action (that is, the target object). The display mode is, for example, size, brightness, saturation, and the like.
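  One way to picture this differentiation: the virtual object attached to the target object keeps its full size and brightness, while virtual objects attached to other real objects are rendered smaller and dimmer. The attribute names and reduction factors below are illustrative assumptions, not values from the embodiment.

    def style_virtual_object(is_target, base_size, base_brightness):
        """Differentiate the target object's virtual object from those of
        other real objects by size and brightness (saturation would be
        handled analogously)."""
        if is_target:
            return {"size": base_size, "brightness": base_brightness}
        # Other real objects: reduced so the attended object stands out.
        return {"size": base_size * 0.6, "brightness": base_brightness * 0.5}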

  For example, instead of or together with the invalidation action (first request) described above, a forced transition action (second request) for forcibly changing the state of the augmented reality processing may be used. The forced transition action is an action performed by at least one of the user's hand, foot, voice, or head, or a combination thereof, and any action different from the invalidation action can be adopted. When the forced transition action is detected while the augmented reality processing is in the normal display state ST1, the augmented reality processing unit transitions the augmented reality processing to the simple display state ST2. In this way, the augmented reality processing unit can forcibly transition from the first display mode to the second display mode in response to the second request from the user even before the maintenance time has elapsed, so user convenience can be improved. Conversely, when the forced transition action is detected while the augmented reality processing is in the simple display state ST2, the augmented reality processing unit transitions the augmented reality processing to the normal display state ST1. In this way, the augmented reality processing unit can display the virtual object in the first display mode in response to the second request from the user even before the reference time has elapsed, so user convenience can be improved.
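  The effect of the forced transition action can be pictured as a timer-independent toggle between the two states of FIG. 4; detecting the action itself (hand, foot, voice, or head) is abstracted away in this sketch.

    NORMAL_DISPLAY_ST1 = "ST1"
    SIMPLE_DISPLAY_ST2 = "ST2"

    def on_forced_transition_action(state):
        """Second request: switch states regardless of elapsed time."""
        if state == NORMAL_DISPLAY_ST1:
            return SIMPLE_DISPLAY_ST2  # even before the maintenance time
        return NORMAL_DISPLAY_ST1      # even before the reference time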

・ Modification 3:
If the following modifications e1 to e8 are applied to the augmented reality processing exemplified in the above embodiment, work support using the HMD can be realized.

(e1) Information related to the work (for example, instructions on the work content, information assisting those instructions, the standard time required for the work, and information specifying the user's movements during the work) is stored in the storage unit in advance.
(e2) The movement of the user's body is acquired using the 9-axis sensor and the camera (these may be used alone or in combination, and other sensors may be used together).

(e3) After work support is started, the information related to the work in the storage unit (the standard time required for the work and the information specifying the user's movements during the work) is checked against the movement of the user's body acquired in the modification e2, and the progress of the work by the user is monitored.
(e4) When the monitoring result of the modification e3 is either "work stopped", in which the movement of the user's body has stopped, or "work delayed", in which the progress has been delayed for a predetermined time or more, it is determined that the transition condition 1 of the augmented reality processing (the transition condition from the initial state to the normal display state ST1) is satisfied, and the virtual object in the first display mode is displayed. The displayed virtual object is based on the information related to the work in the storage unit (the instructions on the work content and the information assisting those instructions).

(e5) In the normal display state ST1, when the monitoring result of the modification e3 is "work in progress", in which the progress is as planned, it is determined that the transition condition 2 of the augmented reality processing (the transition condition from the normal display state ST1 to the simple display state ST2) is satisfied, and the virtual object in the second display mode is displayed. As described above, the virtual object in the second display mode may be hidden or may be displayed using an icon image, characters, or the like.
(e6) In the normal display state ST1, when the monitoring result of the modification e3 is work stopped or work delayed, the normal display state ST1 is continued.

(e7) In the simple display state ST2, when the monitoring result of the modification e3 is work stopped or work delayed, it is determined that the transition condition 3 of the augmented reality processing (the transition condition from the simple display state ST2 to the normal display state ST1) is satisfied, and the virtual object in the first display mode is displayed.
(e8) In the simple display state ST2, when the monitoring result of the modification e3 is work in progress, the simple display state ST2 is continued.

  In this way, the HMD can continue the simple display state ST2 (including display or non-display of the virtual object in the second display mode) for a skilled worker who can proceed with the work smoothly, and can continue the normal display state ST1 (display of the virtual object in the first display mode) for a worker who is unfamiliar with the work. Moreover, even for a skilled worker, when the work stops, for example because an unclear point arises partway through the procedure, the state transitions to the normal display state ST1 and the virtual object in the first display mode can be displayed. As a result, it is possible to provide an HMD capable of supporting work with improved convenience for the worker. Furthermore, when it is considered unnecessary to display a virtual object, such as when a skilled worker is the HMD user, the display of the virtual object can be omitted (or simplified). For this reason, the possibility that the visibility of the real object is unnecessarily impaired can be reduced, and the possibility that the user feels annoyed can be reduced.
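  The mapping of the modifications e4 to e8 reduces to a small state machine driven by the monitoring result of e3, as the following sketch shows. The label strings are paraphrases of the monitoring results above, and the progress classification itself (from the 9-axis sensor and the camera) is abstracted away.

    INITIAL = "initial"
    ST1 = "normal_display_ST1"
    ST2 = "simple_display_ST2"

    def next_state(state, monitoring_result):
        """monitoring_result (from e3): 'stopped', 'delayed', 'in_progress'."""
        trouble = monitoring_result in ("stopped", "delayed")
        if state == ST1:
            return ST1 if trouble else ST2  # e6 / e5
        if state == ST2:
            return ST1 if trouble else ST2  # e7 / e8
        # Initial state: e4 defines only the trouble case; the text leaves
        # the in-progress case unspecified, so remain in the initial state.
        return ST1 if trouble else INITIAL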

  As described above, in the modification 3, when a predetermined action by the user (the worker), namely the action specified by the information specifying the user's movements during the work, is not started within a predetermined reference time (the standard time required for the work), the augmented reality processing unit forms a virtual image including the virtual object in the first display mode on the image display unit. In other words, the augmented reality processing unit does not display the virtual object in the first display mode when the predetermined action by the user is started within the reference time. Therefore, for example, while the user is performing a predetermined action (for example, some work), the possibility that the virtual object in the first display mode is displayed and blocks the user's view can be reduced. As a result, it is possible to provide a head-mounted display device (HMD) in which the display of a virtual object is unlikely to hinder the visual recognition of a real object and its background.

  In the modification 3 as well, as in the above embodiment, the transition between the normal display state ST1 and the simple display state ST2 may be made by "continuation of the attention action over a predetermined reference time". In this way, when a worker performing a series of operations cannot remember (or does not know) the next operation and is at a loss, the state can transition from the simple display state ST2 to the normal display state ST1 to support the work.

  The above-mentioned "forming a virtual image including the virtual object in the first display mode on the image display unit when a predetermined action by the user is not started within a predetermined reference time" matches the contents of the above embodiment if the "predetermined action" is read as "an action that stops (discontinues) the attention action".

・Modification 4:
The present invention is not limited to the above embodiments, examples, and modifications, and can be realized in various configurations without departing from its gist. For example, the technical features in the embodiments, examples, and modifications that correspond to the technical features in each aspect described in the summary of the invention can be replaced or combined as appropriate in order to solve some or all of the problems described above or to achieve some or all of the effects described above. Further, a technical feature can be deleted as appropriate unless it is described as essential in this specification.

DESCRIPTION OF SYMBOLS 10 ... Control unit 11 ... Decision key 12 ... Illumination unit 13 ... Display switching key 14 ... Trackpad 15 ... Luminance switching key 16 ... Direction key 17 ... Menu key 18 ... Power switch 20 ... Image display unit 21 ... Right holding unit 22 ... Right display drive unit 23 ... Left holding unit 24 ... Left display drive unit 26 ... Right optical image display unit 28 ... Left optical image display unit 30 ... Earphone plug 32 ... Right earphone 34 ... Left earphone 40 ... Connection unit 42 ... Right cord 44 ... Left cord 46 ... Connecting member 48 ... Main body cord 51 ... Transmission unit 52 ... Transmission unit 53 ... Reception unit 54 ... Reception unit 61 ... Camera (motion acquisition unit)
62 ... Line-of-sight detection unit (line-of-sight acquisition unit)
66 ... 9-axis sensor 110 ... Input information acquisition unit 100 ... HMD (head-mounted display device)
120 ... storage unit 121 ... display state 122 ... past maintenance time (maintenance time information)
123 ... maintenance time setting 124 ... past reference time (reference time information)
125 ... reference time setting 126 ... simple display mode 130 ... power source 132 ... wireless communication unit 140 ... CPU
142 ... Augmented reality processing unit (maintenance time acquisition unit, reference time acquisition unit)
144 ... Normal display processing unit 146 ... Simple display processing unit 160 ... Image processing unit 170 ... Audio processing unit 180 ... Interface 190 ... Display control unit 201 ... Right backlight control unit 202 ... Left backlight control unit 211 ... Right LCD control unit 212 ... Left LCD controller 221 ... Right backlight 222 ... Left backlight 241 ... Right LCD
242 ... Left LCD
251 ... Right projection optical system 252 ... Left projection optical system 261 ... Right light guide plate 262 ... Left light guide plate PCLK ... Clock signal VSync ... Vertical sync signal HSync ... Horizontal sync signal Data ... Image data Data1 ... Right eye image data Data2 ... Left eye image data OA ... External device PC ... Personal computer SC ... Outside scene VI ... Virtual image VR ... Field of view RE ... Right eye LE ... Left eye ER ... End EL ... End NI ... Normal display image SI ... Simple display image VO1 ... Virtual object (first display mode)
VO2 ... Virtual object (first display mode)
VO3 ... Virtual object (first display mode)
VO4 ... Virtual object (second display mode)
VO5 ... Virtual object (second display mode)
VO6 ... Virtual object (second display mode)
ST1 ... Normal display state ST2 ... Simple display state

Claims (13)

  1. A head-mounted display device that allows a user to visually recognize a virtual image and an outside scene, comprising:
    an image display unit that causes the user to visually recognize the virtual image; and
    an augmented reality processing unit that causes the image display unit to form the virtual image including a virtual object to be additionally displayed on a real object that actually exists in the real world;
    wherein the augmented reality processing unit,
    in accordance with continuation of an attention action over a predetermined reference time with respect to the real object,
    forms the virtual image including, in a first display mode, a virtual object related to at least the real object on which the attention action has been performed;
    the augmented reality processing unit further,
    in a case where the virtual image including the virtual object in a second display mode has been formed prior to the formation of the virtual image including the virtual object in the first display mode,
    in accordance with continuation of the attention action over the reference time with respect to either the virtual object in the second display mode or the real object,
    forms the virtual image including, in the first display mode, a virtual object related to at least the virtual object or the real object on which the attention action has been performed;
    a degree of visibility inhibition, with respect to the real object, of the virtual object in the second display mode is lower than a degree of visibility inhibition, with respect to the real object, of the virtual object in the first display mode; and
    the virtual object in the second display mode includes at least one of a character, a figure, a picture, a symbol, and a combination thereof that suggests the content of the virtual object in the first display mode.
  2. The head-mounted display device according to claim 1, further comprising:
    a line-of-sight acquisition unit that acquires movement of the user's line of sight as the attention action.
  3. The head-mounted display device according to claim 1, further comprising:
    a motion acquisition unit that acquires movement of the user's hand as the attention action.
  4. The head-mounted display device according to any one of claims 1 to 3,
    wherein the reference time has a variable length.
  5. The head-mounted display device according to claim 4, further comprising:
    a reference time acquisition unit that acquires reference times used in the past by the augmented reality processing unit;
    wherein the augmented reality processing unit
    obtains a statistical value of the acquired past reference times, and
    changes the reference time used in the current process based on the obtained statistical value.
  6. The head-mounted display device according to claim 4,
    wherein the augmented reality processing unit
    obtains an information amount of the virtual object in the first display mode, and
    changes the reference time used in the current process based on the obtained information amount.
  7. The head-mounted display device according to claim 6,
    wherein the augmented reality processing unit
    changes the method of obtaining the information amount according to the type of the virtual object in the first display mode.
  8. The head-mounted display device according to claim 4, further comprising:
    a reference time acquisition unit that acquires the user's setting of the reference time;
    wherein the augmented reality processing unit
    changes the reference time used in the current process based on the acquired setting of the user.
  9. The head-mounted display device according to claim 4, further comprising:
    a reference time acquisition unit that acquires reference time information in which reference times used in the past by the augmented reality processing unit, information amounts of the virtual objects in the first display mode at those times, and identification information identifying the users at those times are associated with one another;
    wherein the augmented reality processing unit
    changes the reference time used in the current process based on the acquired reference time information and the information amount of the virtual object in the first display mode.
  10. The head-mounted display device according to any one of claims 1 to 9,
    wherein the augmented reality processing unit executes at least one of:
    canceling the transition to the first display mode when a first request from the user is acquired while waiting for the reference time to elapse; and
    forming the virtual image including the virtual object in the first display mode, even before the reference time has elapsed, when a second request from the user is acquired while waiting for the reference time to elapse.
  11. The head-mounted display device according to claim 10, further comprising:
    a request acquisition unit that acquires at least one of the first request, which is an action performed by at least one of the user's hand, foot, voice, or head, or a combination thereof, and the second request, which is an action different from the first request.
  12. A method for controlling a head-mounted display device, comprising:
    a display step of allowing a user to visually recognize a virtual image; and
    a control step of forming, in the display step, the virtual image including a virtual object to be additionally displayed on a real object that actually exists in the real world;
    wherein the control step,
    in accordance with continuation of an attention action over a predetermined reference time with respect to the real object,
    forms the virtual image including, in a first display mode, a virtual object related to at least the real object on which the attention action has been performed;
    the control step further,
    in a case where the virtual image including the virtual object in a second display mode has been formed prior to the formation of the virtual image including the virtual object in the first display mode,
    in accordance with continuation of the attention action over the reference time with respect to either the virtual object in the second display mode or the real object,
    forms the virtual image including, in the first display mode, a virtual object related to at least the virtual object or the real object on which the attention action has been performed;
    a degree of visibility inhibition, with respect to the real object, of the virtual object in the second display mode is lower than a degree of visibility inhibition, with respect to the real object, of the virtual object in the first display mode; and
    the virtual object in the second display mode includes at least one of a character, a figure, a picture, a symbol, and a combination thereof that suggests the content of the virtual object in the first display mode.
  13. A computer program for causing a computer to realize:
    a display function of allowing a user to visually recognize a virtual image; and
    a control function of forming, in the display function, the virtual image including a virtual object to be additionally displayed on a real object that actually exists in the real world;
    wherein the control function,
    in accordance with continuation of an attention action over a predetermined reference time with respect to the real object,
    forms the virtual image including, in a first display mode, a virtual object related to at least the real object on which the attention action has been performed;
    the control function further,
    in a case where the virtual image including the virtual object in a second display mode has been formed prior to the formation of the virtual image including the virtual object in the first display mode,
    in accordance with continuation of the attention action over the reference time with respect to either the virtual object in the second display mode or the real object,
    forms the virtual image including, in the first display mode, a virtual object related to at least the virtual object or the real object on which the attention action has been performed;
    a degree of visibility inhibition, with respect to the real object, of the virtual object in the second display mode is lower than a degree of visibility inhibition, with respect to the real object, of the virtual object in the first display mode; and
    the virtual object in the second display mode includes at least one of a character, a figure, a picture, a symbol, and a combination thereof that suggests the content of the virtual object in the first display mode.
JP2014212728A 2014-10-17 2014-10-17 Head-mounted display device, method for controlling head-mounted display device, computer program Active JP6421543B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014212728A JP6421543B2 (en) 2014-10-17 2014-10-17 Head-mounted display device, method for controlling head-mounted display device, computer program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014212728A JP6421543B2 (en) 2014-10-17 2014-10-17 Head-mounted display device, method for controlling head-mounted display device, computer program
US14/870,659 US10140768B2 (en) 2014-10-17 2015-09-30 Head mounted display, method of controlling head mounted display, and computer program

Publications (2)

Publication Number Publication Date
JP2016081339A JP2016081339A (en) 2016-05-16
JP6421543B2 true JP6421543B2 (en) 2018-11-14

Family

ID=55958810

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2014212728A Active JP6421543B2 (en) 2014-10-17 2014-10-17 Head-mounted display device, method for controlling head-mounted display device, computer program

Country Status (1)

Country Link
JP (1) JP6421543B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6581748B1 (en) * 2018-05-21 2019-09-25 楽天株式会社 Display device, display method, program, and non-transitory computer-readable information recording medium

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05324250A (en) * 1992-05-26 1993-12-07 Canon Inc Image forming device
JP2576401B2 (en) * 1993-12-15 1997-01-29 日本電気株式会社 E-book
JP2002157275A (en) * 2000-11-22 2002-05-31 Fuji Photo Film Co Ltd Picture display device and storage medium
JP4026600B2 (en) * 2004-01-20 2007-12-26 マツダ株式会社 Image display device for vehicle
JP2006277192A (en) * 2005-03-29 2006-10-12 Advanced Telecommunication Research Institute International Image display system
JP4757091B2 (en) * 2006-04-28 2011-08-24 本田技研工業株式会社 Operation device for on-vehicle equipment
JP4872451B2 (en) * 2006-05-15 2012-02-08 トヨタ自動車株式会社 Vehicle input device
WO2009093435A1 (en) * 2008-01-25 2009-07-30 Panasonic Corporation Brain wave interface system, brain wave interface device, method and computer program
JP2010061265A (en) * 2008-09-02 2010-03-18 Fujifilm Corp Person retrieval and registration system
JP5168161B2 (en) * 2009-01-16 2013-03-21 ブラザー工業株式会社 Head mounted display
JP5195537B2 (en) * 2009-03-09 2013-05-08 ブラザー工業株式会社 Head mounted display
JP5715842B2 (en) * 2011-02-08 2015-05-13 新日鉄住金ソリューションズ株式会社 Information providing system, information providing method, and program
WO2013175847A1 (en) * 2012-05-22 2013-11-28 ソニー株式会社 Image processing device, image processing method, and program
JP6099342B2 (en) * 2012-09-19 2017-03-22 大学共同利用機関法人情報・システム研究機構 Interactive information search device using gaze interface
EP2936283A1 (en) * 2012-12-21 2015-10-28 Harman Becker Automotive Systems GmbH Input device for a motor vehicle
US10359841B2 (en) * 2013-01-13 2019-07-23 Qualcomm Incorporated Apparatus and method for controlling an augmented reality device
CN105264572B (en) * 2013-04-04 2018-09-21 索尼公司 Information processing equipment, information processing method and program
WO2015107625A1 (en) * 2014-01-15 2015-07-23 日立マクセル株式会社 Information display terminal, information display system, and information display method
WO2015155841A1 (en) * 2014-04-08 2015-10-15 日立マクセル株式会社 Information display method and information display terminal

Also Published As

Publication number Publication date
JP2016081339A (en) 2016-05-16

Similar Documents

Publication Publication Date Title
JP5977922B2 (en) Information processing apparatus, information processing apparatus control method, and transmissive head-mounted display apparatus
EP2652940B1 (en) Comprehension and intent-based content for augmented reality displays
US9727132B2 (en) Multi-visor: managing applications in augmented reality environments
EP2652543B1 (en) Optimized focal area for augmented reality displays
JP6364715B2 (en) Transmission display device and control method of transmission display device
US9217867B2 (en) Head-mounted display device and control method for the head-mounted display device
US9143693B1 (en) Systems and methods for push-button slow motion
US20130176533A1 (en) Structured Light for Eye-Tracking
KR101845350B1 (en) Head-mounted display device, control method of head-mounted display device, and display system
US9448407B2 (en) Head-mounted display device, control method for head-mounted display device, and work supporting system
CN104076512B Head-mounted display device and control method of head-mounted display device
US9652036B2 (en) Device, head mounted display, control method of device and control method of head mounted display
KR100943392B1 (en) Apparatus for displaying three-dimensional image and method for controlling location of display in the apparatus
US8670000B2 (en) Optical display system and method with virtual image contrast control
JP6186689B2 (en) Video display system
US9064442B2 (en) Head mounted display apparatus and method of controlling head mounted display apparatus
JP5970872B2 (en) Head-mounted display device and method for controlling head-mounted display device
CN105045375B (en) Head-mounted display device, control method therefor, control system, and computer program
US9223401B1 (en) User interface
US9411160B2 (en) Head mounted display, control method for head mounted display, and image display system
JP6160154B2 (en) Information display system using head-mounted display device, information display method using head-mounted display device, and head-mounted display device
JP6066037B2 (en) Head-mounted display device
US20120200478A1 (en) Head-mounted display device and control method for the head-mounted display device
JP6119228B2 (en) Head-mounted display device, head-mounted display device control method, and work support system
US9959591B2 (en) Display apparatus, method for controlling display apparatus, and program

Legal Events

Date        Code  Title (Description)

2017-07-19  A621  Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
2018-01-25  A977  Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
2018-02-06  A131  Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
2018-04-02  A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523)
2018-05-22  A02   Decision of refusal (JAPANESE INTERMEDIATE CODE: A02)
2018-08-22  A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523)
2018-08-29  A911  Transfer of reconsideration by examiner before appeal (zenchi) (JAPANESE INTERMEDIATE CODE: A911)
(no date)   TRDD  Decision of grant or rejection written
2018-09-18  A01   Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
2018-10-01  A61   First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
(no date)   R150  Certificate of patent or registration of utility model (Ref document number: 6421543; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150)