WO2018008210A1 - Information processing device, information processing method, and program - Google Patents


Info

Publication number
WO2018008210A1
WO2018008210A1
Authority
WO
WIPO (PCT)
Prior art keywords
real object
user
display
information processing
real
Prior art date
Application number
PCT/JP2017/013655
Other languages
French (fr)
Japanese (ja)
Inventor
浩丈 市川
佐藤 直之
誠司 鈴木
真人 島川
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2018008210A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position

Definitions

  • The present disclosure relates to an information processing apparatus, an information processing method, and a program.
  • In recent years, various technologies related to augmented reality (AR), which superimposes additional information on the real world and presents it to the user, have been developed.
  • Patent Document 2 describes a technique for controlling the display of a display object on a transmissive display so that a user can visually recognize a real object located behind the transmissive display through the transmissive display.
  • Patent Document 1: JP 2012-155654 A. Patent Document 2: Japanese Patent No. 5830987.
  • However, Patent Document 2 does not disclose changing the display method of the display object on the transmissive display according to the real object.
  • The present disclosure therefore proposes a new and improved information processing apparatus, information processing method, and program capable of changing the user's degree of recognition of a real object adaptively to the real object included in the user's field of view.
  • According to the present disclosure, an information processing apparatus is provided that includes an output control unit that controls display by a display unit so that the user's degree of recognition of a real object changes in the range between the user and the real object, based on a determination result of whether or not the real object included in the user's field of view is a first real object.
  • Also provided is an information processing method including a processor controlling display by a display unit so that the degree of recognition of the real object changes in the same manner.
  • Also provided is a program for causing a computer to function as an output control unit that controls display by a display unit so that the user's degree of recognition of the real object changes in the range between the user and the real object, based on the same determination result.
  • According to the present disclosure, the user's degree of recognition of a real object can thus be changed adaptively to the real object included in the user's field of view.
  • Note that the effects described here are not necessarily limiting, and any of the effects described in the present disclosure may be exhibited.
  • FIG. 2 and FIG. 3 are schematic diagrams showing how a user visually recognizes a real object and a virtual object through the AR glass 10. FIG. 4 is a diagram showing the real object and the virtual object visually recognized through the display unit 124 in the situation shown in FIG. 2. FIG. 5 is a functional block diagram showing a configuration example of the AR glass 10 according to the present embodiment. FIG. 6 is a diagram showing what the user visually recognizes through the AR glass 10 when the real object 30a is set to avoid concealment.
  • FIG. 8 is a diagram showing an example of the position change allowable range set for the virtual object 32.
  • FIG. 9A and FIG. 9B are diagrams showing display examples of a virtual object on an evacuation route. FIG. 10 is a flowchart showing an operation example according to the present embodiment. The final figure is an explanatory diagram showing a hardware configuration example of the AR glass 10 according to the present embodiment.
  • a plurality of constituent elements having substantially the same functional configuration may be distinguished by adding different alphabets after the same reference numeral.
  • a plurality of configurations having substantially the same functional configuration are differentiated as needed, such as the AR glass 10a and the AR glass 10b.
  • the AR glass 10a and the AR glass 10b are simply referred to as the AR glass 10 when it is not necessary to distinguish between them.
  • the information processing system includes an AR glass 10, a server 20, and a communication network 22.
  • the AR glass 10 is an example of an information processing apparatus according to the present disclosure.
  • The AR glass 10 is a device that controls the display of virtual objects associated in advance with positions in the real world. For example, the AR glass 10 first acquires, from the server 20 via the communication network 22, the virtual objects located around its current position (for example, within a certain range in all directions), based on the position information of the AR glass 10. Then, based on the posture of the AR glass 10 (or the detection result of the user's line-of-sight direction), the AR glass 10 displays, on the display unit 124 described later, the virtual objects included in the user's field of view among the acquired virtual objects.
  • For example, the AR glass 10 generates a right-eye image and a left-eye image based on the acquired virtual object, displays the right-eye image on the right-eye display unit 124a, and displays the left-eye image on the left-eye display unit 124b. The user can thereby visually recognize a virtual stereoscopic image.
  • the virtual object is basically a 3D object, but is not limited to such an example, and may be a 2D object.
  • the display unit 124 of the AR glass 10 is configured by a transmissive display.
  • FIG. 2 and FIG. 3 are schematic diagrams showing how the user wears the AR glass 10 and visually recognizes the real object 30 and the virtual object 32.
  • Through the display unit 124, the user can simultaneously view the real objects 30 included in the user's field of view 40, among the plurality of real objects 30 located in the real world, together with the virtual objects 32.
  • “real object” includes not only a single real object but also a predetermined area in the real world (for example, the entire building, an intersection, a corridor, etc.).
  • the user's field of view 40 may be defined in various ways. For example, it may be estimated that the user's field of view 40 is approximately the center of the area captured by the camera provided on the outside of the AR glass 10, that is, the front side of the AR glass 10.
  • Alternatively, the user's gaze direction may be estimated based on an eyeball image captured by the camera provided on the inside of the AR glass 10 (that is, the rear side of the AR glass 10), and a predetermined three-dimensional space corresponding to the gaze direction may be estimated to be the user's field of view 40.
  • the three-dimensional shape of the user's field of view 40 may be determined as appropriate, but the three-dimensional shape is preferably defined as a substantially conical shape.
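  • As a rough illustration of such a conical field-of-view model, the following sketch tests whether a world-space point lies inside a cone defined by an estimated gaze origin, gaze direction, and half-angle. The function name, the half-angle, and the sensing range are illustrative assumptions, not values from the present disclosure.

```python
import numpy as np

def in_view_cone(point, eye_pos, gaze_dir, half_angle_deg=30.0, max_range=50.0):
    """Return True if `point` lies inside a cone approximating the field of view 40.

    The cone apex is at `eye_pos` and its axis runs along `gaze_dir`; the
    half-angle and maximum range are illustrative values.
    """
    v = np.asarray(point, dtype=float) - np.asarray(eye_pos, dtype=float)
    dist = np.linalg.norm(v)
    if dist == 0.0 or dist > max_range:
        return dist == 0.0  # the apex itself counts as visible
    axis = np.asarray(gaze_dir, dtype=float)
    axis = axis / np.linalg.norm(axis)
    cos_angle = np.dot(v / dist, axis)
    return cos_angle >= np.cos(np.radians(half_angle_deg))

# Example: a point 5 m straight ahead of the gaze direction is in view.
print(in_view_cone([0, 0, 5], eye_pos=[0, 0, 0], gaze_dir=[0, 0, 1]))  # True
```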
  • the relationship between the user's field of view 40 and the display area of the display unit 124 can be variously determined. For example, as shown in FIGS. 2 and 4, the relationship between the two may be determined so that the area of the user's field of view 40 a that intersects the display area of the display unit 124 is greater than or equal to the entire display area. Alternatively, as illustrated in FIGS. 3 and 4, the relationship between the two may be determined so that the area of the user's field of view 40 b that intersects the display area of the display unit 124 is smaller than the entire display area.
  • FIG. 4 is a diagram illustrating an example of the real object 30 and the virtual object 32 that are visually recognized through the display unit 124 in the situation illustrated in FIGS. 2 and 3.
  • the virtual object 32 is a non-transparent object.
  • The virtual object 32 is located between the user and the real objects 30a and 30b. Therefore, as shown in FIG. 4, a part of each of the real object 30a and the real object 30b is hidden by the virtual object 32 when visually recognized by the user.
  • the AR glass 10 can communicate with the server 20 via the communication network 22.
  • the server 20 is a device that stores virtual objects in association with real-world position information.
  • the real world position information may be information including latitude and longitude, or may be floor plan information in a predetermined building.
  • the server 20 receives a virtual object acquisition request from another device such as the AR glass 10, for example, the server 20 transmits a virtual object corresponding to the acquisition request to the other device.
  • the communication network 22 is a wired or wireless transmission path for information transmitted from a device connected to the communication network 22.
  • the communication network 22 may include a public line network such as a telephone line network, the Internet, and a satellite communication network, various LANs including the Ethernet (registered trademark), a wide area network (WAN), and the like.
  • the communication network 22 may include a dedicated network such as an IP-VPN (Internet Protocol-Virtual Private Network).
  • the AR glass 10 according to the present embodiment has been created with the above circumstances taken into consideration.
  • The AR glass 10 according to the present embodiment controls the display by the display unit 124 so that the degree of recognition of a real object changes in the range between the user and the real object, based on the determination result of whether or not the real object included in the user's field of view is a specific real object.
  • For this reason, for example, when a virtual object exists between a specific real object and the user, the AR glass 10 can control the display of the virtual object so that the user can recognize the real object. As a result, safety when displaying virtual objects can be improved.
  • FIG. 5 is a functional block diagram showing a configuration example of the AR glass 10 according to the present embodiment.
  • the AR glass 10 includes a control unit 100, a communication unit 120, a sensor unit 122, a display unit 124, and a storage unit 126.
  • The control unit 100 centrally controls the operation of the AR glass 10 using hardware such as a CPU (Central Processing Unit) 150 and a RAM (Random Access Memory) 154, which are described later and built into the AR glass 10.
  • the control unit 100 includes a virtual object acquisition unit 102, a real object determination unit 104, an overlap determination unit 106, and an output control unit 108.
  • The virtual object acquisition unit 102 acquires the virtual objects to be displayed from the server 20 based on the position information of the AR glass 10 measured by the sensor unit 122 described later. For example, the virtual object acquisition unit 102 first transmits the position information measured by the sensor unit 122 to the server 20, and acquires from the server 20 a plurality of virtual objects located around that position (for example, within a certain range in all directions). The virtual object acquisition unit 102 then extracts, from the plurality of received virtual objects, the virtual objects included in the user's field of view as the virtual objects to be displayed, based on the posture of the AR glass 10 or the user's line-of-sight direction measured by the sensor unit 122. A sketch of this two-step flow is given below.
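  • The following is a minimal sketch of that flow with the server request abstracted away; all names are hypothetical, and the in-view predicate stands in for a posture- or gaze-based check such as the cone test above.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    object_id: str
    position: tuple  # (x, y, z) in world coordinates

def acquire_display_targets(nearby_objects, in_view):
    """Filter the objects received from the server 20 down to display targets.

    `nearby_objects` models the set the server returns for the current
    position; `in_view` is a predicate over world positions.
    """
    return [obj for obj in nearby_objects if in_view(obj.position)]

objs = [VirtualObject("sign", (0, 0, 5)), VirtualObject("ad", (0, 0, -5))]
# Stand-in predicate: anything in front of the user counts as in view.
print([o.object_id for o in acquire_display_targets(objs, lambda p: p[2] > 0)])
# ['sign']
```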
  • Real object determination unit 104 determines whether or not individual real objects included in the user's field of view are set to avoid concealment.
  • The real object that is set to avoid concealment is an example of the first real object in the present disclosure, and the real object that is not set to avoid concealment is an example of the second real object in the present disclosure.
  • concealment avoidance setting conditions may be registered in advance for each type of real object.
  • For example, the real object determination unit 104 first performs object recognition on each real object included in the user's field of view, based on a captured image of the area in front of the user captured by the sensor unit 122. Then, for each real object in the user's field of view, the real object determination unit 104 determines whether or not the real object is set to avoid concealment, based on the concealment avoidance setting condition corresponding to the recognized object type.
  • the concealment avoidance setting condition list may be stored in the storage unit 126 or may be stored in the server 20.
  • Alternatively, the real object determination unit 104 may send the server 20 an inquiry as to whether or not each recognized real object is a concealment avoidance target and, from the answer, specify whether or not each real object within the user's field of view is set to avoid concealment. A minimal sketch of a type-keyed condition list follows.
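  • A condition list of this kind could be modeled as a mapping from recognized object type to a predicate over the object's attributes, as in the following sketch. The types, attributes, and conditions are invented for illustration; the disclosure only requires that some per-type condition be registered in advance.

```python
# Illustrative concealment-avoidance condition list, keyed by object type.
AVOIDANCE_CONDITIONS = {
    "traffic_light": lambda obj: True,                      # always avoided
    "road_sign":     lambda obj: True,                      # always avoided
    "television":    lambda obj: obj.get("power") == "on",  # only while ON
    "person":        lambda obj: obj.get("speaking_to_user", False),
}

def is_concealment_avoidance_target(obj_type, attributes):
    """Return True if the recognized real object is set to avoid concealment."""
    condition = AVOIDANCE_CONDITIONS.get(obj_type)
    return bool(condition and condition(attributes))

print(is_concealment_avoidance_target("television", {"power": "on"}))   # True
print(is_concealment_avoidance_target("television", {"power": "off"}))  # False
```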
  • For example, a public sign such as a traffic light, a road sign, or a construction-site signboard can always be set to avoid concealment. Concealment avoidance can also always be set for a predetermined area, such as a pedestrian crossing (the entire space above it), an intersection (the entire space above it), or an evacuation route in a building such as a leisure facility. The user can thereby pass through more safely or drive a car more safely.
  • The predetermined criteria may include the positional relationship between the real object and the user. For example, when the user is located in front of a traffic light or an advertisement, these real objects are set to avoid concealment; when the user is located to the side of or behind them, they are not set to avoid concealment.
  • the predetermined standard includes a distance between the real object and the user.
  • The predetermined criteria may include the moving direction, the speed, and/or the acceleration of the real object.
  • The predetermined criteria may also include the moving direction, the speed, and/or the acceleration of the user. For example, when the user is moving toward a certain real object and the user's speed is equal to or higher than a predetermined threshold, the real object can be dynamically set to avoid concealment.
  • the predetermined standard may include a recognition result of another person's action. For example, when it is detected that another person is facing the user and speaking, the other person can be set to avoid concealment.
  • the predetermined criteria may include the state of the real object.
  • the predetermined criterion includes the temperature of the real object.
  • the predetermined threshold may be determined for each type of real object.
  • the predetermined standard includes a device state of a real object (for example, an electronic device).
  • For example, a television receiver or a PC (Personal Computer) can be set to avoid concealment only while its power is ON. Further, when it is detected that an electronic device in operation has failed, the electronic device can be set to avoid concealment.
  • the predetermined standard may include the state of sound, light, or smoke from the real object.
  • For example, when a predetermined sensor detects that smoke is being generated from a real object, the real object can be set to avoid concealment. Similarly, when a sound is detected from a real object, such as a clock, the opening or closing of an entrance door, a knock on a door, an entrance chime, a telephone, a kettle, a collision of real objects (such as a fall), the timer of various electronic devices, or a fire alarm, these real objects can be set to avoid concealment.
  • the output control unit 108 can hide the virtual object located between the sound source and the user. Accordingly, when the user looks in the direction of arrival of the sound, the virtual object on the flow line toward the sound generation source is not displayed, so that the user can clearly perceive the sound generation source.
  • The predetermined criteria can also include the presence or absence of a contract, or the billing status. For example, if it is registered in the server 20 that an agreement regarding the display of an advertisement or a product has been exchanged with the AR service operator, the advertisement or product can be set to avoid concealment only during the contract period.
  • The overlap determination unit 106 determines whether or not there is an overlap between a real object determined by the real object determination unit 104 to be set to avoid concealment, among the real objects included in the user's field of view, and a virtual object to be displayed acquired by the virtual object acquisition unit 102. For example, the overlap determination unit 106 first specifies distance information (a depth map) for all virtual objects to be displayed, based on the position and posture of the AR glass 10 (or the detection result of the user's line-of-sight direction).
  • Similarly, the overlap determination unit 106 specifies distance information (a depth map) for all real objects that are set to avoid concealment, based on the position and posture of the AR glass 10 (or the detection result of the user's line-of-sight direction). Then, by comparing the two pieces of distance information, the overlap determination unit 106 determines, for each real object set to avoid concealment, whether or not it overlaps a virtual object to be displayed. Specifically, the overlap determination unit 106 determines, for each real object set to avoid concealment, whether or not a virtual object exists between the real object and the AR glass 10 and, if so, specifies all of the corresponding virtual objects. A sketch of this depth-map comparison is given below.
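  • One way to realize the comparison, assuming both depth maps are rendered from the viewpoint of the AR glass 10 as per-pixel distance arrays, is to flag the pixels where a virtual object is nearer than an avoided real object. This is a sketch of the idea, not the disclosed implementation.

```python
import numpy as np

def overlapping_pixels(real_depth, virtual_depth):
    """Pixels where a virtual object lies between the user and the real object.

    Both inputs are H x W depth maps from the AR glass viewpoint; np.inf
    marks pixels not covered by the respective object.
    """
    real_present = np.isfinite(real_depth)
    virtual_nearer = virtual_depth < real_depth
    return real_present & virtual_nearer

# Toy 2 x 2 example: the virtual object (depth 1.0) hides the real object
# (depth 3.0) in the top-left pixel only.
real = np.array([[3.0, np.inf], [np.inf, np.inf]])
virtual = np.array([[1.0, 1.0], [np.inf, np.inf]])
print(overlapping_pixels(real, virtual))  # [[ True False] [False False]]
```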
  • (Output control unit 108: Control example 1) Based on the determination result by the real object determination unit 104 and the determination result by the overlap determination unit 106, the output control unit 108 controls the display by the display unit 124 so that the degree of recognition of a real object changes in a predetermined range located between the real object and the user. For example, the output control unit 108 controls the display by the display unit 124 so that, among the real objects included in the user's field of view, a real object that is set to avoid concealment has a higher degree of recognition than a real object that is not set to avoid concealment.
  • the predetermined range may be a range that is located between the real object and the user and does not include the real object.
  • the output control unit 108 may hide all or a part of the virtual object (for example, a portion overlapping with a real object set to avoid concealment and its vicinity).
  • Alternatively, the output control unit 108 may make the virtual object translucent, display only the outline of the virtual object (wireframe display), or blink the virtual object at predetermined time intervals. One way to model these display modes is sketched below.
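  • These alternatives (hide, make translucent, draw only the outline, blink) can be treated as discrete display modes attached to a render setting, as in the following hedged sketch; the enum and its fields are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class DisplayMode(Enum):
    DEFAULT = auto()
    HIDDEN = auto()
    TRANSLUCENT = auto()
    WIREFRAME = auto()   # outline-only display
    BLINKING = auto()

@dataclass
class RenderSetting:
    mode: DisplayMode
    alpha: float = 1.0             # opacity used when TRANSLUCENT
    blink_interval_s: float = 0.5  # period used when BLINKING

def reduced_visibility_setting(mode: DisplayMode) -> RenderSetting:
    """Pick a render setting that lowers the virtual object's visibility."""
    if mode is DisplayMode.TRANSLUCENT:
        return RenderSetting(mode, alpha=0.3)
    return RenderSetting(mode)

print(reduced_visibility_setting(DisplayMode.TRANSLUCENT))
```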
  • FIG. 6 is an explanatory diagram showing an example in which the real object 30a is set to avoid concealment.
  • A certain range around the real object 30a can be set as the concealment avoidance area 50.
  • the concealment avoidance area 50 is an area (space) where concealment by a virtual object is avoided.
  • the output control unit 108 hides the virtual object 32 positioned between the concealment avoidance area 50 and the display unit 124.
  • As a result, the entire real object 30a is visible to the user on the display unit 124 without being hidden by the virtual object 32.
  • the output control unit 108 makes the virtual object 32 translucent.
  • In this case too, the entire real object 30a is visible to the user on the display unit 124 without being hidden by the virtual object 32.
  • When changing the display mode, the output control unit 108 may control the display so as to emphasize the change. For example, when changing to displaying only the outline of the virtual object, the output control unit 108 may temporarily and gently flash the vicinity of the outline. Alternatively, when a real object set to avoid concealment is moving, the output control unit 108 may hide a virtual object located on the movement trajectory of the real object while displaying an animation of the virtual object exploding or collapsing. Alternatively, the output control unit 108 may change the display mode of the virtual object while outputting a sound or generating a vibration on the AR glass 10 or another device carried by the user. According to these control examples, the user can be notified in a more emphasized manner that the display mode of the virtual object has changed.
  • Alternatively, the output control unit 108 may display the virtual object shifted from its default display position so that the real object set to avoid concealment does not overlap the virtual object.
  • For example, the output control unit 108 shifts and displays the virtual object so that it does not overlap the real object set to avoid concealment, while keeping the amount of position change as small as possible (see the sketch below).
  • In this case, the output control unit 108 shifts the display position of the virtual object 32 so that the virtual object 32 does not overlap the region where the concealment avoidance area 50 is visually recognized on the display unit 124.
  • As a result, the entire real object 30a is visually recognized by the user without being hidden by the virtual object 32 on the display unit 124.
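  • The "smallest possible shift" can be found by scanning candidate offsets in order of increasing magnitude until the projected virtual object no longer overlaps the concealment avoidance area. The following brute-force sketch works on screen-space rectangles and is purely illustrative; the step size and search radius echo an allowable change range such as the plus or minus 50 cm mentioned below.

```python
from itertools import product

def smallest_shift(obj_rect, avoid_rect, step=0.05, max_offset=0.5):
    """Return the smallest (dx, dy) that moves `obj_rect` off `avoid_rect`.

    Rectangles are (x_min, y_min, x_max, y_max) in screen coordinates.
    """
    def overlaps(r, s):
        return not (r[2] <= s[0] or s[2] <= r[0] or r[3] <= s[1] or s[3] <= r[1])

    def shifted(r, dx, dy):
        return (r[0] + dx, r[1] + dy, r[2] + dx, r[3] + dy)

    n = round(max_offset / step)
    candidates = sorted(
        ((dx * step, dy * step) for dx, dy in product(range(-n, n + 1), repeat=2)),
        key=lambda d: d[0] ** 2 + d[1] ** 2,
    )
    for dx, dy in candidates:
        if not overlaps(shifted(obj_rect, dx, dy), avoid_rect):
            return dx, dy
    return None  # no admissible shift within the allowable range

print(smallest_shift((0.0, 0.0, 0.2, 0.2), (0.1, 0.1, 0.4, 0.4)))  # e.g. (-0.1, 0.0)
```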
  • When the display position of a virtual object is changed, an allowable change range (upper limit) for the position, posture, and size may be set in advance.
  • For example, the allowable change range for the position can be set to -50 cm to +50 cm in each direction, and the allowable change range for the size can be set to 0.4 times or more and 1.0 times or less.
  • The allowable change range may be defined in a world coordinate system, or may be defined in a user coordinate system (for example, a coordinate system based on the AR glass 10).
  • Alternatively, the allowable change range may be defined in a vector space.
  • FIG. 8 is an explanatory diagram showing an example in which the allowable change range 50 of a virtual object is defined in a vector space. For example, an allowable rotation amount and an allowable expansion/contraction value for each axis can be set at each point within the allowable change range 50. A sketch of enforcing such limits follows.
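  • Such limits could be stored per object and enforced whenever a shift or scale is applied; the following sketch uses the plus or minus 50 cm and 0.4x to 1.0x values quoted above as illustrative defaults.

```python
from dataclasses import dataclass

@dataclass
class ChangeAllowance:
    """Per-object upper limits on display changes (illustrative defaults)."""
    max_shift_m: float = 0.5  # +/-50 cm in each direction
    min_scale: float = 0.4    # 0.4 times or more
    max_scale: float = 1.0    # 1.0 times or less

def clamp_change(shift, scale, allowance=ChangeAllowance()):
    """Clamp a proposed (dx, dy, dz) shift and a scale into the allowed range."""
    clamped_shift = tuple(
        max(-allowance.max_shift_m, min(allowance.max_shift_m, s)) for s in shift
    )
    clamped_scale = max(allowance.min_scale, min(allowance.max_scale, scale))
    return clamped_shift, clamped_scale

print(clamp_change((0.7, -0.2, 0.0), 0.3))  # ((0.5, -0.2, 0.0), 0.4)
```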
  • As a modification, the output control unit 108 may predict the future movement of the user in advance and predetermine the time series of the display positions of the virtual objects based on the prediction result. This can reduce the load when updating the display of the virtual objects.
  • (Modification 2) In general, in a crowd, for example, the position of an individual person (that is, a real object) can change from moment to moment. Therefore, as another modification, the output control unit 108 may predetermine the time series of the display positions of the virtual objects (path finding) by predicting the movement of one or more real objects that are set to avoid concealment. This eliminates the need to sequentially recalculate the shifted position of a virtual object each time a surrounding person moves, thereby reducing the load when updating the display of the virtual object.
  • As another modification, the output control unit 108 may change how the virtual object is shifted based on, for example, the predetermined criteria described above. For example, the output control unit 108 may shift a virtual object located between a real object and the display unit 124 at a higher speed, or over a greater distance, as the speed of the real object moving toward the user increases. Alternatively, the output control unit 108 may shift a virtual object located between the real object and the display unit 124 at a higher speed, or over a greater distance, as the temperature of the real object set to avoid concealment increases. According to these control examples, the magnitude of the danger can be conveyed to the user.
  • Alternatively, the output control unit 108 may newly display another virtual object related to the corresponding real object at a position that does not overlap the existing virtual object. For example, when the corresponding real object is a traffic light, the output control unit 108 may newly display a virtual traffic light at a position shifted from the virtual object displayed over the traffic light. Alternatively, the output control unit 108 may newly display information (text or an image) indicating the current lighting color of the traffic light, or may lightly tint the entire display unit 124 with the current lighting color of the traffic light.
  • The output control unit 108 may also dynamically change the display mode of a virtual object located between a real object set to avoid concealment and the display unit 124, for example based on the predetermined criteria described above.
  • the output control unit 108 dynamically changes the display mode of the virtual object located between the real object and the display unit 124 based on the change in the distance between the real object and the display unit 124.
  • For example, the output control unit 108 increases the transparency of the virtual object, or gradually increases the mesh size of the virtual object (that is, gradually turns the virtual object into a wireframe), as the distance between the real object and the display unit 124 decreases.
  • For example, as the distance between a traffic light located in front of the user and the display unit 124 decreases, the output control unit 108 may increase the transparency of the virtual object located between the traffic light and the display unit 124. A sketch of such a distance-to-opacity mapping is given below.
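  • The monotone relation "closer real object, more transparent virtual object" can be captured with a simple linear interpolation; the thresholds in this sketch are invented for illustration.

```python
def alpha_for_distance(distance_m, near=2.0, far=10.0,
                       min_alpha=0.1, max_alpha=1.0):
    """Opacity of an occluding virtual object as the avoided real object nears.

    At `far` meters or more the object stays fully opaque; at `near` meters
    or less it is almost fully transparent; in between, interpolate linearly.
    """
    if distance_m <= near:
        return min_alpha
    if distance_m >= far:
        return max_alpha
    t = (distance_m - near) / (far - near)
    return min_alpha + t * (max_alpha - min_alpha)

for d in (1.0, 6.0, 12.0):
    print(d, alpha_for_distance(d))  # 0.1, then 0.55, then 1.0
```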
  • Alternatively, the output control unit 108 may dynamically change the display mode of a virtual object located between the real object and the display unit 124 according to a change in the surrounding situation. For example, suppose that an evacuation route is set to avoid concealment in a predetermined facility such as a leisure facility. In this case, at the time of evacuation guidance, the output control unit 108 may perform control so that nothing obstructive is displayed on the evacuation route, for example by hiding virtual objects that would become obstacles, or may newly display a virtual object for guidance. Alternatively, the output control unit 108 may always display virtual objects located on the evacuation route translucently.
  • FIG. 9A is a diagram showing a display example of the virtual object 32 at the normal time in the corridor which is an evacuation route. As shown in FIG. 9A, for example, during normal times, the output control unit 108 displays the virtual object 32a located on the evacuation route in a translucent manner.
  • FIG. 9B is a diagram showing a display example of the virtual object 32 at the time of evacuation guidance in the same corridor. As shown in FIG. 9B, at the time of evacuation guidance, the output control unit 108 hides the virtual object 32a shown in FIG. 9A and newly displays the virtual object 32b for guidance. Note that the output control unit 108 may sequentially update the display position, tilt, or display content of the virtual object 32b in accordance with the movement of the user so as to guide evacuation.
  • The output control unit 108 may also change the display mode of one or more virtual objects located in the user's field of view according to the surrounding brightness. For example, when the surroundings are dark (such as at night or in cloudy weather), the output control unit 108 may make the hidden portion of a virtual object located between a real object set to avoid concealment and the user larger than when the surroundings are bright. Alternatively, in this case, the output control unit 108 may additionally make other virtual objects located around that virtual object semi-transparent, or may make all other virtual objects located within the user's field of view semi-transparent.
  • When the real object set to avoid concealment is an advertisement, the output control unit 108 may dynamically control the display mode of a virtual object positioned between the advertisement and the display unit 124 based on the positional relationship between the user and the advertisement. For example, when the user is located in front of the advertisement, the output control unit 108 increases the transparency of the virtual object; when the user is located to the side of or behind the advertisement, the output control unit 108 reduces the transparency of the virtual object. This makes it possible to suppress a decrease in the visibility of the virtual object as much as possible while maintaining the degree of recognition of the advertisement.
  • the output control unit 108 may change the display mode of the virtual object based on the display position of the virtual object on the display unit 124. For example, as the distance between the display position of the virtual object on the display unit 124 and the center of the display unit 124 is smaller, the output control unit 108 may relatively increase the transparency of the virtual object. Alternatively, the output control unit 108 may relatively increase the transparency of the virtual object as the distance between the display position of the virtual object and the user's gaze point on the display unit 124 is smaller.
  • the output control unit 108 can sequentially execute the display control described above every time the position and posture of the AR glass 10 (or the detection result of the user's line-of-sight direction) change.
  • the output control unit 108 can dynamically change the display mode of the virtual object in accordance with the change in the determination result by the real object determination unit 104. For example, when the concealment avoidance setting for a certain real object is switched from OFF to ON, the output control unit 108 displays the display mode of the virtual object positioned between the display unit 124 and the real object (as described above). Is changed from the default display mode. When the setting for avoiding concealment of a certain real object is switched from ON to OFF, the output control unit 108 sets the display mode of the virtual object positioned between the display unit 124 and the real object to the default display mode. Return to.
  • When the user makes an input to restore the display mode of a virtual object after the display mode has been changed from the default, the output control unit 108 can return the display mode of the virtual object to the default display mode. Further, the output control unit 108 may specify, based on history information, a virtual object for which the user has previously permitted a return to the default display mode, and thereafter display the specified virtual object in the default display mode.
  • the communication unit 120 transmits and receives information to and from other devices. For example, the communication unit 120 transmits an acquisition request for a virtual object located around the current position to the server 20 according to the control of the virtual object acquisition unit 102. In addition, the communication unit 120 receives a virtual object from the server 20.
  • the sensor unit 122 may include a positioning device that receives a positioning signal from a positioning satellite such as a GPS (Global Positioning System) and measures the current position.
  • the sensor unit 122 may include a range sensor.
  • The sensor unit 122 includes, for example, a three-axis acceleration sensor, a gyroscope, a magnetic sensor, a camera, a depth sensor, and/or a microphone.
  • the sensor unit 122 measures the speed, acceleration, posture, orientation, or the like of the AR glass 10.
  • the sensor unit 122 captures an image of the eyes of the user wearing the AR glass 10 or captures an image in front of the AR glass 10.
  • the sensor unit 122 can detect an object positioned in front of the user and can detect a distance to the detected object.
  • the display unit 124 displays an image according to the control of the output control unit 108.
  • the display unit 124 projects an image on at least a partial region (projection plane) of each of the left-eye lens and the right-eye lens.
  • the left-eye lens and the right-eye lens can be formed of a transparent material such as resin or glass.
  • the display unit 124 may include a liquid crystal panel and the transmittance of the liquid crystal panel may be controllable. Thereby, the display unit 124 can be controlled to be transparent or translucent.
  • Storage unit 126 stores various data and various software. For example, the storage unit 126 stores a list of concealment avoidance setting conditions for each type of real object.
  • FIG. 10 is a flowchart showing an operation example according to the present embodiment.
  • First, based on the position information measured by the sensor unit 122, the virtual object acquisition unit 102 of the AR glass 10 acquires from the server 20 a plurality of virtual objects located around the current position (for example, within a certain range in all directions, up, down, left, and right).
  • The virtual object acquisition unit 102 then extracts the virtual objects included in the user's field of view from the acquired virtual objects (S101).
  • Next, the real object determination unit 104 determines whether or not a real object set to avoid concealment exists among the real objects included in the user's field of view, based on, for example, the list of concealment avoidance setting conditions for each type of real object stored in the storage unit 126 (S103). If no real object set to avoid concealment exists in the user's field of view (S103: No), the AR glass 10 performs the process of S109 described later.
  • Next, for each of the corresponding real objects, the overlap determination unit 106 determines whether or not at least one of the virtual objects acquired in S101 is located between that real object and the user (S105).
  • When there is no virtual object between any of the corresponding real objects and the user (S105: No), the AR glass 10 performs the process of S109 described later.
  • Otherwise (S105: Yes), the output control unit 108 changes the display mode of each corresponding virtual object so as to decrease its visibility. For example, the output control unit 108 hides the corresponding virtual object, makes it translucent, or shifts its display position so that it does not overlap the corresponding real object (S107).
  • Thereafter, the output control unit 108 causes the display unit 124 to display the virtual objects acquired in S101 (S109). A compact sketch of one pass through S101 to S109 follows.
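  • Gathering the steps, one display-update pass corresponding to S101 to S109 might look like the following sketch; the predicates stand in for the condition-list and depth-map checks sketched earlier, and all data shapes are assumptions.

```python
def update_frame(virtual_objects, real_objects, occludes, is_avoided):
    """One display-update pass corresponding to steps S101 to S109.

    `virtual_objects` are the in-view display targets from S101;
    `real_objects` are the recognized real objects in the user's view;
    `occludes(v, r)` and `is_avoided(r)` are predicates supplied by the
    overlap determination and real object determination steps.
    """
    avoided = [r for r in real_objects if is_avoided(r)]   # S103
    for r in avoided:
        for v in virtual_objects:
            if occludes(v, r):                             # S105
                v["mode"] = "translucent"                  # S107
    return virtual_objects                                 # handed to S109

# Toy run: one virtual object occludes an avoided traffic light.
vs = [{"id": "ad_panel", "mode": "default"}]
rs = [{"id": "traffic_light"}]
print(update_frame(vs, rs,
                   occludes=lambda v, r: True,
                   is_avoided=lambda r: r["id"] == "traffic_light"))
# [{'id': 'ad_panel', 'mode': 'translucent'}]
```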
  • As described above, the AR glass 10 according to the present embodiment determines whether or not a real object included in the user's field of view is a real object set to avoid concealment and, based on the determination result, controls the display by the display unit 124 so that the degree of recognition of the real object changes in the range between the user and the real object. For this reason, the degree of recognition of a real object can be changed adaptively to the real object included in the user's field of view.
  • For example, the display by the display unit 124 is controlled so that a real object that is set to avoid concealment has a higher degree of recognition than a real object that is not set to avoid concealment.
  • As a result, the user can visually recognize a real object set to avoid concealment without it being hidden by a virtual object, or can visually recognize a virtual object that replaces the real object. Since the user can thus recognize the existence of a real object set to avoid concealment, safety when the user acts while wearing the AR glass 10 can be improved.
  • the AR glass 10 includes a CPU 150, a ROM (Read Only Memory) 152, a RAM 154, a bus 156, an interface 158, an input device 160, an output device 162, a storage device 164, and a communication device 166.
  • the CPU 150 functions as an arithmetic processing unit and a control unit, and controls the overall operation in the AR glass 10 according to various programs. In addition, the CPU 150 realizes the function of the control unit 100 in the AR glass 10.
  • the CPU 150 is configured by a processor such as a microprocessor.
  • the ROM 152 stores programs used by the CPU 150 and control data such as calculation parameters.
  • the RAM 154 temporarily stores a program executed by the CPU 150, for example.
  • the bus 156 includes a CPU bus and the like.
  • the bus 156 connects the CPU 150, the ROM 152, and the RAM 154 to each other.
  • the interface 158 connects the input device 160, the output device 162, the storage device 164, and the communication device 166 to the bus 156.
  • The input device 160 includes, for example, input means for the user to input information, such as buttons, switches, levers, and a microphone, and an input control circuit that generates an input signal based on the user's input and outputs the input signal to the CPU 150.
  • the output device 162 includes, for example, a display device such as a projector and an audio output device such as a speaker.
  • the display device may be a liquid crystal display (LCD) device, an organic light emitting diode (OLED) device, or the like.
  • the storage device 164 is a data storage device that functions as the storage unit 126.
  • the storage device 164 includes, for example, a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, or a deletion device that deletes data recorded on the storage medium.
  • the communication device 166 is a communication interface composed of a communication device for connecting to the communication network 22 or the like, for example.
  • the communication device 166 may be a wireless LAN compatible communication device, an LTE (Long Term Evolution) compatible communication device, or a wire communication device that performs wired communication.
  • the communication device 166 functions as the communication unit 120.
  • (Modification 1) In the above-described embodiment, an example was described in which the AR glass 10 (the output control unit 108) changes the display mode of each virtual object individually. However, the present disclosure is not limited to this example; the AR glass 10 may change the display mode of a plurality of virtual objects collectively. As an example, the output control unit 108 may hide, or make semi-transparent, all the virtual objects located between the user and all the real objects set to avoid concealment.
  • the output control unit 108 may change the way of changing the display mode of the virtual object depending on the type of the virtual object. For example, for a virtual object that prompts the user to confirm only once, the output control unit 108 may change the display mode of the virtual object depending on whether or not the user has recognized that the virtual object has been viewed. More specifically, when the user has not yet seen the virtual object, the output control unit 108 displays the virtual object in a default display mode. Further, after it is recognized that the user has seen the virtual object, the output control unit 108 displays the virtual object in a simple display format such as hiding or displaying only the outline.
  • The output control unit 108 may also control the display so that the display mode of the virtual object displayed for each of a plurality of users is the same.
  • The output control unit 108 may further control the display based on the detection of a system error so that the visibility of virtual objects located in the user's field of view is reduced. For example, when a system error occurs, the output control unit 108 hides all virtual objects or makes them semi-transparent.
  • The output control unit 108 may also notify the user by sound or vibration (tactile stimulation) instead of, or in addition to, the display control of the virtual object described above.
  • For example, the output control unit 108 may cause a built-in speaker (not shown) to output a sound indicating the lighting color of a traffic light (for example, "currently red" or "changed to green").
  • Alternatively, a vibration pattern may be registered for each lighting color, and the output control unit 108 may vibrate another device carried by the user (such as a smartphone or a smartwatch), or the AR glass 10 itself, with the vibration pattern corresponding to the current lighting color of the traffic light.
  • the information processing apparatus may be the server 20.
  • For example, the server 20 can control the display of virtual objects on the AR glass 10 by acquiring the position information and posture information of the AR glass 10 (and the detection result of the user's line-of-sight direction) from the AR glass 10.
  • the information processing apparatus is not limited to the server 20, and may be another type of apparatus that can be connected to the communication network 22, such as a smartphone, a tablet terminal, a PC, or a game machine. Alternatively, the information processing apparatus may be a car.
  • In the above description, an example was described in which the display unit in the present disclosure is the display unit 124 of the AR glass 10, but the present disclosure is not limited to this example.
  • For example, the display unit may be a see-through device such as a head-up display (for example, a vehicle windshield) or a desktop transparent display, or it may be a video see-through device such as a video-transmission-type HMD (Head Mounted Display) or a tablet terminal. In the latter case, captured images of the area in front of the user can be sequentially displayed on the corresponding display.
  • the display unit may be a 3D projector.
  • For example, while a sensor worn by the user or a sensor disposed in the environment senses the user's field of view, the 3D projector can realize this function by projection-mapping the virtual object onto a projection target.
  • the projection target may be a flat surface or a three-dimensional object.
  • In the above description, an example was described in which the concealment avoidance target is a real object.
  • However, the present disclosure is not limited to this example; a virtual object may also be set as a concealment avoidance target.
  • a specific type of virtual object may be set in advance as a concealment avoidance target.
  • important notification information from the system, a message display window in a chat service, or a message reception notification screen may be set as a concealment avoidance target.
  • In this case, the AR glass 10 controls the display so that a virtual object set to avoid concealment has a higher degree of recognition than a virtual object not set to avoid concealment.
  • For example, the AR glass 10 may hide, make translucent, or shift the display position of a virtual object that is not set to avoid concealment and that is positioned between a virtual object set to avoid concealment and the user.
  • the priority for each type of virtual object can be registered in the table in advance.
  • In this case, the AR glass 10 may hide, make translucent, or shift the display position of another virtual object that is positioned between a given virtual object and the user and that has a lower priority than that virtual object. According to these display examples, a virtual object of high importance can be visually recognized by the user without being hidden by other virtual objects. A minimal sketch follows.
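  • The per-type priority table could drive the same treatment among virtual objects; in this sketch the types, priorities, and occlusion predicate are all invented for illustration.

```python
# Illustrative priority table (higher value = more important).
PRIORITY = {"system_alert": 100, "chat_window": 80, "advertisement": 10}

def lower_priority_occluders(target, others, occludes):
    """Virtual objects occluding `target` that may be suppressed.

    `occludes(a, b)` reports whether `a` is positioned between the user and
    `b`; the returned objects can be hidden, made translucent, or shifted.
    """
    p = PRIORITY.get(target["type"], 0)
    return [o for o in others
            if PRIORITY.get(o["type"], 0) < p and occludes(o, target)]

alert = {"id": "n1", "type": "system_alert"}
ad = {"id": "a1", "type": "advertisement"}
print(lower_priority_occluders(alert, [ad], occludes=lambda a, b: True))
# [{'id': 'a1', 'type': 'advertisement'}]
```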
  • each step in the operation of the above-described embodiment does not necessarily have to be processed in the order described.
  • the steps may be processed by changing the order as appropriate.
  • Each step may be processed in parallel or individually instead of being processed in time series. Further, some of the described steps may be omitted, or another step may be further added.
  • (1) An information processing apparatus comprising an output control unit that controls display by a display unit so that a user's degree of recognition of a real object changes in a range between the user and the real object, based on a determination result of whether or not the real object included in the user's field of view is a first real object.
  • (2) The information processing apparatus according to (1), wherein the output control unit controls display by the display unit so that the degree of recognition differs between when the real object is determined to be the first real object and when the real object is determined to be a second real object different from the first real object.
  • (3) The information processing apparatus, wherein the output control unit controls display by the display unit so that, when it is determined that the real object is the first real object, the degree of recognition of the real object is greater than when the real object is determined to be the second real object.
  • (4) The information processing apparatus according to (3), wherein the range between the user and the real object is a range having a predetermined shape located between the user and the real object, and wherein, when it is determined that the real object is the first real object and at least a part of the real object is included in the range having the predetermined shape, the output control unit controls display by the display unit so that the degree of recognition of the real object increases.
  • (5) The information processing apparatus according to (3) or (4), wherein whether or not the real object is the first real object is determined based on a positional relationship between the real object and the user.
  • (11) The information processing apparatus according to any one of (3) to (10), wherein the real object is an electronic device, and whether or not the real object is the first real object is determined based on a device state of the real object.
  • (12) The information processing apparatus according to any one of (3) to (11), wherein whether or not the real object is the first real object is determined based on whether or not the real object is a predetermined type of object.
  • (13) The information processing apparatus according to any one of (3) to (12), wherein the output control unit changes a display mode of a first virtual object located in the range between the user and the real object.
  • (14) The information processing apparatus according to (13), wherein, when the real object is determined to be the second real object, the output control unit does not change the display mode of the first virtual object.
  • (15) The information processing apparatus, wherein the output control unit controls display so that a part or all of the first virtual object is hidden.
  • (16) The information processing apparatus according to any one of (13) to (15), wherein the output control unit increases the transparency of the first virtual object.
  • (17) The information processing apparatus according to any one of (13) to (16), wherein the output control unit changes the display position of the first virtual object so that the display position of the first virtual object does not overlap the region where the real object is visually recognized on the display unit.
  • (18) The information processing apparatus according to any one of (13) to (17), wherein the output control unit newly displays a second virtual object related to the real object at a display position different from the display position of the first virtual object.
  • Reference numerals: 10 AR glass, 20 server, 22 communication network, 30 real object, 32 virtual object, 100 control unit, 102 virtual object acquisition unit, 104 real object determination unit, 106 overlap determination unit, 108 output control unit, 120 communication unit, 122 sensor unit, 124 display unit, 126 storage unit.

Abstract

[Problem] To provide an information processing device, information processing method, and program which can change a user's degree of recognition of a real object included in the user's visual field in a manner appropriate to that real object. [Solution] This information processing device is provided with an output control unit which, on the basis of the result of determining whether or not a real object in the user's visual field is a first real object, controls display by a display unit in the area between the user and the real object so as to change the user's degree of recognition of said real object.

Description

Information processing apparatus, information processing method, and program
The present disclosure relates to an information processing apparatus, an information processing method, and a program.
In recent years, various technologies related to augmented reality (AR), which superimposes additional information on the real world and presents it to the user, have been developed (see, for example, Patent Document 1 below).
Techniques for improving visibility when displaying AR content have also been proposed. For example, Patent Document 2 below describes a technique for controlling the display of a display object on a transmissive display so that a user can visually recognize a real object located behind the transmissive display through the transmissive display.
Patent Document 1: JP 2012-155654 A. Patent Document 2: Japanese Patent No. 5830987.
It is desirable for the degree of recognition of a real object included in the user's field of view to differ depending on, for example, the type of the real object. However, Patent Document 2 does not disclose changing the display method of the display object on the transmissive display according to the real object.
Therefore, the present disclosure proposes a new and improved information processing apparatus, information processing method, and program capable of changing the user's degree of recognition of a real object adaptively to the real object included in the user's field of view.
According to the present disclosure, there is provided an information processing apparatus including an output control unit that controls display by a display unit so that the user's degree of recognition of a real object changes in a range between the user and the real object, based on a determination result of whether or not the real object included in the user's field of view is a first real object.
According to the present disclosure, there is also provided an information processing method including a processor controlling display by a display unit so that the user's degree of recognition of a real object changes in a range between the user and the real object, based on a determination result of whether or not the real object included in the user's field of view is a first real object.
According to the present disclosure, there is also provided a program for causing a computer to function as an output control unit that controls display by a display unit so that the user's degree of recognition of a real object changes in a range between the user and the real object, based on a determination result of whether or not the real object included in the user's field of view is a first real object.
As described above, according to the present disclosure, the user's degree of recognition of a real object can be changed adaptively to the real object included in the user's field of view. Note that the effects described here are not necessarily limiting, and any of the effects described in the present disclosure may be exhibited.
FIG. 1 is an explanatory diagram showing a configuration example of an information processing system according to an embodiment of the present disclosure.
FIG. 2 is a schematic diagram showing how a user visually recognizes real objects and a virtual object through the AR glass 10.
FIG. 3 is another schematic diagram showing how the user visually recognizes real objects and a virtual object through the AR glass 10.
FIG. 4 is a diagram showing the real objects and the virtual object visually recognized through the display unit 124 in the situation shown in FIG. 2.
FIG. 5 is a functional block diagram showing a configuration example of the AR glass 10 according to the embodiment.
FIG. 6 is a diagram showing how the user sees through the AR glass 10 when the real object 30a is set for concealment avoidance.
FIG. 7A is a diagram showing an example of the real objects and the virtual object visually recognized through the display unit 124 in the situation shown in FIG. 6.
FIG. 7B is a diagram showing another example of the real objects and the virtual object visually recognized through the display unit 124 in the situation shown in FIG. 6.
FIG. 7C is a diagram showing yet another example of the real objects and the virtual object visually recognized through the display unit 124 in the situation shown in FIG. 6.
FIG. 8 is a diagram showing an example of the position change allowable range set for the virtual object 32.
FIG. 9A is a diagram showing a display example of virtual objects on an evacuation route.
FIG. 9B is a diagram showing another display example of virtual objects on an evacuation route.
FIG. 10 is a flowchart showing an operation example according to the embodiment.
FIG. 11 is an explanatory diagram showing a hardware configuration example of the AR glass 10 according to the embodiment.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in the present specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
In the present specification and the drawings, a plurality of components having substantially the same functional configuration may also be distinguished by appending different letters to the same reference numeral. For example, a plurality of configurations having substantially the same functional configuration are distinguished as the AR glass 10a and the AR glass 10b as needed. However, when there is no particular need to distinguish the individual components, only the same reference numeral is used. For example, when there is no particular need to distinguish the AR glass 10a and the AR glass 10b, they are simply referred to as the AR glass 10.
The "Description of Embodiments" is organized in the following order.
1. Configuration of the information processing system
2. Detailed description of the embodiment
3. Hardware configuration
4. Modifications
<<1. Configuration of the information processing system>>
First, the configuration of an information processing system according to an embodiment of the present disclosure will be described with reference to FIG. 1. As shown in FIG. 1, the information processing system includes an AR glass 10, a server 20, and a communication network 22.
<1-1. AR glass 10>
The AR glass 10 is an example of the information processing apparatus according to the present disclosure. The AR glass 10 is a device that controls the display of virtual objects associated in advance with positions in the real world. For example, based on its position information, the AR glass 10 first acquires from the server 20, via the communication network 22, the virtual objects located around that position (for example, within a fixed range in all directions). Then, based on the posture of the AR glass 10 (or the detection result of the user's gaze direction), the AR glass 10 displays, on the display unit 124 described later, those of the acquired virtual objects that are included in the user's field of view. For example, the AR glass 10 generates a right-eye image and a left-eye image based on the acquired virtual objects, displays the right-eye image on the right-eye display unit 124a, and displays the left-eye image on the left-eye display unit 124b. This allows the user to perceive a virtual stereoscopic image.
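As a rough illustration of the first step of this pipeline, the following is a minimal Python sketch of selecting the virtual objects registered within a fixed range around the measured position. The data structure and function names are hypothetical; the present disclosure does not prescribe any particular implementation.

import math
from dataclasses import dataclass

@dataclass
class VirtualObject:
    x: float  # registered world position of the virtual object
    y: float
    z: float

def objects_around(objects, center, radius):
    # Keep only the virtual objects whose registered position lies
    # within a sphere of the given radius around the measured position.
    return [o for o in objects
            if math.dist((o.x, o.y, o.z), center) <= radius]

# Example: only the first object lies within 50 m of the origin.
world = [VirtualObject(10.0, 2.0, 1.0), VirtualObject(300.0, 0.0, 0.0)]
print(objects_around(world, (0.0, 0.0, 0.0), 50.0))

The objects retained here would then be filtered by the field-of-view test described below before being rendered as the right-eye and left-eye images.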
Here, a virtual object is basically a 3D object, but is not limited to this example and may be a 2D object. In addition, the display unit 124 of the AR glass 10 is configured as a transmissive display.
FIGS. 2 and 3 are schematic diagrams showing how a user wearing the AR glass 10 visually recognizes real objects 30 and a virtual object 32. As shown in FIG. 2 (and FIG. 3), among the plurality of real objects 30 located in the real world, the user can simultaneously view, through the display unit 124, the real objects 30 included in the user's field of view 40 together with the virtual object 32. Note that, in the following, the term "real object" covers not only a single real object but also a predetermined region in the real world (for example, an entire building, an intersection, or a corridor).
The user's field of view 40 may be defined in various ways. For example, the approximate center of the region imaged by a camera provided on the outside of the AR glass 10, that is, on its front side, may be estimated to be the user's field of view 40. Alternatively, the user's gaze direction may be estimated based on eyeball images captured by a camera provided on the inside of the AR glass 10, that is, on its rear side, and a predetermined three-dimensional space corresponding to that gaze direction may be estimated to be the user's field of view 40. The three-dimensional shape of the user's field of view 40 may be determined as appropriate, but is desirably defined as a substantially conical shape.
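Under the substantially conical definition above, deciding whether a point belongs to the field of view reduces to comparing the angle between the gaze axis and the direction to the point against the cone's half-angle. The following Python sketch illustrates this under that assumption; the half-angle of 30° is purely illustrative.

import math

def in_view_cone(apex, gaze_dir, point, half_angle_deg=30.0):
    # True if `point` lies inside a cone whose apex is the user's eye
    # position, whose axis runs along the gaze direction, and whose
    # half-angle is `half_angle_deg`.
    v = tuple(point[i] - apex[i] for i in range(3))
    norm_v = math.sqrt(sum(c * c for c in v))
    norm_g = math.sqrt(sum(c * c for c in gaze_dir))
    if norm_v == 0.0 or norm_g == 0.0:
        return False
    cos_angle = sum(a * b for a, b in zip(v, gaze_dir)) / (norm_v * norm_g)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

print(in_view_cone((0, 0, 0), (0, 0, 1), (0, 0, 10)))  # True: straight ahead
print(in_view_cone((0, 0, 0), (0, 0, 1), (10, 0, 1)))  # False: far off-axis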
The relationship between the user's field of view 40 and the display area of the display unit 124 can also be defined in various ways. For example, as shown in FIGS. 2 and 4, the relationship may be defined such that the area where the user's field of view 40a intersects the display area of the display unit 124 covers the entire display area or more. Alternatively, as shown in FIGS. 3 and 4, the relationship may be defined such that the area where the user's field of view 40b intersects the display area of the display unit 124 is smaller than the entire display area.
FIG. 4 is a diagram showing an example of the real objects 30 and the virtual object 32 visually recognized through the display unit 124 in the situations shown in FIGS. 2 and 3. Note that FIG. 4 assumes that the virtual object 32 is a non-transparent object.
In the examples shown in FIGS. 2 and 3, the virtual object 32 is located between the user and the real objects 30a and 30b. For this reason, as shown in FIG. 4, part of each of the real object 30a and the real object 30b is hidden by the virtual object 32 as seen by the user.
The AR glass 10 can also communicate with the server 20 via the communication network 22.
<1-2. Server 20>
The server 20 is a device that stores virtual objects in association with real-world position information. Here, the real-world position information may be information including latitude and longitude, or may be floor-plan information of a predetermined building. When the server 20 receives a virtual object acquisition request from another device such as the AR glass 10, the server 20 transmits the virtual objects corresponding to the acquisition request to that device.
<1-3. Communication network 22>
The communication network 22 is a wired or wireless transmission path for information transmitted from devices connected to it. For example, the communication network 22 may include public networks such as a telephone network, the Internet, and satellite communication networks, as well as various LANs (Local Area Networks) including Ethernet (registered trademark) and WANs (Wide Area Networks). The communication network 22 may also include a dedicated network such as an IP-VPN (Internet Protocol-Virtual Private Network).
<1-4. Organization of issues>
The configuration of the information processing system according to the present embodiment has been described above. Particularly when the display unit 124 is a high-luminance display, a real object located in the field of view of the user wearing the AR glass 10 can, as described above, be hidden and hard to see behind a virtual object located between that real object and the user. Consequently, various dangers may arise when the user moves around while wearing the AR glass 10. For example, the user may fail to notice a real object approaching them, and the real object may strike the user. Alternatively, the user may fail to notice a stationary real object and collide with it. Or the user may mistake a virtual traffic light for a real one and drive dangerously.
The AR glass 10 according to the present embodiment was created with the above circumstances in mind. The AR glass 10 according to the present embodiment controls display by the display unit 124 such that, based on a result of determining whether a real object included in the user's field of view is a specific real object, the degree to which the user recognizes that real object changes in a range between the user and the real object. For example, when a virtual object exists between a specific real object and the user, the AR glass 10 can control the display of that virtual object so that the user can recognize the real object. As a result, safety when virtual objects are displayed can be improved.
<<2. Detailed description of the embodiment>>
<2-1. Configuration>
The configuration of the information processing system according to the present embodiment has been described above. Next, the configuration of the AR glass 10 according to the present embodiment will be described in detail. FIG. 5 is a functional block diagram showing a configuration example of the AR glass 10 according to the present embodiment. As shown in FIG. 5, the AR glass 10 includes a control unit 100, a communication unit 120, a sensor unit 122, a display unit 124, and a storage unit 126.
{2-1-1. Control unit 100}
The control unit 100 comprehensively controls the operation of the AR glass 10 using hardware built into the AR glass 10, such as a CPU (Central Processing Unit) 150 and a RAM (Random Access Memory) 154, both described later. As shown in FIG. 5, the control unit 100 includes a virtual object acquisition unit 102, a real object determination unit 104, an overlap determination unit 106, and an output control unit 108.
{2-1-2. Virtual object acquisition unit 102}
The virtual object acquisition unit 102 acquires virtual objects to be displayed from the server 20 based on the measurement result of the position information of the AR glass 10 by the sensor unit 122 described later. For example, the virtual object acquisition unit 102 first transmits the position information measured by the sensor unit 122 to the server 20, thereby acquiring from the server 20 the plurality of virtual objects positioned around that position (for example, within a fixed range in all directions). Then, based on the posture of the AR glass 10 or the user's gaze direction measured by the sensor unit 122, the virtual object acquisition unit 102 extracts, from the received virtual objects, those included in the user's field of view as the virtual objects to be displayed.
{2-1-3. Real object determination unit 104}
The real object determination unit 104 determines whether each real object included in the user's field of view is set for concealment avoidance. Here, a real object set for concealment avoidance is an example of the first real object in the present disclosure, and a real object not set for concealment avoidance is an example of the second real object in the present disclosure.
(2-1-3-1. Determination example)
As will be described in detail later, setting conditions for concealment avoidance can be registered in advance for each type of real object, for example. In this case, the real object determination unit 104 first performs object recognition on each real object included in the user's field of view, based on, for example, a captured image of the area in front of the user taken by the sensor unit 122. Then, for each real object in the user's field of view, the real object determination unit 104 determines whether the object is set for concealment avoidance, based on the concealment avoidance setting condition corresponding to the recognized type of the object.
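A minimal sketch of such a per-type lookup follows; the table contents and names are hypothetical examples loosely drawn from the setting examples in the next subsection.

# Hypothetical list of concealment avoidance setting conditions per
# object type. "always" marks types that are unconditionally set for
# concealment avoidance; "dynamic" defers to predetermined criteria.
AVOIDANCE_CONDITIONS = {
    "traffic_light": "always",
    "road_sign":     "always",
    "ball":          "dynamic",   # judged by motion criteria
    "advertisement": "dynamic",   # judged by positional relationship
}

def is_concealment_avoided(object_type, dynamic_check=lambda t: False):
    mode = AVOIDANCE_CONDITIONS.get(object_type)
    if mode is None:
        return False              # unregistered types are never avoided
    if mode == "always":
        return True
    return dynamic_check(object_type)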
Note that the list of concealment avoidance setting conditions may be stored in the storage unit 126 or may be held on the server 20. In the latter case, the real object determination unit 104 transmits to the server 20 an inquiry as to whether each recognized real object is a concealment avoidance target and obtains the answer, thereby identifying whether each real object in the user's field of view is set for concealment avoidance.
(2-1-3-2. Examples of concealment avoidance targets)
Here, the above-mentioned examples of concealment avoidance settings for each type of real object will be described concretely. For example, traffic lights, public signs such as road signs, or construction-site signboards may always be set for concealment avoidance. Likewise, predetermined regions such as a pedestrian crossing (the entire space above it), an intersection (the entire space above it), or an evacuation route in a building such as a leisure facility may always be set for concealment avoidance (on a per-region basis). This allows the user to walk or drive more safely.
Other types of real objects may be set for concealment avoidance dynamically based on predetermined criteria. Here, the predetermined criteria may include the positional relationship between the real object and the user. For example, when the user is located in front of a traffic light or an advertisement, these real objects are set for concealment avoidance; when the user is located to the side of or behind them, they are not.
The predetermined criteria may also include the distance between the real object and the user. In this case, when the distance between the real object and the user is equal to or less than a predetermined distance, the real object can be set for concealment avoidance. Alternatively, the predetermined criteria may include the moving direction, speed, and/or acceleration of the real object. For example, when a real object (such as a ball or a car) is approaching the user and its speed is equal to or greater than a predetermined threshold, the real object can be dynamically set for concealment avoidance. Alternatively, the predetermined criteria may include the user's moving direction, speed, and/or acceleration. For example, when the user is moving toward a certain real object and the user's speed is equal to or greater than a predetermined threshold, that real object can be dynamically set for concealment avoidance.
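As a hedged sketch of such motion-based criteria, the following function marks a real object for concealment avoidance when it is within a threshold distance of the user, or when it is approaching the user at or above a threshold speed; the threshold values are illustrative assumptions.

import math

def dynamic_avoidance(rel_pos, velocity, dist_threshold=3.0, speed_threshold=2.0):
    # rel_pos: object position relative to the user (metres);
    # velocity: object velocity (metres per second).
    distance = math.hypot(*rel_pos)
    if distance <= dist_threshold:
        return True
    # The object is approaching if its velocity has a component
    # directed toward the user (negative dot product with rel_pos).
    closing_speed = -sum(v * p for v, p in zip(velocity, rel_pos)) / distance
    return closing_speed > 0 and math.hypot(*velocity) >= speed_threshold

# A ball 10 m away, moving straight at the user at 5 m/s, is avoided.
print(dynamic_avoidance((10.0, 0.0), (-5.0, 0.0)))  # True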
Alternatively, the predetermined criteria may include a recognition result of another person's behavior. For example, when it is detected that another person is facing the user and speaking, that person can be set for concealment avoidance.
Alternatively, the predetermined criteria may include the state of the real object. For example, the predetermined criteria may include the temperature of the real object. In this case, when a predetermined temperature sensor detects that the temperature of a real object (such as a kettle) has reached or exceeded a predetermined threshold, that real object can be set for concealment avoidance. The predetermined threshold may be defined for each type of real object.
Alternatively, the predetermined criteria may include the device state of a real object (such as an electronic device). For example, a television receiver or the monitor of a PC (Personal Computer) may be set for concealment avoidance only while its power is ON. In addition, when it is detected that an operating electronic device has failed, that electronic device can be set for concealment avoidance.
Alternatively, the predetermined criteria may include the occurrence of sound, light, or smoke from a real object. For example, when a predetermined sensor detects that smoke is being emitted from a real object, that real object can be set for concealment avoidance. Likewise, when a sound is detected, for example a clock, the opening or closing of a door such as a front door, a knock on a door, a doorbell chime, a telephone, a kettle, the collision (or fall) of a real object, the timer of an electronic device, or a fire alarm, these real objects can be set for concealment avoidance. As will be described later, the output control unit 108 can then hide the virtual objects located between the sound source and the user. Accordingly, when the user turns toward the direction from which the sound arrives, no virtual object is displayed on the line of movement toward the sound source, so the user can clearly perceive the source of the sound.
Alternatively, the predetermined criteria may include the presence or absence of a contract or a billing state. For example, when it is registered in the server 20 that a contract concerning the display of an advertisement or a product has been concluded with the operator of the AR service, the advertisement or product can be set for concealment avoidance only during the contract period (or while billing is active).
{2-1-4. Overlap determination unit 106}
The overlap determination unit 106 determines whether the real objects determined by the real object determination unit 104 to be set for concealment avoidance, among the real objects included in the user's field of view, overlap the virtual objects to be displayed that were acquired by the virtual object acquisition unit 102. For example, the overlap determination unit 106 first specifies distance information (a depth map) for all virtual objects to be displayed, based on the position and posture of the AR glass 10 (or the detection result of the user's gaze direction). Next, the overlap determination unit 106 specifies distance information (a depth map) for all real objects set for concealment avoidance, likewise based on the position and posture of the AR glass 10 (or the detection result of the user's gaze direction). Then, by comparing these two pieces of distance information, the overlap determination unit 106 determines, for each real object set for concealment avoidance, whether it overlaps a virtual object to be displayed. Specifically, for each real object set for concealment avoidance, the overlap determination unit 106 determines whether a virtual object exists between that real object and the AR glass 10, and if so, identifies all such virtual objects.
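A minimal sketch of this depth-map comparison follows, with both maps represented as 2D lists of distances from the AR glass 10 and math.inf where nothing is present at a pixel; this representation is an assumption made for illustration.

import math

def find_occluding_pixels(virtual_depth, real_depth):
    # Yield (x, y) wherever a virtual object is rendered in front of a
    # concealment-avoided real object, i.e. its depth is smaller.
    for y, (vrow, rrow) in enumerate(zip(virtual_depth, real_depth)):
        for x, (v, r) in enumerate(zip(vrow, rrow)):
            if v < r < math.inf:
                yield (x, y)

virtual = [[2.0, math.inf], [2.0, 2.0]]   # virtual object about 2 m away
real    = [[5.0, 5.0], [math.inf, 5.0]]   # real object about 5 m away
print(list(find_occluding_pixels(virtual, real)))  # -> [(0, 0), (1, 1)]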
{2-1-5. Output control unit 108}
(2-1-5-1. Control example 1)
Based on the determination result by the real object determination unit 104 and the determination result by the overlap determination unit 106, the output control unit 108 controls display by the display unit 124 such that the degree to which a real object is recognized changes within a predetermined range located between that real object and the user. For example, the output control unit 108 controls display by the display unit 124 such that, among the real objects included in the user's field of view, a real object set for concealment avoidance is recognized to a greater degree within the predetermined range than a real object not set for concealment avoidance. The predetermined range may be a range that is located between the real object and the user and that does not include the real object itself.
‐ Changing the display attributes of the virtual object
As one example, only for those real objects in the user's field of view that are set for concealment avoidance, the output control unit 108 changes the display mode of the virtual object located between the real object and the display unit 124 so that the visibility of that virtual object is reduced. In this case, the output control unit 108 may hide all or part of the virtual object (for example, the part overlapping the real object set for concealment avoidance and its vicinity). Alternatively, the output control unit 108 may render the virtual object translucent, display only its outline (wireframe display), or display the virtual object blinking at predetermined time intervals.
The above functions will now be described in more detail with reference to FIGS. 6 to 7B. FIG. 6 is an explanatory diagram showing an example in which the real object 30a is set for concealment avoidance. As shown in FIG. 6, a fixed range around the real object 30a can be set as a concealment avoidance region 50. The concealment avoidance region 50 is a region (space) in which concealment by virtual objects is avoided.
In the situation shown in FIG. 6, for example, the output control unit 108 hides the virtual object 32 located between the concealment avoidance region 50 and the display unit 124. As a result, as shown in FIG. 7A, the entire real object 30a is visible to the user on the display unit 124 without being hidden by the virtual object 32. Alternatively, the output control unit 108 renders the virtual object 32 translucent. In this case too, as shown in FIG. 7B, the entire real object 30a is visible to the user on the display unit 124 without being hidden by the virtual object 32.
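The display mode changes above might be realized by adjusting per-object rendering attributes, as in the following hypothetical sketch (the attribute names are assumptions, not part of the present disclosure):

from dataclasses import dataclass

@dataclass
class RenderState:
    visible: bool = True      # whether the object is drawn at all
    alpha: float = 1.0        # 1.0 = opaque, smaller = more transparent
    wireframe: bool = False   # draw only the outline

def reduce_visibility(state, mode):
    if mode == "hide":            # FIG. 7A: do not display the object
        state.visible = False
    elif mode == "translucent":   # FIG. 7B: show it semi-transparently
        state.alpha = 0.3
    elif mode == "wireframe":     # show only the outline of the object
        state.wireframe = True
    return state

print(reduce_visibility(RenderState(), "translucent"))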
‐‐ Modification
When changing the display mode of a virtual object, the output control unit 108 may control the display so as to emphasize the change. For example, when switching to displaying only the outline of the virtual object, the output control unit 108 may make the vicinity of the outline glow softly for a moment. Alternatively, when a real object set for concealment avoidance is moving, the output control unit 108 may hide the virtual objects located on the movement path of the real object while displaying an animation in which those virtual objects burst or crumble. Alternatively, the output control unit 108 may change the display mode of the virtual object while causing the AR glass 10 or another device carried by the user to output a sound or generate vibration. According to these control examples, the user can be notified with greater emphasis that the display mode of the virtual object has changed.
‐ Changing the display position of the virtual object
Alternatively, the output control unit 108 may display the virtual object shifted from its default display position so that the virtual object no longer overlaps the real object set for concealment avoidance. For example, the output control unit 108 shifts the virtual object so that it does not overlap the real object set for concealment avoidance and so that the amount of positional change is as small as possible.
In the example shown in FIG. 6, the output control unit 108 shifts the display position of the virtual object 32 so that the virtual object 32 no longer overlaps the region of the display unit 124 in which the concealment avoidance region 50 is visible. As a result, as shown in FIG. 7C, the entire real object 30a is visible to the user on the display unit 124 without being hidden by the virtual object 32.
Note that, for each virtual object, allowable ranges (upper limits) for changes in position, posture, and size may be set in advance. For example, when the default position of a certain virtual object is (x, y, z) = (10 m, 2 m, 1 m), the allowable position change range may be set to between -50 cm and 50 cm in each direction, the allowable rotation range to between -30° and 30° about each axis, and the allowable size range to between 0.4 times and 1.0 times. Setting such allowable ranges prevents the virtual object from being displayed at a position that differs greatly from its default display position.
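A minimal sketch of enforcing these allowable ranges follows, using the numerical limits from the example above; the function names are hypothetical.

def clamp(value, low, high):
    return max(low, min(high, value))

def apply_change_limits(offset_m, rotation_deg, scale):
    # Limit a requested displacement to ±50 cm per axis, a requested
    # rotation to ±30° about each axis, and scaling to 0.4x-1.0x.
    offset_m = tuple(clamp(o, -0.5, 0.5) for o in offset_m)
    rotation_deg = tuple(clamp(r, -30.0, 30.0) for r in rotation_deg)
    scale = clamp(scale, 0.4, 1.0)
    return offset_m, rotation_deg, scale

# A requested 0.8 m shift, 45° rotation, and 1.2x scale are limited
# to 0.5 m, 30°, and 1.0x respectively.
print(apply_change_limits((0.8, 0.0, 0.0), (0.0, 45.0, 0.0), 1.2))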
Here, the above allowable change ranges may be defined in the world coordinate system, or may be defined in a user coordinate system (for example, a coordinate system referenced to the AR glass 10).
Alternatively, the allowable change range may be defined in a vector space. FIG. 8 is an explanatory diagram showing an example in which the allowable change range 50 of a virtual object is defined in a vector space. For example, at each point within the allowable change range 50, the allowable amount of rotation about each axis and the allowable scaling value can be set.
‐‐ Modification 1
As a modification, the output control unit 108 may predict the user's future movement in advance and, based on the prediction result, determine the time series of the display position of the virtual object beforehand. This can reduce the load when the display of the virtual object is updated.
‐‐ Modification 2
In general, for example in a crowd, the positions of individual people (that is, real objects) can change continually. As another modification, the output control unit 108 can therefore determine the time series of the display position of the virtual object in advance by predicting the movement of one or more real objects set for concealment avoidance (path finding). This removes the need to recalculate where to shift the virtual object every time a nearby person moves, thereby reducing the load when the display of the virtual object is updated.
‐‐ Modification 3
As yet another modification, the output control unit 108 may vary how the virtual object is shifted, for example based on the predetermined criteria described above. For example, the greater the speed of a real object moving toward the user, the faster or the farther the output control unit 108 may shift the virtual object located between that real object and the display unit 124. Alternatively, the higher the temperature of a real object set for concealment avoidance, the faster or the farther the output control unit 108 may shift the virtual object located between that real object and the display unit 124. According to these control examples, the magnitude of the danger can be conveyed to the user.
‐ Displaying another virtual object
Alternatively, the output control unit 108 may newly display another virtual object related to the real object in question, at a position that does not overlap the existing virtual object. For example, when the real object in question is a traffic light, the output control unit 108 may newly display a virtual traffic light at a position offset from the virtual object displayed over the real traffic light. Alternatively, the output control unit 108 may newly display information (text or an image) indicating the current lighting color of the traffic light, or may tint the entire display unit 124 lightly with the current lighting color of the traffic light.
(2-1-5-2. Control example 2)
The output control unit 108 may also dynamically change the display mode of the virtual object located between a real object set for concealment avoidance and the display unit 124, for example based on the predetermined criteria described above. For example, the output control unit 108 dynamically changes the display mode of the virtual object located between the real object and the display unit 124 based on changes in the distance between them. More specifically, as the distance between the real object and the display unit 124 decreases, the output control unit 108 may increase the transparency of the virtual object, gradually coarsen the mesh of the virtual object, or gradually turn the virtual object into a wireframe. As one example, in a situation where the user is driving a car, the output control unit 108 may increase the transparency of the virtual object located between a traffic light ahead of the user and the display unit 124 as the distance between the traffic light and the display unit 124 decreases.
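One simple way to realize this distance-dependent control is a linear mapping from distance to transparency, as in the sketch below; the near and far bounds and the linear form are assumptions, not part of the present disclosure.

def transparency_for_distance(distance_m, near=1.0, far=10.0):
    # Returns a transparency in [0, 1]: fully transparent (1.0) at or
    # inside `near`, fully opaque (0.0) at or beyond `far`, and
    # linearly interpolated in between.
    if distance_m <= near:
        return 1.0
    if distance_m >= far:
        return 0.0
    return (far - distance_m) / (far - near)

for d in (12.0, 5.5, 0.5):
    print(d, transparency_for_distance(d))  # 0.0, 0.5, 1.0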
Alternatively, the output control unit 108 may dynamically change the display mode of the virtual object located between the real object and the display unit 124 in accordance with changes in the surrounding situation. For example, suppose that an evacuation route is set for concealment avoidance in a predetermined facility such as a leisure facility. In this case, during evacuation guidance, the output control unit 108 may control the display so that no virtual objects are shown on the evacuation route, for example by hiding virtual objects that would become obstacles, or may newly display virtual objects for guidance. Alternatively, the output control unit 108 may display the virtual objects located on the evacuation route translucently at all times.
FIG. 9A is a diagram showing a display example of the virtual objects 32 in a corridor serving as an evacuation route at normal times. As shown in FIG. 9A, at normal times the output control unit 108 displays, for example, the virtual object 32a located on the evacuation route translucently. FIG. 9B is a diagram showing a display example of the virtual objects 32 in the same corridor during evacuation guidance. As shown in FIG. 9B, during evacuation guidance the output control unit 108 hides the virtual object 32a shown in FIG. 9A and newly displays a virtual object 32b for guidance. Note that the output control unit 108 may continually update the display position, inclination, or display content of the virtual object 32b in accordance with the user's movement so as to guide the evacuation.
Alternatively, the output control unit 108 may change the display mode of one or more virtual objects located in the user's field of view in accordance with the surrounding brightness. For example, when the surroundings are dark (at night, under cloudy skies, and so on), the output control unit 108 may enlarge the hidden portion of the virtual object located between the user and a real object set for concealment avoidance, compared with when the surroundings are bright. In this case, the output control unit 108 may additionally render other virtual objects located around that virtual object translucent, or render all other virtual objects located in the user's field of view translucent.
(2-1-5-3. Control example 3)
When the real object set for concealment avoidance is an advertisement, the output control unit 108 may dynamically control the display mode of the virtual object located between the advertisement and the display unit 124 based on the positional relationship between the user and the advertisement. For example, when the user is located in front of the advertisement, the output control unit 108 increases the transparency of the virtual object. When the user is located to the side of or behind the advertisement, the output control unit 108 lowers the transparency of the virtual object. This suppresses the loss of visibility of the virtual object as much as possible while maintaining the degree to which the advertisement is recognized.
(2-1-5-4. Control example 4)
Alternatively, the output control unit 108 may change the display mode of the virtual object based on its display position on the display unit 124. For example, the smaller the distance between the display position of the virtual object on the display unit 124 and the center of the display unit 124, the greater the output control unit 108 may make the transparency of the virtual object. Alternatively, the smaller the distance between the display position of the virtual object and the user's gaze point on the display unit 124, the greater the output control unit 108 may make the transparency of the virtual object.
(2-1-5-5. Control example 5)
The output control unit 108 can also execute the display control described above each time the position and posture of the AR glass 10 (or the detection result of the user's gaze direction) change.
(2-1-5-6. Control example 6)
The output control unit 108 can also dynamically change the display mode of the virtual object in response to changes in the determination result by the real object determination unit 104. For example, when the concealment avoidance setting of a certain real object switches from OFF to ON, the output control unit 108 changes the display mode of the virtual object located between the display unit 124 and that real object from the default display mode, as described above. Conversely, when the concealment avoidance setting of a certain real object switches from ON to OFF, the output control unit 108 returns the display mode of the virtual object located between the display unit 124 and that real object to the default display mode.
As a modification, when the user performs an input for restoring the display mode of a virtual object after it has been changed from the default display mode, the output control unit 108 can also return the display mode of that virtual object to the default. Furthermore, based on history information, the output control unit 108 may identify virtual objects for which the user has previously permitted restoration to the default display mode, and thereafter display those virtual objects in the default display mode.
{2-1-6. Communication unit 120}
The communication unit 120 transmits and receives information to and from other devices. For example, under the control of the virtual object acquisition unit 102, the communication unit 120 transmits to the server 20 an acquisition request for the virtual objects located around the current position. The communication unit 120 also receives virtual objects from the server 20.
{2-1-7. Sensor unit 122}
The sensor unit 122 may include, for example, a positioning device that receives positioning signals from positioning satellites such as GPS (Global Positioning System) satellites and measures the current position. The sensor unit 122 may also include a range sensor.
In addition, the sensor unit 122 includes, for example, a three-axis acceleration sensor, a gyroscope, a magnetic sensor, a camera, a depth sensor, and/or a microphone. For example, the sensor unit 122 measures the speed, acceleration, posture, or orientation of the AR glass 10. The sensor unit 122 also captures images of the eyes of the user wearing the AR glass 10 and images of the area in front of the AR glass 10. Furthermore, the sensor unit 122 can detect objects located in front of the user and the distance to each detected object.
{2-1-8. Display unit 124}
The display unit 124 displays images under the control of the output control unit 108. For example, the display unit 124 projects images onto at least a partial region (projection plane) of each of a left-eye lens and a right-eye lens. The left-eye lens and the right-eye lens can be formed of a transparent material such as resin or glass.
As a modification, the display unit 124 may include a liquid crystal panel whose transmittance is controllable. This allows the display unit 124 to be controlled into a transparent or translucent state.
{2-1-9. Storage unit 126}
The storage unit 126 stores various data and various software. For example, the storage unit 126 stores the list of concealment avoidance setting conditions for each type of real object.
<2-2. Operation>
The configuration according to the present embodiment has been described above. Next, an example of the operation according to the present embodiment will be described with reference to FIG. 10. FIG. 10 is a flowchart showing an operation example according to the present embodiment.
As shown in FIG. 10, the virtual object acquisition unit 102 of the AR glass 10 first acquires from the server 20 the plurality of virtual objects located around the current position (for example, within a fixed range in all directions), based on the position information measured by the sensor unit 122. Then, based on the measurement result of the posture of the AR glass 10 (or the user's gaze direction) by the sensor unit 122, the virtual object acquisition unit 102 extracts, from the acquired virtual objects, those included in the user's field of view (S101).
Next, based on, for example, the list of concealment avoidance setting conditions for each type of real object stored in the storage unit 126, the real object determination unit 104 determines whether any of the real objects included in the user's field of view is set for concealment avoidance (S103). If no real object set for concealment avoidance exists in the user's field of view (S103: No), the AR glass 10 performs the processing of S109 described later.
On the other hand, if one or more real objects set for concealment avoidance exist in the user's field of view (S103: Yes), the overlap determination unit 106 determines, for each such real object, whether at least one of the virtual objects acquired in S101 is located between that real object and the user (S105).
If no virtual object exists between the user and any of the relevant real objects (S105: No), the AR glass 10 performs the processing of S109 described later.
On the other hand, if one or more virtual objects are located between the user and at least one of the relevant real objects (S105: Yes), the output control unit 108 changes the display mode of those virtual objects so that their visibility is reduced. For example, the output control unit 108 hides the virtual objects, renders them translucent, or shifts their display positions so that they do not overlap the relevant real objects (S107).
Thereafter, the output control unit 108 causes the display unit 124 to display the virtual objects acquired in S101 (S109).
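Putting steps S101 to S109 together, the overall cycle might be condensed into the following self-contained Python sketch; the segment-based occlusion test and the fixed translucency value are simplifications chosen purely for illustration.

import math
from dataclasses import dataclass

@dataclass
class Obj:
    pos: tuple              # (x, y, z) position in the world frame
    avoided: bool = False   # concealment avoidance setting (real objects)
    alpha: float = 1.0      # display transparency (virtual objects)

def lies_between(user_pos, virt, real, tol=0.5):
    # True if the virtual object lies approximately on the segment
    # from the user to the real object, i.e. it would occlude it.
    d_direct = math.dist(user_pos, real.pos)
    d_via = math.dist(user_pos, virt.pos) + math.dist(virt.pos, real.pos)
    return d_via - d_direct <= tol and math.dist(user_pos, virt.pos) < d_direct

def display_update_cycle(user_pos, virtual_objects, real_objects):
    for real in (r for r in real_objects if r.avoided):      # S103
        for virt in virtual_objects:
            if lies_between(user_pos, virt, real):           # S105
                virt.alpha = 0.3                             # S107
    return virtual_objects                                   # S109: display

user = (0.0, 0.0, 0.0)
virts = [Obj(pos=(0.0, 0.0, 5.0))]
reals = [Obj(pos=(0.0, 0.0, 10.0), avoided=True)]
print(display_update_cycle(user, virts, reals))  # alpha becomes 0.3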
<2-3. Effects>
As described above, according to the present embodiment, the AR glass 10 controls display by the display unit 124 such that, based on a result of determining whether a real object included in the user's field of view is a real object set for concealment avoidance, the degree to which that real object is recognized changes in the range between the user and the real object. The degree of recognition of a real object included in the user's field of view can therefore be changed adaptively to that real object.
For example, for the individual real objects included in the user's field of view, display by the display unit 124 is controlled such that a real object set for concealment avoidance is recognized to a greater degree than a real object not set for concealment avoidance. This makes it possible to let the user see a real object set for concealment avoidance without it being hidden by virtual objects, or to let the user see a virtual object that substitutes for that real object. Since the user can thus be made aware of the existence of real objects set for concealment avoidance, safety when the user moves around while wearing the AR glass 10 can be improved.
<<3. Hardware configuration>>
Next, the hardware configuration of the AR glass 10 according to the present embodiment will be described with reference to FIG. 11. As shown in FIG. 11, the AR glass 10 includes a CPU 150, a ROM (Read Only Memory) 152, a RAM 154, a bus 156, an interface 158, an input device 160, an output device 162, a storage device 164, and a communication device 166.
The CPU 150 functions as an arithmetic processing device and a control device, and controls the overall operation of the AR glass 10 according to various programs. The CPU 150 also realizes the functions of the control unit 100 in the AR glass 10. The CPU 150 is configured by a processor such as a microprocessor.
The ROM 152 stores programs used by the CPU 150 and control data such as operation parameters.
 The RAM 154 temporarily stores, for example, programs executed by the CPU 150.
 The bus 156 is constituted by a CPU bus or the like, and interconnects the CPU 150, the ROM 152, and the RAM 154.
 The interface 158 connects the input device 160, the output device 162, the storage device 164, and the communication device 166 to the bus 156.
 The input device 160 includes input means by which the user enters information, such as buttons, switches, levers, and a microphone, and an input control circuit that generates an input signal based on the user's input and outputs it to the CPU 150.
 The output device 162 includes, for example, a display device such as a projector and an audio output device such as a speaker. The display device may be a liquid crystal display (LCD) device, an OLED (Organic Light Emitting Diode) device, or the like.
 The storage device 164 is a device for storing data that functions as the storage unit 126. The storage device 164 includes, for example, a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, and a deletion device that deletes data recorded on the storage medium.
 The communication device 166 is a communication interface constituted by, for example, a communication device for connecting to the communication network 22. The communication device 166 may be a wireless-LAN-compatible communication device, an LTE (Long Term Evolution) compatible communication device, or a wired communication device that performs wired communication. The communication device 166 functions as the communication unit 120.
<<4. Modifications>>
 The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to these examples. It is clear that a person with ordinary knowledge in the technical field to which the present disclosure belongs can conceive of various changes and modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally also belong to the technical scope of the present disclosure.
 <4-1. Modification 1>
 For example, in the embodiment described above, the AR glasses 10 (output control unit 108) change the display mode of each virtual object individually, but the present disclosure is not limited to this example: the AR glasses 10 may change the display modes of a plurality of virtual objects collectively. As one example, the output control unit 108 may collectively hide, or render translucent, all of the virtual objects located between the user and all of the real objects for which concealment avoidance is set, as sketched below.
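 As a sketch of this collective handling, reusing the hypothetical types and the is_between() helper from the sketch above, the affected objects can be collected first and then switched in a single pass:

```python
def hide_collectively(user_pos, real_objects, virtual_objects, alpha=0.0):
    """Assumed helper: hide (alpha=0.0) or make translucent (e.g. alpha=0.3)
    every virtual object blocking any protected real object, all at once."""
    protected = [r for r in real_objects if r.avoid_concealment]
    blocking = [
        v for v in virtual_objects
        if any(is_between(user_pos, r.position, v.position) for r in protected)
    ]
    for v in blocking:          # one collective display-mode change
        v.alpha = alpha
```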
 <4-2. Modification 2>
 As another modification, the output control unit 108 may vary how the display mode is changed depending on the type of virtual object. For example, for a virtual object that prompts the user for confirmation only once, the output control unit 108 may change its display mode depending on whether it has been recognized that the user has looked at the virtual object. More specifically, while the user has not yet looked at the virtual object, the output control unit 108 displays it in its default display mode; once it has been recognized that the user has looked at it, the output control unit 108 hides it or displays it in a simplified form, for example showing only its outline.
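 A sketch of this one-time-confirmation behavior follows; the enum values and the gaze-recognition callback are assumptions of this example rather than names from the specification:

```python
from enum import Enum, auto

class DisplayStyle(Enum):
    DEFAULT = auto()       # full display before the user has looked at it
    OUTLINE_ONLY = auto()  # simplified display after acknowledgement
    HIDDEN = auto()

class ConfirmOnceVirtualObject:
    """Hypothetical virtual object that prompts the user only once."""
    def __init__(self):
        self.seen = False

    def on_gaze_recognized(self):
        # Called when gaze detection recognizes that the user looked at it.
        self.seen = True

    def display_style(self) -> DisplayStyle:
        # Switch to the simplified form (or HIDDEN) once acknowledged.
        return DisplayStyle.OUTLINE_ONLY if self.seen else DisplayStyle.DEFAULT
```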
 <4-3. Modification 3>
 As another modification, when the same virtual object is included in the fields of view of a plurality of users, the output control unit 108 may control the display so that the virtual object is displayed in the same display mode for each of those users.
 <4-4. Modification 4>
 As another modification, the output control unit 108 may further control the display so that the visibility of the virtual objects located in the user's field of view is reduced when the occurrence of a system error is detected. For example, when the AR glasses 10 fail, when a SLAM error occurs on the AR glasses 10, and/or when the server 20 goes down, the output control unit 108 may hide all virtual objects or render them translucent.
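 A sketch of this error-driven fallback; the error labels are assumptions standing in for device failure, SLAM failure, and a server outage:

```python
SYSTEM_ERRORS = {"device_failure", "slam_error", "server_down"}  # assumed labels

def on_system_event(event: str, virtual_objects):
    """Reduce the visibility of every virtual object in view when a system
    error is detected (hidden here; translucency is an alternative)."""
    if event in SYSTEM_ERRORS:
        for v in virtual_objects:
            v.alpha = 0.0
```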
 <4-5. Modification 5>
 As another modification, the output control unit 108 may notify the user by sound or vibration (tactile stimulation) instead of, or in addition to, the display control of virtual objects described above. For example, suppose that a traffic light is in the user's field of view, that concealment avoidance is set for the traffic light, and that a virtual object is located between the traffic light and the user. In this case, the output control unit 108 may cause a built-in speaker (not shown) to output audio indicating the color of the light (for example, "currently red" or "changed to green"). Alternatively, a vibration pattern may be registered for each light color, and the output control unit 108 may vibrate another device carried by the user (such as a smartphone or a smartwatch), or the AR glasses 10 themselves, with the vibration pattern corresponding to the traffic light's current color.
 <4-6. Modification 6>
 In the embodiment described above, the information processing device of the present disclosure is the AR glasses 10, but the present disclosure is not limited to this example. For example, if the server 20 includes all of the components of the control unit 100 described above, the information processing device may be the server 20. In this case, the server 20 can control the display of virtual objects on the AR glasses 10 by acquiring the position information and posture information of the AR glasses 10 (and the detection result of the user's gaze direction) from the AR glasses 10. The information processing device is also not limited to the server 20, and may be another type of device connectable to the communication network 22, such as a smartphone, a tablet terminal, a PC, or a game console; it may even be a car.
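 One way to read this server-side variant is sketched below; the JSON message layout and the handler names are assumptions, since the specification only says that the server 20 acquires the glasses' position and posture (and gaze direction) and controls the display accordingly.

```python
import json

def handle_pose_update(message: str, display_controller) -> str:
    """Server-side sketch: receive the AR glasses' pose over the
    communication network 22 and return display commands for them.
    `display_controller` is an assumed object wrapping the logic of
    the output control unit 108 running on the server 20."""
    msg = json.loads(message)
    pose = (msg["position"], msg["orientation"])   # assumed field names
    gaze = msg.get("gaze_direction")               # optional in this sketch
    commands = display_controller.compute(pose, gaze)
    return json.dumps(commands)
```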
 In the embodiment described above, the display unit of the present disclosure is the display unit 124 of the AR glasses 10, but the present disclosure is not limited to this example. For example, the display unit may be a see-through device such as a head-up display (for example, a vehicle windshield) or a desktop transparent display, or it may be a video see-through device such as a video-transmissive HMD (Head Mounted Display) or a tablet terminal. In the latter case, captured video of the scene in front of the user can be displayed sequentially on the corresponding display.
 Alternatively, the display unit may be a 3D projector. In this case, while a sensor worn by the user or a sensor placed in the environment senses the user's field of view, the 3D projector projection-maps virtual objects onto a projection target, so that the same functions as in the embodiment described above can be realized. The projection target may be a flat surface or a three-dimensional object.
 <4-7. Modification 7>
 In the embodiment described above, the objects for which concealment avoidance is set are real objects, but the present disclosure is not limited to this example; concealment avoidance may also be set for virtual objects. For example, specific types of virtual objects may be set in advance as concealment-avoidance targets. As examples, important notifications from the system, the message display window of a chat service, or a message reception notification screen may be set as concealment-avoidance targets. In this case, the AR glasses 10 control the display so that a virtual object for which concealment avoidance is set is recognized to a greater degree than virtual objects for which it is not. For example, the AR glasses 10 may hide, render translucent, or shift the display position of a virtual object without concealment avoidance that is located between the user and a virtual object for which concealment avoidance is set.
 Alternatively, a priority for each type of virtual object may be registered in a table in advance. In this case, the AR glasses 10 may hide, render translucent, or shift the display position of another virtual object that is located between a given virtual object and the user and has a lower priority than that virtual object, as sketched below. With these display examples, a virtual object of high importance can be seen by the user without being concealed by other virtual objects.
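 A sketch of the priority-table variant, again reusing is_between() from the first sketch; the kind attribute on virtual objects and the table entries are assumptions introduced for illustration:

```python
# Assumed priority table: a larger value means more important.
PRIORITY = {
    "system_notification": 3,
    "chat_window": 2,
    "decoration": 1,
}

def resolve_virtual_occlusion(user_pos, virtual_objects):
    """Fade any virtual object that sits between the user and another
    virtual object of higher priority (hiding it or shifting its
    display position are alternatives)."""
    for high in virtual_objects:
        for low in virtual_objects:
            if low is high or PRIORITY[low.kind] >= PRIORITY[high.kind]:
                continue
            if is_between(user_pos, high.position, low.position):
                low.alpha = 0.3
```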
 <4-8. Modification 8>
 The steps in the operation of the embodiment described above do not necessarily have to be processed in the order described. For example, the steps may be processed in a suitably changed order. The steps may also be processed partly in parallel or individually instead of in time series. Some of the described steps may be omitted, and further steps may be added.
 According to the embodiment described above, it is also possible to provide a computer program for causing hardware such as the CPU 150, the ROM 152, and the RAM 154 to exhibit functions equivalent to each configuration of the AR glasses 10 according to the embodiment. A recording medium on which the computer program is recorded is also provided.
 The effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure can exhibit other effects that are apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
 The following configurations also belong to the technical scope of the present disclosure.
(1)
 An information processing device comprising: an output control unit that controls display by a display unit, based on a result of determining whether a real object included in a user's field of view is a first real object, so that the degree to which the user recognizes the real object changes in a range between the user and the real object.
(2)
 The information processing device according to (1), wherein the output control unit controls the display by the display unit so that the degree of recognition of the real object differs between a case where the real object is determined to be the first real object and a case where the real object is determined to be a second real object different from the first real object.
(3)
 The information processing device according to (2), wherein, when the real object is determined to be the first real object, the output control unit controls the display by the display unit so that the degree of recognition of the real object is greater than when the real object is determined to be the second real object.
(4)
 The information processing device according to (3), wherein the range between the user and the real object is a range having a predetermined shape located between the user and the real object, and, when the real object is determined to be the first real object and at least part of the real object is included in the range having the predetermined shape, the output control unit controls the display by the display unit so that the degree of recognition of the real object is increased.
(5)
 The information processing device according to (3) or (4), wherein whether the real object is the first real object is determined based on a positional relationship between the real object and the user.
(6)
 The information processing device according to (5), wherein whether the real object is the first real object is determined based on a distance between the real object and the user.
(7)
 The information processing device according to (5) or (6), wherein whether the real object is the first real object is determined based on an orientation of the real object relative to the user.
(8)
 The information processing device according to any one of (3) to (7), wherein whether the real object is the first real object is determined based on a speed or an acceleration of the real object.
(9)
 The information processing device according to any one of (3) to (8), wherein whether the real object is the first real object is determined based on a temperature of the real object.
(10)
 The information processing device according to any one of (3) to (9), wherein whether the real object is the first real object is determined based on a state of sound or light emission from the real object.
(11)
 The information processing device according to any one of (3) to (10), wherein the real object is an electronic device, and whether the real object is the first real object is determined based on a device state of the real object.
(12)
 The information processing device according to any one of (3) to (11), wherein whether the real object is the first real object is determined based on whether the real object is a predetermined type of object.
(13)
 The information processing device according to any one of (3) to (12), wherein, when the real object is determined to be the first real object, the output control unit changes a display mode of a first virtual object located in the range between the user and the real object.
(14)
 The information processing device according to (13), wherein, when the real object is determined to be the second real object, the output control unit does not change the display mode of the first virtual object.
(15)
 The information processing device according to (13) or (14), wherein, when the real object is determined to be the first real object, the output control unit controls the display so that part or all of the first virtual object is hidden.
(16)
 The information processing device according to any one of (13) to (15), wherein, when the real object is determined to be the first real object, the output control unit increases the transparency of the first virtual object.
(17)
 The information processing device according to any one of (13) to (16), wherein, when the real object is determined to be the first real object, the output control unit changes the display position of the first virtual object so that it does not overlap the region of the display unit in which the real object is visible.
(18)
 The information processing device according to any one of (13) to (17), wherein, when the real object is determined to be the first real object, the output control unit newly displays a second virtual object related to the real object at a display position different from the display position of the first virtual object.
(19)
 An information processing method comprising: a processor controlling display by a display unit, based on a result of determining whether a real object included in a user's field of view is a first real object, so that the degree to which the user recognizes the real object changes in a range between the user and the real object.
(20)
 A program for causing a computer to function as: an output control unit that controls display by a display unit, based on a result of determining whether a real object included in a user's field of view is a first real object, so that the degree to which the user recognizes the real object changes in a range between the user and the real object.
10  AR glasses
20  Server
22  Communication network
30  Real object
32  Virtual object
100 Control unit
102 Virtual object acquisition unit
104 Real object determination unit
106 Overlap determination unit
108 Output control unit
120 Communication unit
122 Sensor unit
124 Display unit
126 Storage unit

Claims (20)

  1.  An information processing device comprising: an output control unit that controls display by a display unit, based on a result of determining whether a real object included in a user's field of view is a first real object, so that the degree to which the user recognizes the real object changes in a range between the user and the real object.
  2.  The information processing device according to claim 1, wherein the output control unit controls the display by the display unit so that the degree of recognition of the real object differs between a case where the real object is determined to be the first real object and a case where the real object is determined to be a second real object different from the first real object.
  3.  The information processing device according to claim 2, wherein, when the real object is determined to be the first real object, the output control unit controls the display by the display unit so that the degree of recognition of the real object is greater than when the real object is determined to be the second real object.
  4.  The information processing device according to claim 3, wherein the range between the user and the real object is a range having a predetermined shape located between the user and the real object, and, when the real object is determined to be the first real object and at least part of the real object is included in the range having the predetermined shape, the output control unit controls the display by the display unit so that the degree of recognition of the real object is increased.
  5.  The information processing device according to claim 3, wherein whether the real object is the first real object is determined based on a positional relationship between the real object and the user.
  6.  The information processing device according to claim 5, wherein whether the real object is the first real object is determined based on a distance between the real object and the user.
  7.  The information processing device according to claim 5, wherein whether the real object is the first real object is determined based on an orientation of the real object relative to the user.
  8.  The information processing device according to claim 3, wherein whether the real object is the first real object is determined based on a speed or an acceleration of the real object.
  9.  The information processing device according to claim 3, wherein whether the real object is the first real object is determined based on a temperature of the real object.
  10.  The information processing device according to claim 3, wherein whether the real object is the first real object is determined based on a state of sound or light emission from the real object.
  11.  The information processing device according to claim 3, wherein the real object is an electronic device, and whether the real object is the first real object is determined based on a device state of the real object.
  12.  The information processing device according to claim 3, wherein whether the real object is the first real object is determined based on whether the real object is a predetermined type of object.
  13.  The information processing device according to claim 3, wherein, when the real object is determined to be the first real object, the output control unit changes a display mode of a first virtual object located in the range between the user and the real object.
  14.  The information processing device according to claim 13, wherein, when the real object is determined to be the second real object, the output control unit does not change the display mode of the first virtual object.
  15.  The information processing device according to claim 13, wherein, when the real object is determined to be the first real object, the output control unit controls the display so that part or all of the first virtual object is hidden.
  16.  The information processing device according to claim 13, wherein, when the real object is determined to be the first real object, the output control unit increases the transparency of the first virtual object.
  17.  The information processing device according to claim 13, wherein, when the real object is determined to be the first real object, the output control unit changes the display position of the first virtual object so that it does not overlap the region of the display unit in which the real object is visible.
  18.  The information processing device according to claim 13, wherein, when the real object is determined to be the first real object, the output control unit newly displays a second virtual object related to the real object at a display position different from the display position of the first virtual object.
  19.  An information processing method comprising: a processor controlling display by a display unit, based on a result of determining whether a real object included in a user's field of view is a first real object, so that the degree to which the user recognizes the real object changes in a range between the user and the real object.
  20.  A program for causing a computer to function as: an output control unit that controls display by a display unit, based on a result of determining whether a real object included in a user's field of view is a first real object, so that the degree to which the user recognizes the real object changes in a range between the user and the real object.
PCT/JP2017/013655 2016-07-04 2017-03-31 Information processing device, information processing method, and program WO2018008210A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-132696 2016-07-04
JP2016132696A JP2018005005A (en) 2016-07-04 2016-07-04 Information processing device, information processing method, and program

Publications (1)

Publication Number Publication Date
WO2018008210A1 true WO2018008210A1 (en) 2018-01-11

Family

ID=60912445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/013655 WO2018008210A1 (en) 2016-07-04 2017-03-31 Information processing device, information processing method, and program

Country Status (2)

Country Link
JP (1) JP2018005005A (en)
WO (1) WO2018008210A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311754A (en) * 2018-12-12 2020-06-19 联想(新加坡)私人有限公司 Method, information processing apparatus, and product for augmented reality content exclusion
CN113411227A (en) * 2021-05-07 2021-09-17 上海纽盾科技股份有限公司 AR (augmented reality) -assisted network equipment testing method and device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6976186B2 (en) * 2018-02-01 2021-12-08 Kddi株式会社 Terminal devices and programs
KR102481333B1 (en) 2018-05-08 2022-12-23 그리 가부시키가이샤 A moving image distribution system, a moving image distribution method, and a moving image distribution program for distributing a moving image including animation of a character object generated based on the movement of an actor.
JP7321787B2 (en) * 2019-06-19 2023-08-07 日産自動車株式会社 Information processing device and information processing method
JP7001719B2 (en) * 2020-01-29 2022-02-04 グリー株式会社 Computer programs, server devices, terminal devices, and methods
JP6968326B1 (en) * 2021-04-16 2021-11-17 ティフォン株式会社 Display device, display method and its display program

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09330489A (en) * 1996-06-07 1997-12-22 Hitachi Ltd Equipment monitoring method and its device
JP2005037181A (en) * 2003-07-17 2005-02-10 Pioneer Electronic Corp Navigation device, server, navigation system, and navigation method
JP2006171302A (en) * 2004-12-15 2006-06-29 Konica Minolta Photo Imaging Inc Video display device and information providing system
JP2008083289A (en) * 2006-09-27 2008-04-10 Sony Corp Imaging display apparatus, and imaging display method
JP2015204615A (en) * 2014-04-11 2015-11-16 三菱電機株式会社 Method and system for interacting between equipment and moving device
US20160041624A1 (en) * 2013-04-25 2016-02-11 Bayerische Motoren Werke Aktiengesellschaft Method for Interacting with an Object Displayed on Data Eyeglasses
JP2016045814A (en) * 2014-08-25 2016-04-04 泰章 岩井 Virtual reality service providing system and virtual reality service providing method

Also Published As

Publication number Publication date
JP2018005005A (en) 2018-01-11

Similar Documents

Publication Publication Date Title
WO2018008210A1 (en) Information processing device, information processing method, and program
US11386626B2 (en) Information processing apparatus, information processing method, and program
US10818088B2 (en) Virtual barrier objects
US10650600B2 (en) Virtual path display
CN103975268B (en) Wearable computer with the response of neighbouring object
CN107015638B (en) Method and apparatus for alerting a head mounted display user
EP2857886B1 (en) Display control apparatus, computer-implemented method, storage medium, and projection apparatus
US20190377538A1 (en) Information Presentation Through Ambient Sounds
TW202113428A (en) Systems and methods for generating dynamic obstacle collision warnings for head-mounted displays
JPWO2019176577A1 (en) Information processing equipment, information processing methods, and recording media
US20190227312A1 (en) Systems and Methods for Collision Avoidance in Virtual Environments
US20190139307A1 (en) Modifying a Simulated Reality Display Based on Object Detection
US10636199B2 (en) Displaying and interacting with scanned environment geometry in virtual reality
WO2017169273A1 (en) Information processing device, information processing method, and program
WO2019244670A1 (en) Information processing device, information processing method, and program
KR20160009879A (en) Wearable display device and method for controlling the same
WO2018198503A1 (en) Information processing device, information processing method, and program
CN112105983A (en) Enhanced visual ability
EP3438938B1 (en) Information processing device, information processing method, and program
US11004273B2 (en) Information processing device and information processing method
JP7332823B1 (en) program
WO2018008208A1 (en) Information processing device, information processing method, and program
JP2023549842A (en) Locating controllable devices using wearable devices
CN117716253A (en) Mapping networking devices
CN115698923A (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17823818

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17823818

Country of ref document: EP

Kind code of ref document: A1