WO2022091589A1 - Information processing device, information processing method, and program

Information processing device, information processing method, and program

Info

Publication number
WO2022091589A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information processing
display
content
control unit
Prior art date
Application number
PCT/JP2021/032984
Other languages
English (en)
Japanese (ja)
Inventor
陽方 川名
茜 近藤
保乃花 尾崎
Original Assignee
ソニーグループ株式会社
Priority date
Filing date
Publication date
Application filed by ソニーグループ株式会社
Publication of WO2022091589A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Definitions

  • This disclosure relates to information processing devices, information processing methods, and programs.
  • Patent Document 1 below discloses a technique for encouraging a user to take an action to improve the projection environment.
  • For example, an appropriate projection location is determined from a plurality of candidates, and actions are elicited from the user, such as guiding the user's line of sight to the projection location and having the user remove obstacles placed at the projection location.
  • However, in Patent Document 1, when urging the user to take a predetermined action, a one-sided instruction is given explicitly, which may make the user feel coerced.
  • In view of this, the present disclosure proposes an information processing device including a control unit that performs display control that implicitly guides, by a display device, the user to a specific viewing place in the real space, based on the position of the display area of an image recognized from sensing data of the real space and the position of the user.
  • Also proposed is an information processing method in which a processor performs display control that implicitly guides, by a display device, the user to a specific viewing place in the real space, based on the position of the display area of the image recognized from the sensing data of the real space and the position of the user.
  • Further proposed is a program that causes a computer to function as a control unit that performs display control that implicitly guides, by a display device, the user to a specific viewing place in the real space, based on the position of the display area of the image recognized from the sensing data of the real space and the position of the user.
  • FIG. 1 is a diagram illustrating an outline of an information processing system according to an embodiment of the present disclosure. As shown in FIG. 1, the information processing system according to the present embodiment includes a projector 210, a camera 310, and an information processing device 100.
  • the projector 210 is a display device that projects (displays) an image at an arbitrary place in real space.
  • the projector 210 projects an image on an arbitrary place such as a floor, a wall, a desk, a table, or a sofa in a conference room or a living space.
  • the projector 210 is an example of the output device 200 (FIG. 3).
  • the projector 210 projects an image in the real space included in the projection angle of view.
  • the image to be projected is output from the information processing apparatus 100.
  • the projected angle of view means a projectable range, and is also referred to as a "projection area" in the present specification.
  • the projection area is an example of a display area.
  • the projection area is defined by the position of the projector 210, the projection direction, and the angle of the projectable range about the projection direction as the central axis.
  • the image projected by the projector 210 is also referred to as a projected image.
  • the projected image is an example of a display image.
  • the projection area 211 is on the floor surface in the real space.
  • the present embodiment is not limited to this, and the projection area 211 may be provided on the wall or ceiling of the real space. Further, the projection area 211 may be provided in a plurality of places such as a floor surface and a wall (the range of the projection area 211 may be a width including a plurality of places such as a floor surface and a wall).
  • the projector 210 may have a mechanism for driving in any direction (for example, a pan / tilt mechanism). Further, a plurality of projectors 210 may be provided in the real space. With the plurality of projectors 210, it is possible to set a wider range as the projection area 211.
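  • As a rough illustration of how such a floor projection area could be derived from the projector pose and the projectable angle, the following Python sketch intersects the edge rays of the projectable range with the floor plane. The function name, the pinhole-style geometry, and the numeric values are assumptions made for this example, not details taken from the embodiment.

```python
import math

def projection_footprint(pos, pitch_deg, half_fov_h_deg, half_fov_v_deg):
    """Approximate the floor footprint (projection area) of a projector.

    pos         -- projector position (x, y, z), z being the height above the floor
    pitch_deg   -- downward tilt of the optical axis from the horizontal
    half_fov_*  -- half angles of the projectable range (horizontal / vertical)

    Returns the corner points of the footprint on the floor plane z = 0,
    using a simplified pinhole-style model for illustration only.
    """
    corners = []
    for sh in (-1, 1):                 # left / right edge of the projectable range
        for sv in (-1, 1):             # near / far edge of the projectable range
            pitch = math.radians(pitch_deg + sv * half_fov_v_deg)
            yaw = math.radians(sh * half_fov_h_deg)
            # Ray direction: forward along +x, swung sideways by yaw, tilted down by pitch.
            dx = math.cos(pitch) * math.cos(yaw)
            dy = math.cos(pitch) * math.sin(yaw)
            dz = -math.sin(pitch)
            if dz >= 0:                # this ray never reaches the floor
                continue
            t = -pos[2] / dz           # intersection with the plane z = 0
            corners.append((pos[0] + t * dx, pos[1] + t * dy, 0.0))
    return corners

# Example: projector mounted 2.5 m above the floor, tilted 40 degrees downward.
for corner in projection_footprint((0.0, 0.0, 2.5), 40.0, 20.0, 15.0):
    print(tuple(round(c, 2) for c in corner))
```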
  • the camera 310 is an imaging device that captures images in real space.
  • The camera 310 is, for example, an RGB (red, green, blue) camera, an IR (infrared) camera, or the like, and has a lens system, a drive system, and an image pickup element to capture images (still images or moving images).
  • the camera 310 is an example of the sensor 300 (FIG. 3).
  • the camera 310 captures the real space included in the imaging angle of view 311.
  • the image pickup angle of view 311 means an image pickup range, and is defined by the installation position of the camera 310, the image pickup direction, and the angle of the image pickup range centered on the image pickup direction.
  • the image captured by the camera 310 is also referred to as a captured image.
  • The imaging angle of view 311 of the camera 310 according to the present embodiment may be a range including at least the projection area 211. Further, the imaging angle of view 311 may be a range including the entire real space.
  • the camera 310 may have a mechanism for driving in any direction (for example, a pan / tilt mechanism).
  • the camera 310 may be fixed to the projector 210 so that the imaging direction of the camera 310 is aligned with the projection direction of the projector 210. Further, the camera 310 may be provided at a position different from that of the projector 210. Further, a plurality of cameras 310 may be provided in the real space. With the plurality of cameras 310, it is possible to set the imaging angle of view 311 in a wider range. The captured image captured by the camera 310 is output to the information processing apparatus 100.
  • The information processing apparatus 100 communicates with the projector 210 and the camera 310 arranged in the real space, controls the projection of an image into the real space by the projector 210, and acquires the image captured by the camera 310.
  • the information processing apparatus 100 can perform real space recognition (spatial recognition) based on the captured image acquired from the camera 310.
  • the information processing apparatus 100 communicates with the speaker and controls the audio output from the speaker.
  • the speaker is an example of the output device 200 (FIG. 3).
  • the speaker may be a directional speaker.
  • The speaker may be a unit integrated with the projector 210, may be arranged in a place different from the projector 210 in the real space, or may be provided in a personal terminal such as a smartphone or a mobile phone.
  • FIG. 2 is a diagram illustrating a place suitable for viewing a projected image.
  • the display is controlled so that the orientation of the projected image faces the user's body.
  • Here, the shadow of the user appears on the extension line connecting the projector 210 (light source) and the user. Therefore, for example, when the user is located between the projector 210 and the projection area 211 (area E1) as shown in the upper left of FIG. 2, a shadow is cast in the same direction as the user's line of sight, which hinders viewing of the projected image 500a.
  • On the other hand, when the projection area 211 is located between the projector 210 and the user, the user's shadow is cast in a direction different from the user's line-of-sight direction (outside the visual field area) and does not interfere with viewing of the projected image 500b. However, since the user's position is in area E2 on the lateral side of the projection area 211, the projected image 500b may be reduced and displayed, and visibility may be lowered.
  • Further, depending on the user's position and body orientation, even at a position where no shadow is cast in the line-of-sight direction, the wide size of the projection area 211 may not be fully utilized and the projected image 500d may be reduced and displayed, which may lower visibility. Especially when projecting an image in a living space, it is assumed that the user walks to various positions.
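  • The situations of FIG. 2 can be summarized with a small geometric test: the user's shadow extends from the user along the extension line from the projector (light source) through the user, and viewing is hindered when that direction roughly coincides with the user's line-of-sight direction. The sketch below expresses this test in Python; the function name and the angular threshold are illustrative assumptions.

```python
import math

def shadow_in_view(projector_xy, user_xy, gaze_xy, threshold_deg=60.0):
    """Return True if the user's shadow would fall inside the viewing direction.

    The shadow extends from the user along the direction projector -> user
    (projected onto the floor). If the angle between that direction and the
    gaze direction is small, the shadow overlaps what the user is looking at.
    """
    sx, sy = user_xy[0] - projector_xy[0], user_xy[1] - projector_xy[1]
    gx, gy = gaze_xy
    dot = sx * gx + sy * gy
    norm = math.hypot(sx, sy) * math.hypot(gx, gy)
    if norm == 0.0:
        return False
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle < threshold_deg

# Area E1 in FIG. 2: user between projector and projection area, looking away
# from the projector -- the shadow lies in the gaze direction.
print(shadow_in_view(projector_xy=(0, 0), user_xy=(1, 0), gaze_xy=(1, 0)))   # True
# Area E3: user on the far side of the projection area, looking back toward it.
print(shadow_in_view(projector_xy=(0, 0), user_xy=(4, 0), gaze_xy=(-1, 0)))  # False
```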
  • FIG. 3 is a diagram showing an example of the configuration of the information processing system according to the present embodiment.
  • the information processing system according to the present embodiment includes an information processing device 100, an output device 200, and a sensor 300.
  • the output device 200 is a device that presents the information received from the information processing device 100 to the user in the real space.
  • the output device 200 is realized by, for example, a projector 210.
  • the projector 210 is an example of a display device.
  • Although not shown in FIG. 3, the output device 200 may further include devices capable of presenting some kind of information to the user, such as an audio output device (speaker), a lighting device, a vibration device, a wind output device, an air conditioning device, and various actuators. Further, a plurality of output devices 200 may exist in the space.
  • the projector 210 may be, for example, a device having a drive mechanism and capable of projecting in any direction. By having such a mechanism, it is possible to project an image not only at one place but also at various places.
  • the projector 210 may include a component capable of output other than the display.
  • the projector 210 may be combined with a sound output device such as a speaker.
  • the speaker may be a unidirectional speaker capable of forming directivity in a single direction. The unidirectional speaker outputs sound in the direction in which the user is, for example.
  • Although the projector 210 is given as an example of the display device, the display device included in the output device 200 may be a display device other than the projector 210.
  • the display device may be a display provided on a floor, a wall, a table, or the like in a real space.
  • the display device may be a touch panel display capable of detecting a user's touch operation.
  • the display device may be a TV device arranged in a real space.
  • the display device may be a device worn by the user.
  • the device worn by the user may be, for example, a glasses-type device provided with a transmissive display or a wearable device such as an HMD (Head Mounted Display) worn on the user's head.
  • the display device may be a personal terminal such as a smartphone, a tablet terminal, a mobile phone, or a PC (personal computer).
  • the sensor 300 is provided in the real space, detects information about the environment and the user from the real space, and outputs the detected information (sensing data) to the information processing apparatus 100.
  • For example, the sensor 300 senses environmental information such as three-dimensional information of the real space (the shape of the real space, the arrangement and shape of real objects such as furniture, etc.), information on the projection area (size and location), the state of the projection surface (roughness, material, color, etc.), the illuminance environment, and the sound volume.
  • the sensor 300 senses information about the user such as the presence / absence of the user, the number of people, the position, the posture, the line-of-sight direction, the direction of the face, and the gesture of the fingers.
  • the sensor 300 may be single or plural. Further, the sensor 300 may be provided in the output device 200.
  • The sensor 300 is realized by, for example, a camera 310 and a distance measuring sensor 320.
  • the camera 310 and the distance measuring sensor 320 may be installed on a ceiling, a wall, a table, or the like in a real space, or may be worn by a user. Further, a plurality of cameras 310 and a plurality of distance measuring sensors 320 may be provided.
  • The camera 310 captures one or more users in the space and the projection area 211, and outputs the captured image to the information processing apparatus 100.
  • The camera 310 may be single or plural.
  • a camera for environment recognition, a camera for user recognition, and a camera for shooting a projection area may be arranged in a real space.
  • the imaging wavelength is not limited to the visible light region, and may include ultraviolet rays and infrared rays, or may be limited to a specific wavelength region.
  • The camera 310 may be an RGB-IR camera in which an RGB camera and an IR camera are combined.
  • the information processing apparatus 100 can simultaneously acquire a visible light image (also referred to as an RGB image or a color image) and an IR image.
  • the distance measuring sensor 320 detects the distance information (depth data) in the space and outputs it to the information processing apparatus 100.
  • the distance measuring sensor 320 may be realized by a depth sensor that can acquire a three-dimensional image that can comprehensively recognize three-dimensional information in space and can be driven by a mechanical mechanism. Further, the distance measuring sensor 320 may be realized by a method using infrared light as a light source, a method using ultrasonic waves, a method using a plurality of cameras, a method using image processing, or the like.
  • The range-finding sensor 320 may be a device that acquires depth information, such as an infrared range-finding device, an ultrasonic range-finding device, LiDAR (Laser Imaging Detection and Ranging), or a stereo camera.
  • the distance measuring sensor 320 may be a ToF (Time Of Flight) camera capable of acquiring a highly accurate distance image.
  • The distance measuring sensor 320 may be a single sensor or a plurality of sensors, and the distance information of the space may be acquired collectively.
  • the camera 310 and the distance measuring sensor 320 that realize the sensor 300 may be provided at different places or may be provided at the same place. Further, the sensor 300 is not limited to the camera 310 and the distance measuring sensor 320, and may be further realized by an illuminance sensor or a microphone. Further, the sensor 300 may be further realized by a touch sensor provided in the projection area 211 and detecting a user operation on the projection area 211.
  • the information processing apparatus 100 includes an I / F (Interface) unit 110, a control unit 120, an input unit 130, and a storage unit 140.
  • the I / F unit 110 is a connection device for connecting the information processing device 100 and other devices.
  • The I / F unit 110 is realized by, for example, a USB (Universal Serial Bus) connector, a wired / wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), ZigBee (registered trademark), a mobile communication network (LTE (Long Term Evolution), 3G (3rd generation mobile communication method), 4G (4th generation mobile communication method), 5G (5th generation mobile communication method)), and the like.
  • The I / F unit 110 inputs and outputs information to and from the projector 210, the camera 310, and the distance measuring sensor 320, respectively.
  • The control unit 120 functions as an arithmetic processing unit and a control device, and controls the overall operation of the information processing device 100 according to various programs.
  • the control unit 120 is realized by an electronic circuit such as a CPU (Central Processing Unit) or a microprocessor. Further, the control unit 120 may include a ROM (Read Only Memory) for storing programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) for temporarily storing parameters and the like that change as appropriate.
  • The control unit 120 also functions as a recognition unit 121, a viewing location determination unit 122, and an output control unit 123.
  • the recognition unit 121 recognizes the real space environment and the user based on various sensing data (captured image, depth data, etc.) detected by the sensor 300.
  • For example, as environment recognition processing, the recognition unit 121 recognizes the shape of the real space, the presence of real objects (arrangement and shape of home appliances, furniture, etc.), the state of the projection surface (roughness, color, reflectance, etc.), the position and size of the projection area, obstacles placed in the projection area, obstacles that block the projection light to the projection area (home appliances, furniture, users, etc.), the illuminance environment, the sound environment, and the like.
  • the recognition unit 121 detects the presence / absence of a user, the position, the number of people, the posture, the line-of-sight direction, the face orientation, the gesture of the fingers, the user operation, etc. as the user recognition process.
  • Examples of the user operation include a touch operation on the projected image (projection surface) and an operation on the projected image (projection surface) with a digital pen whose tip is provided with an IR light emitting unit or the like.
  • Other examples of user operations include operations using a controller or a laser beam.
  • the recognition unit 121 may acquire the user's spoken voice with a microphone and recognize the voice input by the user.
  • the viewing place determination unit 122 determines an appropriate viewing place for the user. For example, the viewing location determination unit 122 determines an appropriate viewing location for the user based on the position of the projector 210 and the position of the projection area 211. Specifically, the appropriate viewing place is assumed to be a place where there is less risk of deterioration in visibility when the user visually recognizes the image displayed in the projection area 211. A place where there is less risk of deterioration of visibility is, for example, a place where a shadow does not occur in the line-of-sight direction of the user or a place where a projected image (content) can be displayed larger (a place where the projection area 211 can be effectively used). And so on.
  • Since the display orientation of the projected image is uniquely determined with respect to the user's line-of-sight direction (or face orientation, body orientation), specifically so that it faces the user, a place from which the projection area 211 can be used effectively may be determined.
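  • One plausible way to realize such a determination is to sample candidate positions on the periphery of the projection area and prefer those where the shadow direction (projector toward candidate) points away from the expected viewing direction, in line with the criteria described above. The following sketch is only an illustrative scoring loop under those assumptions; the names and the candidate sampling are not part of the embodiment.

```python
import math

def choose_viewing_place(projector_xy, area_center_xy, area_radius, n_candidates=16):
    """Pick an appropriate viewing place on the periphery of the projection area.

    A candidate scores well when the shadow direction (projector -> candidate)
    points away from the viewing direction (candidate -> area center), i.e. the
    shadow falls behind or beside the user rather than onto the projected image.
    """
    best, best_score = None, -math.inf
    for k in range(n_candidates):
        ang = 2.0 * math.pi * k / n_candidates
        cand = (area_center_xy[0] + area_radius * math.cos(ang),
                area_center_xy[1] + area_radius * math.sin(ang))
        view = (area_center_xy[0] - cand[0], area_center_xy[1] - cand[1])
        shad = (cand[0] - projector_xy[0], cand[1] - projector_xy[1])
        cosang = ((view[0] * shad[0] + view[1] * shad[1]) /
                  (math.hypot(*view) * math.hypot(*shad) + 1e-9))
        score = -cosang          # larger = shadow further out of the viewing direction
        if score > best_score:
            best, best_score = cand, score
    return best

# With the projector at the origin and the projection area centered 3 m away,
# the best candidate ends up on the far side of the area (area E3 in FIG. 2).
print(choose_viewing_place(projector_xy=(0.0, 0.0),
                           area_center_xy=(3.0, 0.0), area_radius=1.5))
```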
  • The output control unit 123 controls the output of information from the output device 200.
  • the output control unit 123 functions as a display control unit that controls image projection (display) from the projector 210.
  • the display control unit can control the projection location and orientation of the image.
  • the output control unit 123 may further function as an audio output control unit that controls audio output from a speaker (not shown). Further, the output control unit 123 can function as a control unit of various other output devices.
  • the output control unit 123 can perform display control that implicitly guides the user to an appropriate viewing place as a function as a display control unit.
  • For example, the output control unit 123 projects a human figure image in the line-of-sight direction of the user as if it were the user's shadow, and controls the display so that the closer the user gets to the place determined by the viewing place determination unit 122, the smaller the human figure image becomes.
  • Further, the output control unit 123 can implicitly guide the user by controlling the display state of the content (moving image, still image, website, text, GUI, etc.). Details of the implicit guidance according to this embodiment will be described later.
  • the input unit 130 receives input information to the information processing device 100.
  • the input unit 130 may be an operation input unit that receives an operation instruction by the user.
  • the operation input unit may be a touch sensor, a pressure sensor, or a proximity sensor.
  • the operation input unit may have a physical configuration such as a button, a switch, and a lever.
  • the input unit 130 may be a voice input unit (microphone).
  • the storage unit 140 is realized by a ROM (Read Only Memory) that stores programs and arithmetic parameters used for processing of the control unit 120, and a RAM (Random Access Memory) that temporarily stores parameters and the like that change as appropriate.
  • the storage unit 140 stores various information input from the external device by the I / F unit 110 and various information calculated and generated by the control unit 120.
  • the information processing apparatus 100 may further have, for example, an output unit.
  • The output unit may be realized by, for example, a display unit or an audio output unit (speaker).
  • the display unit outputs an operation screen, a menu screen, or the like, and may be a display device such as a liquid crystal display (LCD: Liquid Crystal Display) or an organic EL (Electro Luminescence) display.
  • the information processing device 100 may be realized by, for example, a smartphone, a tablet terminal, a PC (personal computer), an HMD, or the like. Further, the information processing device 100 may be a dedicated device arranged in the same space as the output device 200 and the sensor 300. Further, the information processing apparatus 100 may be a server (cloud server) on the Internet, or may be realized by an intermediate server, an edge server, or the like.
  • Further, the information processing device 100 may be configured by a plurality of devices, or at least a part of the configuration may be provided in the output device 200 or the sensor 300. Further, at least a part of the configuration of the information processing device 100 may be provided in a server (cloud server) on the Internet, an intermediate server, an edge server, a dedicated device arranged in the same space as the output device 200 and the sensor 300, or a user's personal terminal (smartphone, tablet terminal, PC, HMD, etc.).
  • FIG. 4 is a flowchart showing an example of the flow of the first induction processing example according to the present embodiment.
  • The first guidance processing example will be explained assuming, for example, a case where an image is projected in a living space and a user enters the living space, that is, before image projection is started (before the user starts viewing the content).
  • the recognition unit 121 of the information processing apparatus 100 recognizes the environment of the living space based on various sensing data (captured images and depth data) detected by the camera 310 and the distance measuring sensor 320. (Step S103). Specifically, the recognition unit 121 recognizes the shape of the living space, the arrangement and shape of a real object such as furniture, and the like. The recognition unit 121 also recognizes the position of the projector 210 arranged in the living space, the location of the projection area 211, and the like. In this embodiment, it is assumed that the projection area 211 is provided on the floor surface of the living space as an example.
  • the recognition unit 121 may recognize the location and size of the projection area 211 from the information on the position and projection direction of the projector 210 and the projectable angle. Further, the projection direction of the projector 210 may be manually set and drive-controlled by the user in advance, or may be automatically set and drive-controlled by the information processing apparatus 100. Further, the environment recognition process shown in step S103 may be continuously performed or may be performed at regular time intervals. Further, the environment recognition process may be performed at least before the start of image projection (viewing of content), or may be continuously performed during image projection. When the process of environment recognition is continuous, the recognition unit 121 may mainly recognize changes (differences) in the environment.
  • Next, the viewing location determination unit 122 determines an appropriate viewing location (step S106). Specifically, the viewing location determination unit 122 determines a viewing location that can avoid deterioration of visibility due to the appearance of the user's shadow or the like, based on the position of the projector 210 as a light source and the position of the projection area 211. For example, considering that the shadow of the user appears on the extension line of the positions of the projector 210 and the user, the viewing place determination unit 122 determines a place where the direction in which the shadow appears differs from the line-of-sight direction as an appropriate viewing place.
  • Further, in order to display the content as large as possible and use the projection area 211 effectively, the viewing location determination unit 122 may determine the outside (periphery) of the projection area 211 as an appropriate viewing location for the user. For example, in the case of the positional relationship shown in FIG. 2, the viewing place determination unit 122 determines the area E3 shown in the lower left of FIG. 2 as an appropriate viewing place.
  • the recognition unit 121 recognizes the position of the user in the living space based on various sensing data (captured images and depth data) detected by the camera 310 and the distance measuring sensor 320 (step S109).
  • Next, the output control unit 123 determines whether or not it is necessary to guide the user to the appropriate viewing place determined above (step S112). Specifically, the output control unit 123 determines that guidance is not necessary when the recognized user's position is the appropriate viewing location determined above. On the other hand, if the recognized user's position is not the appropriate viewing place determined above, the output control unit 123 determines that guidance is necessary. For example, in the positional relationship shown in FIG. 2, when the user enters the room from the vicinity where the projector 210 is installed (area E1), stays in area E1, and starts viewing the content, a shadow appears in the line-of-sight direction of the user and visibility is reduced.
  • the output control unit 123 determines that it is necessary to guide the user to the area E3 determined as an appropriate viewing place. On the other hand, for example, when the user enters the room from the vicinity of the area E3 and stays in the area E3, even if the image is projected on the projection area 211, the shadow of the user is generated on the side opposite to the line-of-sight direction and the visibility does not deteriorate. Therefore, the output control unit 123 determines that guidance is not necessary.
  • the output control unit 123 performs guidance by controlling the display of the shadow image (step S115) or guidance by controlling the content (step S118).
  • the guidance by the display control of the shadow image is mainly used when it is desired to change the physical position in the living space of the user (for example, when it is desired to move from the area E1 to the area E3).
  • the guidance by controlling the content is mainly used when it is desired to change the user's face direction, line-of-sight direction, body direction, posture, etc. without changing the physical position in the user's living space. Which method may be used for guidance may be preset, or both may be performed in parallel or sequentially depending on the situation.
  • For example, the output control unit 123 may first move the user to an appropriate viewing place by controlling the display of the shadow image, and then change the user's face orientation and posture by controlling the content.
  • Specifically, the output control unit 123 displays a shadow image (an example of a virtual object) in the direction of the user's line of sight (preferably a place in front of the user and within the user's field of view, such as the direction in which the body is facing) as if it were the user's shadow (for example, a human figure image displayed as if a shadow extends from the user's feet). As a result, the user thinks that his or her shadow will be an obstacle when viewing the content displayed on the floor, and is expected to move naturally to another place.
  • the output control unit 123 may also follow the human figure image when the user moves, and may end the display of the human figure image when the user moves to an appropriate viewing place.
  • The human figure image may be a realistic, finely shaped human figure, or a simplified (deformed) human figure. Since a human shadow is a familiar physical phenomenon that a user can experience on a daily basis, by using a human figure image it is possible to guide the user implicitly without making the user feel that anything is unnatural. Further, since a human shadow is generally recognized as black or gray, the color of the human figure image is preferably a color close to black or gray, but the present embodiment is not limited to this, and it may be another color such as blue, green, or red.
  • Specifically, the output control unit 123 performs control to project a human figure image onto the projection area 211 in the user's line-of-sight direction (which may be the general orientation of the face, the head, or the body).
  • Then, when the user moves in the direction of the appropriate viewing place (the correct direction), the output control unit 123 controls the display to gradually reduce the size of the human figure image and / or gradually lighten it (increase the transmittance).
  • On the other hand, when the user moves away from the appropriate viewing place (in the wrong direction), display control may be performed so that the human figure image is gradually enlarged and / or gradually darkened (the transmittance is lowered). That is, the output control unit 123 performs display control that changes the display state of the shadow image according to the change in the positional relationship (distance, etc.) between the appropriate viewing place and the user, as sketched below. This makes it possible to more reliably and implicitly guide the user to an appropriate viewing place.
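  • A minimal way to express the rule that the shadow image becomes smaller and lighter as the user approaches the appropriate viewing place, and larger and darker as the user moves away, is to map the remaining distance onto a scale factor and an opacity, as in the sketch below. The parameter ranges are arbitrary assumptions.

```python
def shadow_image_params(distance_to_target, max_distance=4.0,
                        min_scale=0.2, max_scale=1.0,
                        min_opacity=0.1, max_opacity=0.8):
    """Map the user's remaining distance to the appropriate viewing place onto
    a display scale and opacity for the human figure / graphic shadow image.

    Far from the target  -> large, dark shadow image.
    Close to the target  -> small, light shadow image (and finally hidden).
    """
    t = max(0.0, min(1.0, distance_to_target / max_distance))
    scale = min_scale + (max_scale - min_scale) * t
    opacity = min_opacity + (max_opacity - min_opacity) * t
    return scale, opacity

for d in (4.0, 2.0, 0.5, 0.0):
    s, o = shadow_image_params(d)
    print(f"distance {d:.1f} m -> scale {s:.2f}, opacity {o:.2f}")
```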
  • FIG. 5 shows a diagram illustrating an example of guidance using a human figure image according to the present embodiment.
  • The output control unit 123 displays the human figure image 530 in the user's line-of-sight direction. It should be noted that the actual shadow is cast on the extension line of the projector 210 and the user, in a direction different from the line-of-sight direction.
  • the output control unit 123 causes the display position of the human figure image 530 to follow the movement of the user and controls the display so that the human figure image 530 becomes smaller.
  • As a result, the user naturally moves in the direction in which the shadow becomes smaller so that the shadow does not interfere with viewing of the content, and the user can thus be implicitly guided to the appropriate viewing place T.
  • On the other hand, when the user moves in a direction away from the appropriate viewing place T, the output control unit 123 causes the display position of the human figure image 530 to follow the user's movement and controls the display so that the human figure image 530 becomes larger.
  • The guidance using the human shadow image is not limited to guidance in the left-right direction with respect to the projection area 211 (to the right or left for a user facing the projection area 211) as described above. For example, when the output control unit 123 wants to move the user backward (backward for a user facing the projection area 211), the output control unit 123 may display a large shadow in the user's line-of-sight direction and perform display control to reduce the shadow as the user walks backward and moves away.
  • Similarly, when the output control unit 123 wants to move the user forward (forward for a user facing the projection area 211), the output control unit 123 may display a large shadow in the user's line-of-sight direction and perform display control to reduce the shadow as the user moves forward (the closer the user gets to the projection area 211).
  • the output control unit 123 may use silhouette images of various figures such as circles, squares, triangles, ellipses, polygons, trapezoids, rhombuses, and cloud shapes as other examples of virtual objects. Silhouette images with such various shapes are called graphic shadow images.
  • the color of the graphic shadow image may be a color close to black or gray, or may be another color such as blue, green, or red, as in the case of a general human figure.
  • FIG. 6 is a diagram illustrating an example of guidance using a graphic shadow image according to the present embodiment.
  • For example, the output control unit 123 displays a circular graphic shadow image 570a in the user's line-of-sight direction within the projection area 211. When the user is moving in the correct direction, the circular shadow is displayed smaller and lighter, as in the graphic shadow images 570d and 570e, and when the user is moving in the wrong direction, it is displayed larger and darker, as in the graphic shadow images 570b and 570c.
  • The implicit guidance by the display control of the shadow image described above may be performed before the user starts viewing the content, that is, before the content is projected on the projection area 211. Further, even when the content is already projected, a shadow image (virtual object) may be displayed overlaid on the content to implicitly guide the user.
  • Further, although the case where the shadow image is displayed in the user's line-of-sight direction (the direction in which the body is facing, forward, etc.) has been described, when the user is inside the projection area 211, the shadow image may be displayed in the user's line-of-sight direction (face or body direction), and when the user is outside the projection area 211, the shadow image may be displayed in the part of the projection area 211 closest to the user.
  • Further, the output control unit 123 displays some content in the projection area 211 and changes the display state of the content according to the user's face orientation, head orientation, line-of-sight direction, body orientation, posture (standing / sitting / squatting), and the like. In this way, it is expected that the user will naturally change his or her face orientation, posture, and the like in order to see the content well.
  • For example, the output control unit 123 may control the strength of blurring of the content according to the user's face orientation, head orientation, line-of-sight direction, etc., so that the content appears in focus only when viewed from a certain viewpoint (a desirable place or direction). As a result, the user naturally tilts his or her face or moves to look from a different direction or place, so it is possible to implicitly guide the user to a desired face orientation, line-of-sight direction, or place.
  • Here, the strength of blurring is given as an example, but the guidance is not limited to this; by controlling the saturation, brightness, and transmittance of the content, it is also possible to implicitly guide the user's face orientation and the like.
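  • For the blur-based guidance, one illustrative mapping is to increase the blur radius with the angular deviation between the user's current face (or gaze) direction and the desired viewing direction, so that the content only appears in focus from the intended viewpoint. The 2D vectors and the maximum blur radius in the sketch below are assumptions.

```python
import math

def blur_radius(face_dir, desired_dir, max_blur_px=12.0):
    """Blur strength grows with the angle between the user's face direction
    and the desired viewing direction; zero deviation means a sharp image."""
    dot = face_dir[0] * desired_dir[0] + face_dir[1] * desired_dir[1]
    norm = math.hypot(*face_dir) * math.hypot(*desired_dir)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return max_blur_px * min(1.0, angle / 90.0)

print(blur_radius((1, 0), (1, 0)))    # looking the desired way -> 0.0 (in focus)
print(blur_radius((0, 1), (1, 0)))    # 90 degrees off -> maximum blur
```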
  • Further, when the output control unit 123 wants to make the user sit or crouch so that the user's actual shadow becomes small, the output control unit 123 can naturally make the user sit or crouch by reducing the size of the content displayed (projected) on the floor. When the content is displayed small, the user is expected to naturally bring his or her face closer to the content in order to see it better, and to sit, crouch, or bend down.
  • the output control unit 123 can implicitly guide the user's face direction, line-of-sight direction, posture, etc. by presenting the content using the optical illusion.
  • An example of content using an optical illusion is an image in which some characters or figures can be seen or 2D looks 3D depending on the viewing direction or angle.
  • For example, the output control unit 123 may perform display control so that certain characters or figures become visible, or a 2D image appears 3D, only when the user's face orientation, posture, or line of sight reaches the desired direction (that is, display control that makes the image function as an illusion image). As a result, the user is expected to naturally adjust his or her face orientation and line-of-sight direction out of interest in the illusion image.
  • Further, the output control unit 123 can control sound output so that the sound source is localized in a predetermined direction, thereby implicitly guiding the user to turn in an arbitrary direction (the direction of the sound source) or to direct his or her line of sight there.
  • the implicit guidance by controlling the content described above may be performed after the guidance of the physical position movement of the user is performed. Further, the content used for guidance may be content to be viewed by the user (video, still image, website, text, GUI, etc.) or content prepared for guidance.
  • Then, the output control unit 123 may start viewing of the content. It should be noted that the user may not move to the appropriate viewing place even if the guidance control is performed. Even in that case, that is, even if the user does not move to the appropriate viewing place (or moves to a place other than the appropriate viewing place) despite some implicit guidance, the output control unit 123 may start viewing of the content.
  • When guidance is not required (step S112 / No), that is, when the user's position is already an appropriate viewing place, the output control unit 123 may start viewing of the content without performing guidance.
  • The start of content viewing may be, for example, displaying a menu screen or playing a start video. Further, the output control unit 123 may start viewing of predetermined content in response to a user operation (a gesture, a touch operation on the projection area 211, a predetermined switch / button operation, voice input, an operation input using a personal terminal such as a smartphone, etc.), or may start viewing of the content automatically.
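  • To summarize the flow of FIG. 4 (environment recognition S103, viewing place determination S106, user recognition S109, guidance decision S112, guidance S115/S118, and the start of viewing), the following toy Python simulation assumes that the appropriate viewing place has already been determined and that the user reacts to the shadow-image guidance by stepping toward it. All names, the step size, and the tolerance are assumptions for illustration.

```python
from dataclasses import dataclass
import math

@dataclass
class User:
    x: float
    y: float

def first_guidance_process(user, target, step=0.5, tolerance=0.3):
    """Toy simulation of the FIG. 4 loop (S109 -> S112 -> S115/S118)."""
    def remaining():
        return math.hypot(target[0] - user.x, target[1] - user.y)

    while remaining() > tolerance:                 # S112: guidance needed?
        # S115: display the shadow image; here we simply assume the user reacts
        # by moving a small step toward the appropriate viewing place.
        dx, dy = target[0] - user.x, target[1] - user.y
        d = math.hypot(dx, dy)
        user.x += step * dx / d
        user.y += step * dy / d
        print(f"user at ({user.x:.2f}, {user.y:.2f}), shadow image shrinks")
    print("user reached the appropriate viewing place; start content viewing")  # S112/No

first_guidance_process(User(0.5, 0.0), target=(3.0, 1.0))
```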
  • <Second guidance processing example> Next, an example of the second guidance process will be described.
  • When the projection area 211 exists on the floor surface of the real space, it is assumed that the user walks around in the real space and may be located within the projection area 211.
  • In this case, even if the content is displayed and controlled so as to face the user's face orientation and body orientation, a part of the content may be blocked by the user himself or herself, or the content may not be displayed at a sufficient size, which may reduce the user's visibility and comfort.
  • In such a case, guidance that involves only a small physical load, such as changing the user's face or body orientation, and that improves visibility and comfort in a short time is preferable to guidance with a large physical load, such as moving the user's physical position.
  • FIG. 7 is a flowchart showing an example of the flow of the second guidance processing example according to the present embodiment.
  • the output control unit 123 controls the projection of the content (step S203).
  • the content projection control may start the reproduction of the predetermined content according to the user operation, or may automatically start the reproduction of the predetermined content.
  • the content is projected on the floor of the living space.
  • the above-mentioned first guidance process may be performed before the projection control of the content.
  • the recognition unit 121 recognizes the environment of the living space (step S206).
  • the environment recognition may be performed before the projection control of the content or after the projection control of the content is started.
  • Examples of the environment recognition of the living space include space recognition, recognition of the projection area 211, recognition of the projection surface, recognition of a shield, and the like.
  • Next, the recognition unit 121 recognizes the user's position (step S209). Further, the recognition unit 121 also recognizes the face orientation, head orientation, body orientation, or line-of-sight direction of the user, and the output control unit 123 may control the orientation of the content so that it faces the user. Further, the output control unit 123 may continuously recognize the position and orientation of the user and, when the user moves while viewing the content, follow the position and orientation of the user and change the display position and orientation of the content. This allows the user to view the content anywhere in the living space.
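  • A simple way to realize the behavior in which the content follows the user's position and orientation is to place the content a fixed distance in front of the user and rotate it so that it faces the user, as in the sketch below. The coordinate convention, the facing-angle representation, and the distance are assumptions for illustration.

```python
import math

def content_pose_for_user(user_xy, face_angle_deg, distance=1.0):
    """Place the content `distance` metres in front of the user and rotate it
    so that it faces back toward the user (illustrative convention)."""
    a = math.radians(face_angle_deg)
    cx = user_xy[0] + distance * math.cos(a)
    cy = user_xy[1] + distance * math.sin(a)
    # Content rotation: oriented toward the user, i.e. opposite the face direction.
    content_angle_deg = (face_angle_deg + 180.0) % 360.0
    return (cx, cy), content_angle_deg

# Content placed 1 m in front of a user facing the +y direction.
print(content_pose_for_user((2.0, 1.0), face_angle_deg=90.0))
```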
  • the output control unit 123 acquires the positional relationship between the projector 210, the projection area 211, and the user (step S212).
  • the output control unit 123 determines whether or not there is a high possibility of a decrease in visibility based on the above positional relationship (step S215). For example, when the user is located in the projection area 211, the visibility may be deteriorated depending on the orientation of the user and the position of the projector 210.
  • FIG. 8 is a diagram illustrating a decrease in visibility when the user is located within the projection area 211. As shown on the left side of FIG. 8, when the light source direction of the projector 210 and the line-of-sight direction of the user are substantially the same, the light hits the user's back, so the user's visual field range S is shaded by the user and visibility is likely to decrease.
  • On the other hand, when the user faces the projector 210 side (that is, when the light source direction and the line-of-sight direction are roughly opposite), the shadow is cast on the back side of the user, on the extension line of the projector 210 and the user's position, and the possibility of deterioration of visibility is low.
  • However, in this case, the display area of the content may be narrow and the content may have to be reduced and displayed.
  • Whether or not there is a high possibility of deterioration in visibility may be determined based on whether or not the positional relationship between the projector 210, the projection area 211, and the user satisfies a predetermined condition. For example, given the positions of the projector 210 and the projection area 211, positions within the projection area 211 at which a user is likely to suffer reduced visibility may be set in advance. Alternatively, the control unit 120 may acquire positional relationships with a high possibility of deterioration in visibility by machine learning.
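  • As an illustration, the predetermined condition of step S215 could combine two checks: whether the user stands inside the projection area 211 and whether the light direction (projector toward user) is roughly parallel to the user's line of sight, which is the risky case on the left side of FIG. 8. The following sketch expresses such a condition; the rectangle representation of the projection area and the angle threshold are assumptions.

```python
import math

def visibility_likely_reduced(projector_xy, user_xy, gaze_xy,
                              area_min, area_max, angle_threshold_deg=60.0):
    """Return True when visibility is likely to drop (FIG. 8, left-side case).

    area_min / area_max are the corners of an axis-aligned projection area.
    The risky case: the user is inside the projection area and the light
    direction (projector -> user) is roughly parallel to the gaze direction,
    so the user's shadow falls into the visual field range S.
    """
    inside = (area_min[0] <= user_xy[0] <= area_max[0] and
              area_min[1] <= user_xy[1] <= area_max[1])
    if not inside:
        return False
    lx, ly = user_xy[0] - projector_xy[0], user_xy[1] - projector_xy[1]
    gx, gy = gaze_xy
    dot = lx * gx + ly * gy
    norm = math.hypot(lx, ly) * math.hypot(gx, gy)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle < angle_threshold_deg

# User inside the area, facing away from the projector: shadow in front -> risky.
print(visibility_likely_reduced((0, 0), (3, 0), (1, 0), (2, -2), (6, 2)))   # True
# Same position but facing the projector: shadow behind the user -> not risky.
print(visibility_likely_reduced((0, 0), (3, 0), (-1, 0), (2, -2), (6, 2)))  # False
```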
  • Next, when it is determined that there is a high possibility that visibility will deteriorate (step S215 / Yes), the output control unit 123 performs guidance that implicitly changes the user's face orientation and body orientation by controlling the content being viewed by the user, thereby improving visibility (step S218). A specific example of implicit guidance by controlling the content will be described below with reference to FIG. 9.
  • FIG. 9 is a diagram illustrating implicit guidance by controlling the content being viewed in the second guidance processing example.
  • It is assumed that the user is located in the projection area 211 as shown on the left side of FIG. 9, or that the user walks around the room and reaches the position shown on the left side of FIG. 9.
  • In this case, a shadow appears in the line-of-sight direction of the user and interferes with viewing of the content 500 (the shadow overlaps the content 500), lowering visibility. Therefore, the output control unit 123 controls the content 500 to implicitly guide the user's face orientation, posture, and the like to a direction and posture that improve visibility.
  • Specifically, the output control unit 123 performs control to move the content 500 to a place different from the place where the user's shadow appears (that is, a place where the user's shadow does not overlap the content).
  • As a result, the user can continue viewing in a direction in which his or her shadow does not get in the way, simply by changing the direction of the body without a physical load such as moving, and visibility is improved.
  • The movement of the content 500 is not limited to rotation, and may be control such as shifting the content 500 in the lateral direction (an example of planar movement control).
  • Further, the output control unit 123 may move the content 500 to a wall instead of the floor for display (an example of three-dimensional movement control). For example, by moving the content 500 to a wall at the user's side, the user can continue viewing in a direction in which his or her shadow does not get in the way, simply by changing the direction of the body without a physical load such as moving, and visibility is improved.
  • In this way, the output control unit 123 can move the content 500 to a place that is different from the place where the user's shadow appears and where the user can continue viewing in a direction in which the shadow does not get in the way, simply by changing the direction of the body or line of sight without a physical load such as moving, thereby improving visibility.
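  • As one hedged illustration of moving the content to a place the user's shadow does not overlap, the sketch below shifts the content anchor to the user's side by rotating the shadow direction by 90 degrees, so the user only has to turn the body. The function name, the offset distance, and the fixed rotation are assumptions, not the embodiment's actual method.

```python
import math

def relocate_content(projector_xy, user_xy, content_offset=1.2):
    """Place the content to the side of the user, out of the shadow line.

    The shadow extends from the user along projector -> user. Rotating that
    direction by 90 degrees gives a lateral position the shadow cannot reach,
    so the user can keep viewing by simply turning the body.
    """
    sx, sy = user_xy[0] - projector_xy[0], user_xy[1] - projector_xy[1]
    norm = math.hypot(sx, sy)
    sx, sy = sx / norm, sy / norm
    # Rotate the shadow direction by +90 degrees to get a lateral direction.
    lx, ly = -sy, sx
    return (user_xy[0] + content_offset * lx, user_xy[1] + content_offset * ly)

# Projector behind the user: the shadow points along +x, so the content is
# moved to the user's side instead of in front of the user.
print(relocate_content(projector_xy=(0.0, 0.0), user_xy=(3.0, 0.0)))  # (3.0, 1.2)
```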
  • Further, the output control unit 123 can also make the user naturally squat down by performing display control to reduce the size of the content 500, thereby reducing the shadow area and improving visibility.
  • the display control of the content 500 for squatting the user is an example, and the present embodiment is not limited to this.
  • the content control and user-induced actions are not limited to the above examples.
  • For example, the user can be made to stand up by controlling enlargement of the content, or the orientation of the user's face, head, and line of sight can be changed by controlling the content to move from the floor to a wall or the ceiling (an example of three-dimensional movement control).
  • Further, the present embodiment is not limited to controlling the content being viewed; for example, while the user is located in the projection area 211, the user's line-of-sight direction, face orientation, body orientation, posture, and the like may be implicitly guided by controlling the projected content.
  • FIG. 10 is a diagram illustrating a first control according to a modified example. As shown on the left side of FIG. 10, first, it is assumed that the user B comes in later while the user A is viewing or operating the content 502 in the living space. At this time, if there is an empty space in the projection area 211, the viewing place determination unit 122 determines an appropriate viewing place for the user B on the premise that new content is arranged in the empty space.
  • The location and orientation of the new content and the appropriate viewing location for user B are determined in consideration of the position of the projector 210 and the display position of the content 502 that is already displayed in the projection area 211 and being viewed by user A. Specifically, it is desirable that the shadow of user B does not interfere with the viewing of user B or user A at that place, and that the new content to be viewed from that place can be efficiently displayed in a large size. For example, in the example shown on the left side of FIG. 10, the viewing place T located on the right side of the projection area 211 with respect to the projector 210 is determined as an appropriate viewing place.
  • the output control unit 123 implicitly guides the user B to an appropriate viewing place T.
  • Examples of the guidance method include the guidance using a shadow image described with reference to FIGS. 5 and 6. Then, as shown on the right side of FIG. 10, the user B can start viewing and operating the new content 504 without his or her shadow deteriorating visibility and without disturbing the viewing of user A.
  • FIG. 11 is a diagram illustrating a second control according to a modified example of the present embodiment.
  • In the second control, the viewing location determination unit 122 determines an appropriate viewing location for user B on the premise that the content 502 is reduced in size and free space is secured for user B.
  • The location and orientation of the new content and the appropriate viewing location for user B can be determined in consideration of the position of the projector 210 and the display position (after securing empty space) of the content 502 that is already displayed in the projection area 211 and being viewed by user A. Specifically, it is desirable that the shadow of user B does not interfere with the viewing of user B or user A at that place, and that the new content to be viewed from that place can be efficiently displayed in a large size.
  • the viewing place T located on the right side of the projection area 211 with respect to the projector 210 is determined as an appropriate viewing place.
  • When reducing the content 502, it is desirable that the user A who is viewing it does not incur a physical load such as a large movement of physical position. Therefore, examples include display control such as reducing the content 502 and displaying it closer to the user A, rotating the content 502 around the user A, or moving the content 502 laterally with respect to the user A. Further, display control that moves the content 502 to a wall visible from the position of the user A can also be mentioned.
  • the output control unit 123 implicitly guides the user B to an appropriate viewing place T.
  • Examples of the guidance method include the guidance using a shadow image described with reference to FIGS. 5 and 6.
  • the output control unit 123 may reduce the content 502 and secure an empty space before performing the guidance control.
  • the output control unit 123 displays the new content 504 in the empty space secured by reducing the content 502.
  • As a result, the user B guided to the appropriate viewing place T can start viewing or operating the new content 504 without his or her shadow deteriorating visibility and without disturbing the viewing of user A.
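  • The second control of FIG. 11 essentially shrinks the existing content toward its viewer and places the new content in the freed part of the projection area. The sketch below shows one hedged way to compute such a split for an axis-aligned rectangular projection area; the 50/50 split and the rectangle representation are assumptions for illustration.

```python
def split_projection_area(area_min, area_max, user_a_xy):
    """Split an axis-aligned projection area into a reduced region for the
    content already viewed by user A and an empty region for new content.

    The area is cut along its longer side; the half closer to user A keeps
    the existing content 502, the other half receives the new content 504.
    """
    w = area_max[0] - area_min[0]
    h = area_max[1] - area_min[1]
    if w >= h:   # cut vertically
        mid = area_min[0] + w / 2.0
        left = (area_min, (mid, area_max[1]))
        right = ((mid, area_min[1]), area_max)
        a_half, b_half = (left, right) if user_a_xy[0] <= mid else (right, left)
    else:        # cut horizontally
        mid = area_min[1] + h / 2.0
        bottom = (area_min, (area_max[0], mid))
        top = ((area_min[0], mid), area_max)
        a_half, b_half = (bottom, top) if user_a_xy[1] <= mid else (top, bottom)
    return {"content_502": a_half, "content_504": b_half}

layout = split_projection_area(area_min=(0.0, 0.0), area_max=(4.0, 2.0),
                               user_a_xy=(0.5, 1.0))
print(layout["content_502"])  # half kept near user A
print(layout["content_504"])  # freed half for the newly arrived user B
```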
  • the content display control when there are multiple users is not limited to the example described above.
  • the output control unit 123 reduces and displays the content 502 and arranges the new content 504 in the reserved empty space.
  • At this time, the recognition unit 121 recognizes the position of the user B who has moved to view the content 504, and the display is controlled so that, given the positional relationship between the projector 210, the user B, and each content, the shadow of the user B does not overlap the content 502 or the content 504. In addition, the output control unit 123 may perform implicit guidance by displaying a virtual object or by controlling the content as described above, changing the position, face orientation, line-of-sight direction, posture, and the like of the user B so as to optimize the position of the user B and the display of the content.
  • In this way, the position of each user and the display position of the content can be controlled as appropriate based on the relationship between the position of each user, the position of the projector 210 (light source), and the position of each content, and it is possible to prevent visibility from being lowered by the shadow of each user.
  • FIG. 12 is a diagram illustrating a third control according to a modified example of the present embodiment.
  • For example, when a plurality of users are in the living space, the viewing place determination unit 122 determines appropriate viewing locations T1 to T4 from the position of the projector 210 and the position of the projection area 211, and each user is implicitly guided by using a shadow image or the like. Then, as shown on the right side of FIG. 12, all the users can view the contents 506a to 506d, each displayed in a size that makes maximum use of the projection area 211, from positions where their own shadows do not interfere with the viewing of other users, and visibility and comfort are improved.
  • the contents 506a to 506d may be different contents for each user or may be the same contents. For example, during a team discussion, it is assumed that the same content will be shared and viewed individually.
  • the positions of the appropriate viewing locations T1 to T4 are not limited to the example shown in FIG. 12, and the appropriate viewing locations T1 to T4 may be within the projection area 211.
  • In the above examples, the case where the projection area 211 exists on the floor surface has been described, but the present disclosure is not limited to this, and the projection area 211 may exist on a wall, a ceiling, a table, or the like.
  • a plurality of contents may be projected by a plurality of projectors provided in the living space.
  • the viewing location determination unit 122 determines an appropriate viewing location in consideration of the positions of the plurality of projectors.
  • each step of the flowchart shown in FIGS. 4 and 7 may be processed in parallel as appropriate, or may be processed in the reverse order.
  • the viewing place determination unit 122 may calculate a place where a shadow is generated in consideration of a light source other than the projector 210, such as a lighting device installed in a living space, and determine an appropriate viewing place.
  • As the display device for displaying the shadow image (virtual object) for implicit guidance, a display device different from the display device that displays the content may be used, or the same display device may be used.
  • virtual objects and contents are examples of images displayed in real space.
  • The output device 200 may be, for example, a glasses-type device provided with a transmissive display or an HMD worn on the user's head. In such a display device, the content may be superimposed and displayed in real space (AR (Augmented Reality) display). Further, the output control unit 123 can implicitly guide the user and move the user to an arbitrary place, or turn the user in an arbitrary direction, by displaying a virtual object such as a human figure image in AR or by controlling the display state of the content.
The arbitrary place can be determined by the viewing place determination unit 122 based on, for example, the shape of the real space, the arrangement of furniture, the position and size of a flat area, the content to be displayed, the positions of the user and other users in the real space, the positional relationship with light sources such as lighting devices arranged in the real space, the lighting environment, and the like. Further, the position and posture of the content superimposed and displayed in the real space may be associated with the floor surface and the walls of the real space.
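One way to combine such heterogeneous criteria is a simple weighted score over candidate places, as sketched below. The criteria names, the weights, and the data structure are assumptions made here for illustration and are not part of the publication.

```python
from dataclasses import dataclass

@dataclass
class PlaceCandidate:
    name: str
    flat_area_m2: float         # size of the usable flat area
    dist_to_other_users_m: float
    shadow_interference: float  # 0.0 (none) .. 1.0 (fully shadowed)
    illuminance_penalty: float  # 0.0 (good lighting) .. 1.0 (washed out)

def score(place: PlaceCandidate,
          w_area=1.0, w_dist=0.5, w_shadow=2.0, w_light=1.0) -> float:
    """Higher is better; the weights are arbitrary illustrative values."""
    return (w_area * place.flat_area_m2
            + w_dist * place.dist_to_other_users_m
            - w_shadow * place.shadow_interference
            - w_light * place.illuminance_penalty)

def choose_place(candidates):
    # Return the candidate viewing place with the highest score.
    return max(candidates, key=score)

# Example with hypothetical values:
# choose_place([
#     PlaceCandidate("near_sofa", 1.2, 1.5, 0.1, 0.2),
#     PlaceCandidate("by_window", 2.0, 0.8, 0.0, 0.7),
# ])
```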
The display device that displays the content in AR in the real space may be a smartphone, a tablet terminal, or the like. Even with a non-transmissive display device, AR display can be realized by displaying a captured image of the real space on the display unit in real time (so-called through-image display, also referred to as live view) and superimposing the content on the captured image.

It is also possible to create one or more computer programs for causing hardware built into the information processing device 100, the output device 200, or the sensor 300 described above to exhibit the functions of the information processing device 100, the output device 200, or the sensor 300. A computer-readable storage medium that stores the one or more computer programs is also provided.
Note that the present technology can also have the following configurations.

(1) An information processing device comprising a control unit that causes a display device to perform display control that implicitly guides a user to a specific viewing place in a real space, based on the position of a display area of an image recognized from sensing data of the real space and the position of the user.

(2) The information processing device according to (1), wherein the control unit determines the specific viewing place based on the position of the display area and the position of a light source in the real space, and, in a case where the position of the user is not the specific viewing place, performs display control that implicitly guides the user to the specific viewing place.

(3) The information processing device according to (2), wherein the position of the light source includes the position of the display device that projects an image into the real space.

(4) The information processing device according to (3), wherein the display area includes a range in which an image can be projected by the display device.

(5) The information processing device according to any one of (2) to (4), wherein the control unit determines, as the specific viewing place, a place outside or inside the display area at which the shadow appearing on the extension line of the light source and the user differs from the line-of-sight direction of the user.

(6) The information processing device according to any one of (1) to (5), wherein, as the display control for the implicit guidance, the control unit performs control to display a virtual object included in the image in front of the user.

(11) The information processing device according to any one of (6) to (10), wherein the virtual object is a shadow image of a person or a figure.

(12) The information processing device according to any one of (1) to (5), wherein, as the display control for the implicit guidance, the control unit changes the display state of content that is included in the image and is to be viewed by the user.

(13) The information processing device according to (12), wherein the control unit changes the display state of the content according to the face orientation or the location of the user with respect to the content.

(14) The information processing device according to (12), wherein, as the display control for the implicit guidance, the control unit performs control to move the content to a place in the display area that does not overlap the shadow appearing on the extension line of the display device and the user, based on the position of the display area, the position of the display device that projects the content onto the display area, and the position of the user who views the content.

(15) The information processing device according to any one of (1) to (14), wherein, when a plurality of users are recognized from the real space, the control unit performs control to implicitly guide each user to a specific viewing place at which the users do not interfere with one another's viewing of the contents, based on the position of the display area and the position of the display device that projects the contents onto the display area.

(16) The information processing device according to any one of (1) to (15), wherein, when a new user is recognized from the real space, the control unit performs the display control based on the position of a first user who is already viewing content in the real space and the content being viewed by the first user.

(17) An information processing method including causing, by a processor, a display device to perform display control that implicitly guides a user to a specific viewing place in a real space, based on the position of a display area of an image recognized from sensing data of the real space and the position of the user.

(18) A program for causing a computer to function as a control unit that causes a display device to perform display control that implicitly guides a user to a specific viewing place in a real space, based on the position of a display area of an image recognized from sensing data of the real space and the position of the user.
100 Information processing device, 110 I/F unit, 120 Control unit, 121 Recognition unit, 122 Viewing location determination unit, 123 Output control unit, 130 Input unit, 140 Storage unit, 200 Display device, 210 Projector, 300 Sensor, 310 Camera, 320 Distance measurement sensor

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention addresses the problem of realizing an information processing device, an information processing method, and a program that make it possible to use non-explicit prompting to improve visibility without giving a feeling of coercion. The solution according to the invention is an information processing device comprising a control unit that uses a display device to perform display control that non-explicitly guides a user to a specific viewing place in a real space, based on the location of a display area for an image recognized from sensing data of the real space and the location of the user.
PCT/JP2021/032984 2020-10-29 2021-09-08 Dispositif de traitement d'informations, procédé de traitement d'informations, et programme WO2022091589A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020181240 2020-10-29
JP2020-181240 2020-10-29

Publications (1)

Publication Number Publication Date
WO2022091589A1 (fr)

Family

ID=81382253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/032984 WO2022091589A1 (fr) 2020-10-29 2021-09-08 Dispositif de traitement d'informations, procédé de traitement d'informations, et programme

Country Status (1)

Country Link
WO (1) WO2022091589A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006080344A1 (fr) * 2005-01-26 2006-08-03 Matsushita Electric Industrial Co., Ltd. Dispositif et procede de guidage
WO2013132886A1 (fr) * 2012-03-07 2013-09-12 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et programme
WO2018163637A1 (fr) * 2017-03-09 2018-09-13 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations, et support d'enregistrement
JP2019036181A (ja) * 2017-08-18 2019-03-07 ソニー株式会社 情報処理装置、情報処理方法、およびプログラム

Similar Documents

Publication Publication Date Title
US11449133B2 (en) Information processing apparatus and information processing method
JP6511386B2 (ja) 情報処理装置および画像生成方法
US8743187B2 (en) Three-dimensional (3D) imaging based on MotionParallax
US20170132845A1 (en) System and Method for Reducing Virtual Reality Simulation Sickness
CN114365197A (zh) 在具有多个物理参与者的环境中放置虚拟内容
US20170264871A1 (en) Projecting device
WO2015163030A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
US11284047B2 (en) Information processing device and information processing method
CN106168855B (zh) 一种便携式mr眼镜、手机和mr眼镜系统
JP2015084002A (ja) ミラーディスプレイシステム、及び、その映像表示方法
US11961194B2 (en) Non-uniform stereo rendering
JPWO2019039119A1 (ja) 情報処理装置、情報処理方法、およびプログラム
JP2007318754A (ja) 仮想環境体験表示装置
JPWO2017195514A1 (ja) 画像処理装置、画像処理システム、および画像処理方法、並びにプログラム
JP2016213674A (ja) 表示制御システム、表示制御装置、表示制御方法、及びプログラム
US11195320B2 (en) Feed-forward collision avoidance for artificial reality environments
JP2022183213A (ja) ヘッドマウントディスプレイ
WO2020017435A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
US10082672B2 (en) Display apparatus and method of displaying using electromechanical faceplate
JP7452434B2 (ja) 情報処理装置、情報処理方法及びプログラム
WO2022091589A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations, et programme
CN110060349B (zh) 一种扩展增强现实头戴式显示设备视场角的方法
JP2007323093A (ja) 仮想環境体験表示装置
WO2021200494A1 (fr) Procédé de changement de point de vue dans un espace virtuel
JP2005293197A (ja) 画像処理装置及び方法、並びに画像表示システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21885708

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21885708

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP