WO2023074875A1 - Virtual content providing device - Google Patents


Info

Publication number
WO2023074875A1
WO2023074875A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual content
area
deceleration
state
change speed
Prior art date
Application number
PCT/JP2022/040515
Other languages
English (en)
Japanese (ja)
Inventor
Ryosuke Futatsumori
Tomohito Yamazaki
Kei Ishiguro
Original Assignee
NTT DOCOMO, INC.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NTT DOCOMO, INC.
Priority to JP2023556689A
Publication of WO2023074875A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • One aspect of the present invention relates to a virtual content providing device.
  • Patent Document 1 discloses an information processing apparatus that provides a virtual space to a user wearing a head-mounted device (HMD).
  • In the apparatus of Patent Document 1, when virtual content (for example, an enemy object, which is a virtual object) exists within the visual field image, the moving speed of the virtual content is controlled to become a first moving speed, and when the virtual content does not exist within the visual field image, the moving speed of the virtual content is controlled to become a second moving speed slower than the first moving speed.
  • That is, Patent Document 1 merely makes the moving speed of virtual content that exists in the user's field of view faster than the moving speed of virtual content that does not exist in the user's field of view, and does not consider the visibility of the virtual content.
  • one aspect of the present invention aims to provide a virtual content providing device capable of improving the visibility of virtual content.
  • A virtual content providing device according to one aspect of the present invention is a virtual content providing device that provides virtual content to be displayed on a display unit of a user terminal used by a user.
  • The virtual content providing device includes: a content acquisition unit that acquires virtual content whose state changes over time; an area setting unit that sets a deceleration area that is part of the area displayed on the display unit; and a display control unit that controls the display of the virtual content so that a first change speed of the state applied to virtual content that exists within the deceleration area is slower than a second change speed of the state applied to virtual content that does not exist within the deceleration area.
  • In this virtual content providing device, the display is controlled so that the state change speed of the virtual content when it exists in the deceleration region, which is part of the region displayed on the display unit (the first change speed), is slower than the state change speed when it does not exist within the deceleration region, that is, when it exists outside the deceleration region (the second change speed). According to this configuration, the content of the virtual content within the deceleration area becomes easier to grasp, and therefore the visibility of the virtual content can be improved.
  • FIG. 2 is a diagram schematically showing an example of the deceleration region;
  • FIG. 3 is a diagram schematically showing an example of the deceleration region;
  • FIG. 4 is a diagram illustrating an example of display control of virtual content;
  • FIG. 5 is a diagram illustrating an example of display control of virtual content;
  • FIG. 6 is a diagram showing an example of virtual content;
  • FIG. 7 is a flowchart showing an example of the operation of the virtual content providing system;
  • FIG. 8 is a flowchart showing an example of the processing in step S4 of FIG. 7;
  • FIG. 9 is a diagram showing an example of the hardware configuration of the server.
  • FIG. 1 is a diagram showing an example of a virtual content providing system 1 according to one embodiment.
  • the virtual content providing system 1 is a computer system that provides virtual content to be displayed on the display unit of the user terminal used by the user. That is, the virtual content providing system 1 is a system that provides users with VR, AR, MR, or other XR experiences by displaying virtual content on the display unit of the user terminal.
  • the virtual content providing system 1 includes a server 10 (virtual content providing device), a content DB 20, and an HMD (Head Mounted Display) 30, which is a user terminal.
  • the server 10 is communicably connected to the content DB 20 and HMD 30 via any wired or wireless communication network.
  • the server 10 distributes the virtual content stored in the content DB 20 to the HMD 30 worn on the user's head, and controls the display of the virtual content on the HMD 30 .
  • the user can view virtual content distributed and display-controlled by the server 10 through the HMD 30 .
  • In FIG. 1, only one HMD 30 is illustrated as a target for which the server 10 performs display control of virtual content, but the server 10 may be configured to perform display control for a plurality of HMDs 30.
  • the server 10 may be configured by a single computer device, or may be configured by a plurality of computer devices that can communicate with each other.
  • the content DB 20 is a database device (database server) that stores virtual content data to be displayed on the HMD 30 .
  • a virtual content is a virtual object displayed on the display unit 31 of the HMD 30 .
  • virtual content is display information indicating an object that does not exist in the real space.
  • the type of virtual content is not limited to a specific type.
  • Examples of virtual content include images (still images or moving images), text information, graphic information, 3D models, and the like.
  • Such virtual content may include objects that change state over time. Examples of changes in state over time include movement based on a predetermined movement direction and movement speed, changes in the size or shape of an object, and the like. That is, the state of virtual content includes the position, size, shape, and the like of virtual content.
  • the server 10 executes processing for improving the visibility of virtual content that accompanies such state changes over time.
  • Hereinafter, attention is paid only to virtual content whose state changes over time; that is, "virtual content whose state changes over time" is simply referred to as "virtual content."
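As a minimal sketch, the time-varying state described above (position, size, shape, and the like) can be modeled as a simple structure advanced per time step; the names and the 1-D falling motion are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class VirtualContentState:
    """Illustrative state of one virtual content item."""
    y: float      # vertical position (a falling object decreases in y)
    size: float   # rendered size
    shape: str    # shape identifier

def advance(state: VirtualContentState, speed: float, dt: float) -> VirtualContentState:
    """Advance the falling content by one time step at the given change speed."""
    return VirtualContentState(y=state.y - speed * dt, size=state.size, shape=state.shape)

s = VirtualContentState(y=10.0, size=1.0, shape="raindrop")
s = advance(s, speed=2.0, dt=0.5)   # y: 10.0 -> 9.0
```

Slowing the state change, as described below, then amounts to passing a smaller `speed` while the content lies in the deceleration area.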
  • the content DB 20 stores data for drawing (rendering) virtual content on the display unit 31 of the HMD 30, for example.
  • The content DB 20 may store, in association with the virtual content, setting information prescribing standard specifications of the virtual content (for example, its size, operation, and display position (for example, information indicating the coordinate position in the real space at which the virtual content is to be displayed)).
  • the HMD 30 is a device worn on the user's head.
  • the form of HMD30 is not limited to a specific form.
  • The HMD 30 can take various forms such as a goggle type, a glasses type, and a hat type.
  • the HMD 30 is, for example, smart glasses such as XR glasses.
  • the HMD 30 is AR glasses that have the function of providing augmented reality (AR) to the user.
  • the HMD 30 is a see-through glass that allows the user to view the real space (outside world) and virtual content superimposed on the real space.
  • The HMD 30 is not limited to the above, and may be an MR device such as MR glasses having a function of providing mixed reality (MR) to the user, or a VR device such as VR glasses having a function of providing virtual reality (VR) to the user.
  • the HMD 30 has a display section 31 and a sensor 32 .
  • the display unit 31 is a display placed in front of the user's eyes.
  • the display unit 31 is configured by, for example, a liquid crystal display element, an organic EL (Electro Luminescence) element, or the like.
  • the display unit 31 may separately have a right-eye display panel arranged in front of the user's right eye and a left-eye display panel arranged in front of the user's left eye.
  • the display unit 31 may be configured by an optical see-through type (transmissive type) display, or may be configured by a video see-through type display. In the former case, the display unit 31 displays a transmitted image of the optically transmitted real space (that is, the real space in front of the display unit 31).
  • In the latter case, the display unit 31 displays a video image (picture) acquired (captured) in real time by a camera (a type of sensor 32) provided in the HMD 30.
  • the display unit 31 also displays virtual content (that is, virtual content distributed from the server 10) that is superimposed on the image of the physical space (transmission image or video image).
  • the sensor 32 is a sensor group consisting of one or more sensors included in the HMD 30 .
  • the sensor 32 has, for example, a position sensor that detects the posture (orientation) and position of the HMD 30 .
  • a position sensor is configured by, for example, an acceleration sensor, a gyro sensor, a GPS sensor, or the like.
  • the sensor 32 has a line-of-sight sensor that detects the user's line of sight.
  • the line-of-sight sensor is configured to detect the user's line of sight, for example, by a known eye-tracking technique.
  • the sensor 32 has an external sensor such as a camera for acquiring a video image of the real space in front of the HMD 30 .
  • Information acquired by the various sensors included in the sensor 32 as described above is transmitted to the server 10 periodically, for example at predetermined time intervals.
  • the server 10 has a content acquisition unit 11, a line-of-sight information acquisition unit 12, an area setting unit 13, and a display control unit 14.
  • the processing contents of the server 10 will be described below, focusing on the case where the server 10 performs display control on the HMD 30 of a specific user U (see FIG. 2).
  • the content acquisition unit 11 acquires virtual content to be displayed on the display unit 31 of the HMD 30 of the user U. That is, the content acquisition unit 11 acquires virtual content to be distributed to the user U from among the virtual content stored in the content DB 20 .
  • For example, the virtual content is received by a predetermined server (which may be the server 10 or a server different from the server 10) and stored in the content DB 20.
  • the content acquisition unit 11 acquires the above virtual content addressed to the user U from the content DB 20 by periodically accessing the content DB 20 (or by a push notification from the content DB 20).
  • For example, the content acquisition unit 11 may acquire position information for specifying the physical space displayed on the display unit 31 of the HMD 30 (for example, information indicating the orientation and position of the HMD 30 detected by the position sensor of the HMD 30). Then, when the content DB 20 stores virtual content whose display location is set to the location specified by the position information acquired as described above, the content acquisition unit 11 acquires that virtual content from the content DB 20.
  • the mode in which the content acquisition unit 11 acquires virtual content is not limited to the mode illustrated above.
  • the content acquisition unit 11 may acquire the virtual content by newly generating virtual content to be displayed on the display unit 31 of the HMD 30 of the user U.
  • the line-of-sight information acquisition unit 12 acquires line-of-sight information indicating the user's U line of sight.
  • the line-of-sight information acquisition unit 12 may acquire information indicating the line of sight of the user U detected by the sensor 32 (line-of-sight sensor) of the HMD 30 of the user U as the line-of-sight information.
  • the line-of-sight information is information for specifying the gaze point P (see FIG. 3) of the user U in the display area 31a (see FIG. 3) of the display unit 31.
  • the area setting unit 13 sets a deceleration area that is part of the area displayed on the display unit 31 .
  • An example of the deceleration region will be described with reference to FIGS. 2 and 3.
  • FIGS. 2 and 3 are diagrams schematically showing an example of the deceleration region.
  • FIG. 2 schematically shows the positional relationship among the HMD 30, the deceleration area A, and the virtual content 50 when viewed from the right side of the user U.
  • FIG. 3 is a diagram showing an example of the positional relationship between the deceleration area A and the virtual content 50 in the display area 31a of the display unit 31 of the HMD 30 of the user U.
  • In FIGS. 2 and 3, the Z-axis direction corresponds to the vertical direction seen from the user U (that is, the vertical direction of the HMD 30), the X-axis direction corresponds to the front direction seen from the user U, and the Y-axis direction corresponds to the lateral (horizontal) direction seen from the user U.
  • For example, the area setting unit 13 sets an area including the user's gaze point P, specified based on the line-of-sight information acquired by the line-of-sight information acquisition unit 12, as the deceleration area A. That is, the area setting unit 13 sets a partial area included in the field of view of the user U as the deceleration area A.
  • the area setting unit 13 may set, as the deceleration area A, an area corresponding to the effective visual field of the user U specified based on the line-of-sight information.
  • the effective field of view is the visual field known as the area within which instantaneous information reception is possible.
  • the effective field of view is an area within about 30 degrees in the horizontal direction and within about 20 degrees in the vertical direction with respect to the line-of-sight axis (the line connecting the viewpoint (position of the eyes) of the user U and the gaze point P).
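The effective-field bounds quoted above can be sketched as a simple angular test; a minimal sketch assuming gaze and candidate directions are given as horizontal/vertical angles in degrees, and that the field is symmetric about the line-of-sight axis (the function name is illustrative):

```python
# Approximate effective visual field with respect to the line-of-sight axis,
# per the figures quoted above: ~30 deg horizontally, ~20 deg vertically.
H_LIMIT_DEG = 30.0
V_LIMIT_DEG = 20.0

def in_effective_field(gaze_h_deg: float, gaze_v_deg: float,
                       point_h_deg: float, point_v_deg: float) -> bool:
    """Return True if a viewing direction lies within the effective visual
    field centred on the gaze axis (angular offsets within the limits)."""
    return (abs(point_h_deg - gaze_h_deg) <= H_LIMIT_DEG and
            abs(point_v_deg - gaze_v_deg) <= V_LIMIT_DEG)
```

A deceleration area A corresponding to the effective visual field could then be taken as the set of display positions for which this test holds.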
  • Note that the line-of-sight information acquired by the line-of-sight information acquisition unit 12 (that is, information indicating the actual line of sight of the user U) is not essential for setting the deceleration area A.
  • For example, the region setting unit 13 may regard the front direction of the user U (that is, the line-of-sight direction assuming that the user U is looking straight ahead) as the line-of-sight direction of the user U, and set the deceleration area A based on that direction.
  • However, by using the actual line-of-sight information, the deceleration area A, which slows down the speed of the state change of the virtual content 50 and allows the user U to grasp the content of the virtual content 50, can be set more appropriately.
  • The region setting unit 13 may divide the deceleration region A described above into a plurality of further subdivided regions based on the degree of visibility (the ease of visually grasping information).
  • For example, the area setting unit 13 may set a deceleration area A that includes a first area A1 having a first visibility and, outside the first area A1, a second area A2 having a second visibility lower than the first visibility.
  • the area setting unit 13 may set the area corresponding to the user U's discriminative visual field as the first area A1.
  • the discriminative visual field is the visual field known as the range in which visual performance such as visual acuity is excellent.
  • For example, the discriminative field of view is the area within about 5 degrees both horizontally and vertically with respect to the line-of-sight axis. Further, as an example, the area setting unit 13 may set the annular portion of the effective visual field that does not include the discriminative visual field (that is, that excludes the first area A1) as the second area A2.
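The subdivision above can be sketched as a classification of an angular offset from the gaze axis into the first area A1, the second area A2, or the outside of the deceleration area; the degree bounds are the approximate figures quoted in the text, and the label names are illustrative:

```python
def classify_region(dh_deg: float, dv_deg: float) -> str:
    """Classify an angular offset from the gaze axis.

    "A1"      = discriminative field (~5 deg horizontally and vertically),
    "A2"      = effective field (~30 deg x ~20 deg) minus A1,
    "outside" = beyond the effective field (outside the deceleration area A).
    """
    dh, dv = abs(dh_deg), abs(dv_deg)
    if dh <= 5.0 and dv <= 5.0:
        return "A1"
    if dh <= 30.0 and dv <= 20.0:
        return "A2"
    return "outside"
```

The display control described below then only needs this label to pick the change speed to apply.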
  • For example, the change speed v2 is a standard speed preset for the virtual content 50.
  • For example, when the virtual content 50 simulates rain falling from the sky, the change speed v2 is set to a speed equivalent to the falling speed of rain (that is, a speed at which the user U can perceive that the object is a moving object).
  • the change speed v1 is a speed that is slower than the change speed v2, which is the standard speed, in order to improve the visibility of the virtual content 50 .
  • The display control unit 14 may variably set the change speed v1 so that the closer the virtual content 50 is to the center of the deceleration area A, the slower the change speed v1 becomes. For example, the display control unit 14 may start decelerating the virtual content 50 when the virtual content 50 enters the deceleration area A, and increase the degree of deceleration as the virtual content 50 moves toward the center of the deceleration area A (that is, change the change speed v1 so that it becomes smaller (slower) as the virtual content 50 approaches the center of the deceleration area A).
  • In this case, the display control unit 14 may control the change speed v1 so that it is minimized when the virtual content 50 is positioned at the center of the deceleration area A.
  • Conversely, the display control unit 14 may control the degree of deceleration to decrease as the virtual content 50 moves away from the center of the deceleration area A (that is, change the change speed v1 so that it becomes larger (faster) as the virtual content 50 moves away from the center of the deceleration area A). According to the above configuration, the change speed of the virtual content 50 that has entered the deceleration area A can be changed smoothly.
  • As a result, the virtual content 50 can be presented to the user U with little sense of incongruity.
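One way to realize the smooth, distance-dependent deceleration described above is to interpolate the change speed between the standard speed v2 at the edge of the deceleration area and a minimum speed at its center. The linear profile below is an assumption for illustration; the text does not specify the profile:

```python
def change_speed(dist_from_center: float, radius: float,
                 v1_min: float, v2: float) -> float:
    """Speed applied to content at a given distance from the area's center.

    Outside the area (dist >= radius) the standard speed v2 applies.
    Inside, the speed falls linearly from v2 at the edge down to v1_min
    at the center, so the degree of deceleration grows toward the center
    and is greatest (speed minimal) exactly at the center.
    """
    if dist_from_center >= radius:
        return v2
    t = dist_from_center / radius   # 0.0 at center, 1.0 at edge
    return v1_min + (v2 - v1_min) * t
```

Content leaving the area traverses the same profile in reverse, which gives the smooth re-acceleration mentioned above.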
  • In the examples of FIGS. 2 and 3, the virtual content 50 is an object that vertically falls from above to below at a position slightly in front of the user U. That is, in this example, the virtual content 50 moves downward in the vertical direction as time elapses: as shown in FIGS. 2 and 3, the virtual content 50 changes its position (state) in the order of position P1 → position P2 → position P3 → position P4 → position P5 over time.
  • the change speed of the position (state) of the virtual content 50 means the moving speed of the virtual content 50 .
  • the speed of change in the state of the virtual content 50 means the speed of change in size or shape of the virtual content 50.
  • A specific method of determining whether or not the virtual content 50 exists within the deceleration area A can be chosen arbitrarily. For example, it may be determined that the virtual content 50 exists within the deceleration area A when the entire virtual content 50 is included in the deceleration area A, or when at least part of the virtual content 50 is included in the deceleration area A. Alternatively, it may be determined that the virtual content 50 exists within the deceleration area A when a threshold percentage or more (for example, 50% or more) of the virtual content 50 is included in the deceleration area A.
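The three determination policies above can be sketched as follows, assuming the fraction of the content overlapping the deceleration area has already been computed (the parameter and policy names are illustrative):

```python
def in_deceleration_area(overlap_fraction: float, policy: str = "threshold",
                         threshold: float = 0.5) -> bool:
    """Decide whether content counts as existing within the deceleration area.

    overlap_fraction: fraction of the content inside the area, 0.0..1.0.
    policy: "whole"     -> the entire content must be inside,
            "partial"   -> any overlap suffices,
            "threshold" -> at least `threshold` of the content is inside
                           (e.g. 0.5 for the "50% or more" example).
    """
    if policy == "whole":
        return overlap_fraction >= 1.0
    if policy == "partial":
        return overlap_fraction > 0.0
    return overlap_fraction >= threshold
```

Which policy is appropriate likely depends on the size of the content relative to the area; the text leaves the choice open.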
  • In the examples of FIGS. 2 and 3, the display control unit 14 controls the movement speed (change speed) of the virtual content 50 displayed in the display area 31a so that "the change speed v2 when the virtual content 50 does not exist within the deceleration area A (that is, when the virtual content 50 is positioned at the position P1 or the position P5)" > "the change speed v4 when the virtual content 50 exists in the second area A2 (that is, when the virtual content 50 is positioned at the position P2 or the position P4)" > "the change speed v3 when the virtual content 50 exists in the first area A1 (that is, when the virtual content 50 is positioned at the position P3)".
  • According to the above configuration, when the virtual content 50 enters the second area A2, the speed of the state change of the virtual content 50 starts decelerating, and when the virtual content 50 enters the first area A1, the speed of the state change can be reduced further.
  • Since the state change speed of the virtual content 50 is controlled to be slowest in the region with better visibility (the first region A1), the user U can grasp the content of the virtual content 50 more effectively.
  • In the example of FIG. 4, the moving speed (change speed v1) of the virtual content 50 when the virtual content 50 exists within the deceleration area A is made slower than the moving speed (change speed v2) of the virtual content 50 when the virtual content 50 does not exist within the deceleration area A.
  • In FIG. 4, states s1, s2, s3, s4, and s5 are the states displayed in the display area 31a in successive frames f1, f2, f3, f4, and f5, assuming that the virtual content 50 moves at the change speed v2.
  • In this example, the virtual content 50 enters the deceleration area A when transitioning from frame f1 to frame f2, and exits the deceleration area A when transitioning from frame f4 to frame f5.
  • As a method of controlling the display of the virtual content 50 so as to make the user U feel that the movement speed (change speed v1) applied to the virtual content 50 when it exists within the deceleration area A has become half the movement speed (change speed v2) applied to the virtual content 50 when it does not exist within the deceleration area A, the following method is conceivable.
  • FIG. 5A shows the correspondence between frames f1 to f5 and states s1 to s5 of virtual content 50 assuming that virtual content 50 continues to move at changing speed v2.
  • the frame interval at this time is represented as d.
  • (B) of FIG. 5 shows the correspondence between each frame f1 to f5 and each state s1 to s5 of the virtual content 50 when the frame interval applied to the virtual content 50 existing within the deceleration area A is changed to "2×d", twice the frame interval "d" (shown in (A) of FIG. 5) applied to the virtual content 50 not located within the deceleration area A.
  • That is, the change speed v1 of the virtual content 50 existing in the deceleration area A can be made slower than the change speed v2 of the virtual content 50 not existing within the deceleration area A by relatively easy processing, such as switching the frame rate applied to the virtual content 50 depending on whether it exists in the deceleration area A. As shown in (A) and (B) of FIG. 5, by making the frame interval applied to the virtual content 50 existing within the deceleration area A longer than at normal times (when the virtual content 50 does not exist within the deceleration area A), the speed of the state change of the virtual content 50 existing in the deceleration area A can be apparently slowed down.
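The frame-interval switch of FIG. 5 can be sketched as follows: the content only advances to its next precomputed state every other displayed frame while it lies inside the deceleration area, halving its apparent speed. This is a sketch of the "2×d" example above; the function name and flag representation are illustrative:

```python
def displayed_states(states, in_area_flags, slow_factor=2):
    """Replay a precomputed state sequence, holding each state for
    `slow_factor` displayed frames while the content is in the area.

    states:        states s1, s2, ... the content would take at speed v2
    in_area_flags: for each displayed frame, whether the content is in
                   the deceleration area at that frame
    Returns the state actually shown in each displayed frame.
    """
    shown = []
    i = 0      # index into the precomputed states
    hold = 0   # displayed frames the current state has been held
    for in_area in in_area_flags:
        shown.append(states[min(i, len(states) - 1)])
        hold += 1
        interval = slow_factor if in_area else 1   # "2×d" vs "d"
        if hold >= interval:
            i += 1
            hold = 0
    return shown
```

For example, `displayed_states([1, 2, 3], [False, True, True, True, True])` holds each in-area state for two frames, yielding `[1, 2, 2, 3, 3]`.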
  • However, with this method, the virtual content 50 continues to be displayed in the same state (position) in the deceleration area A for a long time.
  • the state change (movement) of the virtual content 50 displayed in the display area 31a is not smooth, and the user U may feel that the virtual content 50 is instantaneously moving at constant intervals.
  • Therefore, the display control unit 14 may perform the following display control so that the state change (movement, in the example of FIG. 4) of the virtual content 50 is smooth in the display area 31a. That is, in the deceleration area A, the display control unit 14 may interpolate one or more intermediate third states of the virtual content 50 between the first state of the virtual content 50 corresponding to a first frame and the second state of the virtual content 50 corresponding to a second frame, assuming that the state of the virtual content 50 changes at the change speed v2.
  • In this case, each frame fn and each state sn are displayed without changing the frame rate (frame interval) applied to the virtual content 50 depending on whether the virtual content 50 exists within the deceleration area A.
  • the state change of the virtual content 50 displayed in the display area 31a can be made smoother.
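The interpolation above can be sketched for a falling position: intermediate states are inserted between two states the content would occupy at speed v2, so motion stays smooth at an unchanged frame rate. Linear interpolation over 1-D positions is an assumption for illustration:

```python
def interpolate_states(s_first: float, s_second: float, n_intermediate: int):
    """Insert n_intermediate evenly spaced intermediate ("third") states
    between the first and second states (here, 1-D positions), so the
    content moves smoothly without changing the frame rate."""
    step = (s_second - s_first) / (n_intermediate + 1)
    return [s_first + step * k for k in range(1, n_intermediate + 1)]

# One intermediate state halves the per-frame movement, so the apparent
# change speed becomes v2 / 2 at the same frame rate.
mid = interpolate_states(10.0, 8.0, 1)   # [9.0]
```

More intermediate states give a stronger apparent deceleration (n intermediate states reduce the apparent speed to v2 / (n + 1)).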
  • FIG. 6 is a diagram showing another example of virtual content.
  • the virtual content 50 may include text information consisting of multiple characters.
  • the virtual content 50 is an object that moves along a predetermined moving direction (vertical falling direction in the example of FIG. 6).
  • the virtual content 50 can be configured, for example, as an object that vertically falls from the sky in front of the user U (HMD 30).
  • the virtual content 50 falls and moves as if a plurality of characters were raining down from the sky.
  • the virtual content 50 moves at a relatively high falling speed (change speed v2).
  • In this case, while the virtual content 50 exists within the deceleration area A, the display control unit 14 described above causes the virtual content 50 to fall slowly in the display area 31a at a falling speed (change speed v1) slower than the change speed v2. As a result, the user U can easily visually recognize the text information included in the virtual content 50 while the virtual content 50 is being decelerated within the deceleration area A.
  • Furthermore, the display control unit 14 may perform the following display control in order to ensure the visibility of the text information of the virtual content 50. That is, the display control unit 14 calculates a deceleration period, during which the virtual content 50 moves at the change speed v1 slower than the change speed v2, based on the moving direction of the virtual content 50, the change speed v1 applied when the virtual content 50 exists within the deceleration area A, and the deceleration area A (for example, the position, size, and shape of the deceleration area A).
  • More specifically, based on the moving direction of the virtual content 50, the change speed v1, and the deceleration area A, the display control unit 14 specifies the time T1 when the virtual content 50 enters the deceleration area A and the time T2 when the virtual content 50 exits the deceleration area A (that is, the time when the virtual content 50 first ceases to exist within the deceleration area A after entering it), and calculates the period "T2-T1" from time T1 to time T2 as the deceleration period.
  • Then, the display control unit 14 limits the number of characters included in the text information of the virtual content 50 to a predetermined number or less based on the calculated deceleration period. For example, in movies with foreign-language audio, the number of subtitle characters displayed at one time is generally limited to about four characters per second.
  • As an example, the display control unit 14 may limit the number of characters included in the text information of the virtual content 50 based on the same rule as that for limiting the number of characters in movie subtitles. For example, when the deceleration period is calculated to be 5 seconds, the display control unit 14 may limit the number of characters included in the text information of the virtual content 50 to 20 characters or less by applying the above rule (4 characters/second). According to the above configuration, when the virtual content 50 is an object including text information, the visibility of the virtual content 50 (that is, the ease of grasping the text information) can be appropriately ensured.
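The character-count limit above follows directly from the deceleration period; a minimal sketch using the 4-characters-per-second subtitle rule quoted in the text (function names are illustrative):

```python
def deceleration_period(t_enter: float, t_exit: float) -> float:
    """Period "T2 - T1" during which the content moves at the slower speed v1."""
    return t_exit - t_enter

def max_characters(period_s: float, chars_per_second: float = 4.0) -> int:
    """Upper bound on the number of text characters readable within the
    deceleration period, using the ~4 characters/second subtitle rule."""
    return int(period_s * chars_per_second)

# The example from the text: a 5-second deceleration period allows up to
# 20 characters of text information.
limit = max_characters(deceleration_period(0.0, 5.0))   # 20
```

Text information longer than this limit would be truncated or split across multiple content items before display.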
  • In step S1, the content acquisition unit 11 acquires the virtual content 50 to be displayed on the display unit 31 of the HMD 30 of the user U.
  • In step S2, the line-of-sight information acquisition unit 12 acquires line-of-sight information indicating the line of sight of the user U.
  • In step S3, the region setting unit 13 sets the deceleration region A.
  • the region setting unit 13 sets, as the deceleration region A, a region including the gaze point P of the user U specified based on the line-of-sight information acquired in step S2.
  • the area setting unit 13 sets an area corresponding to the effective visual field of the user U as the deceleration area A.
  • the area setting unit 13 sets the area corresponding to the discriminative visual field of the user U in the deceleration area A as the first area A1, and the area excluding the first area A1 in the deceleration area A (that is, the area included in the effective visual field). is not included in the discrimination visual field) is defined as a second area A2.
  • In step S4, the display control unit 14 performs display control of the virtual content 50 displayed in the display area 31a of the display unit 31.
  • An example of display control (the processing of step S4) by the display control unit 14 will be described with reference to the flowchart shown in FIG. 8. Note that the processing of the flowchart of FIG. 8 is executed for each virtual content 50. That is, when a plurality of independent virtual contents 50 exist in the display area 31a, the display control unit 14 may perform the processing of the flowchart of FIG. 8 for each of the virtual contents 50.
  • In step S41, the display control unit 14 determines whether or not the virtual content 50 exists within the deceleration area A.
  • For example, if the virtual content 50 exists at the position P1 or P5 shown in FIG. 3, it is determined in step S41 that the virtual content 50 does not exist within the deceleration area A (step S41: NO).
  • In step S42, the display control unit 14 applies the change speed v2 described above as the state change speed set for the virtual content 50.
  • If it is determined in step S41 that the virtual content 50 exists within the deceleration area A (step S41: YES), the display control unit 14 determines in step S43 whether or not the virtual content 50 exists in the first area A1.
  • For example, if the virtual content 50 exists at the position P3 shown in FIG. 3, it is determined in step S43 that the virtual content 50 exists within the first area A1 (step S43: YES).
  • In step S44, the display control unit 14 applies the change speed v3 described above as the state change speed set for the virtual content 50.
  • If the virtual content 50 exists at the position P2 or P4 shown in FIG. 3, it is determined in step S43 that the virtual content 50 does not exist within the first area A1 (that is, the virtual content 50 exists in the second area A2) (step S43: NO).
  • In step S45, the display control unit 14 applies the change speed v4 described above as the state change speed set for the virtual content 50.
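The branch structure of steps S41 to S45 can be sketched as follows. The concrete speed values are placeholders, and the ordering v3 < v4 is an assumption; the description only requires that the speeds applied inside the deceleration area A be slower than the speed v2 applied outside it.

```python
CHANGE_SPEED_V2 = 1.0   # outside A: original, non-decelerated speed
CHANGE_SPEED_V3 = 0.2   # inside the first area A1 (assumed value)
CHANGE_SPEED_V4 = 0.5   # inside the second area A2 (assumed value)

def state_change_speed(in_deceleration_area: bool, in_first_area: bool) -> float:
    """Return the state change speed applied to one virtual content 50."""
    if not in_deceleration_area:      # step S41: NO -> step S42
        return CHANGE_SPEED_V2
    if in_first_area:                 # step S43: YES -> step S44
        return CHANGE_SPEED_V3
    return CHANGE_SPEED_V4            # step S43: NO -> step S45

# Positions P1/P5 (outside A), P3 (in A1), P2/P4 (in A2) from the text:
print(state_change_speed(False, False))  # 1.0
print(state_change_speed(True, True))    # 0.2
print(state_change_speed(True, False))   # 0.5
```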
  • As described above, the display of the virtual content 50 is controlled so that the state change speed (change speed v1) of the virtual content 50 when the virtual content 50 is present in the deceleration region A, which is a part of the region displayed on the display unit 31, becomes slower than the state change speed (change speed v2) of the virtual content 50 when the virtual content 50 does not exist within the deceleration area A (that is, when the virtual content 50 exists outside the deceleration area A). According to the above configuration, the content of the virtual content 50 in the deceleration area A can be grasped easily, so that the visibility of the virtual content 50 can be improved.
  • Furthermore, the deceleration area A, in which the speed of the state change of the virtual content 50 is decelerated, is set in only a part of the display area 31a.
  • Outside the deceleration area A, the state of the virtual content 50 is changed at the change speed v2 (that is, the speed of change without deceleration). Therefore, the user U can grasp the original motion of the virtual content 50 in the corners of the field of view (regions off the center of the field of view), and can easily recognize the content of the decelerated virtual content 50 in the center of the field of view (that is, the deceleration region A). More specifically, in the example of FIG. 6, the virtual content 50 performs its original action (for example, the action of falling from the sky like rain) at the change speed v2 outside the deceleration area A in the display area 31a.
  • As a result, the user U can see the characters falling like rain in the corner of his or her vision. In other words, the user U can be made to experience the effect of the virtual content 50 of "making letters fall like rain." In this way, according to the above-described display control of the virtual content 50 by the server 10, it is possible to allow the user U to experience the original dynamic operation of the virtual content 50 and to easily understand the contents of the virtual content 50.
  • Although the server 10 functions as the virtual content providing device in the above embodiment, the functions of the server 10 may be implemented in the HMD 30.
  • In this case, the HMD 30 functions as the virtual content providing device.
  • Alternatively, some of the functions of the server 10 described above may be executed by the HMD 30.
  • In this case, the computer system including the server 10 and the HMD 30 functions as the virtual content providing device.
  • Each functional block may be implemented using one device that is physically or logically coupled, or using two or more devices that are physically or logically separated and connected directly or indirectly (for example, by wire, wirelessly, or the like).
  • A functional block may also be implemented by combining software with the one device or the plurality of devices.
  • Functions include judging, determining, calculating, computing, processing, deriving, investigating, searching, checking, receiving, transmitting, outputting, accessing, resolving, selecting, choosing, establishing, comparing, assuming, expecting, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, assigning, and the like, but are not limited to these.
  • the server 10 in one embodiment of the present disclosure may function as a computer that performs the virtual content providing method of the present disclosure.
  • FIG. 9 is a diagram illustrating an example of a hardware configuration of server 10 according to an embodiment of the present disclosure.
  • the server 10 may be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007 and the like.
  • the hardware configuration of the server 10 may be configured to include one or more of the devices shown in FIG. 9, or may be configured without some of the devices.
  • the HMD 30 may also have the same hardware configuration as the hardware configuration of the server 10 shown in FIG.
  • Each function in the server 10 is realized by loading predetermined software (programs) onto hardware such as the processor 1001 and the memory 1002, causing the processor 1001 to perform calculations, controlling communication by the communication device 1004, and controlling at least one of reading and writing of data in the memory 1002 and the storage 1003.
  • The processor 1001, for example, operates an operating system to control the entire computer.
  • the processor 1001 may be configured by a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic device, registers, and the like.
  • the processor 1001 reads programs (program codes), software modules, data, etc. from at least one of the storage 1003 and the communication device 1004 to the memory 1002, and executes various processes according to them.
  • As the program, a program that causes a computer to execute at least part of the operations described in the above embodiments is used.
  • For example, each functional unit of the server 10 (for example, the display control unit 14) may be implemented by a control program that is stored in the memory 1002 and operates in the processor 1001, and other functional blocks may be implemented similarly.
  • The processor 1001 may be implemented by one or more chips.
  • the program may be transmitted from a network via an electric communication line.
  • The memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), and RAM (Random Access Memory).
  • the memory 1002 may also be called a register, cache, main memory (main storage device), or the like.
  • the memory 1002 can store executable programs (program codes), software modules, etc. for implementing the content providing method according to an embodiment of the present disclosure.
  • the storage 1003 is a computer-readable recording medium, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disc, a magneto-optical disc (for example, a compact disc, a digital versatile disc, a Blu-ray disk), smart card, flash memory (eg, card, stick, key drive), floppy disk, magnetic strip, and/or the like.
  • Storage 1003 may also be called an auxiliary storage device.
  • the storage medium described above may be, for example, a database, server, or other suitable medium including at least one of memory 1002 and storage 1003 .
  • the communication device 1004 is hardware (transmitting/receiving device) for communicating between computers via at least one of a wired network and a wireless network, and is also called a network device, a network controller, a network card, a communication module, or the like.
  • the input device 1005 is an input device (for example, keyboard, mouse, microphone, switch, button, sensor, etc.) that receives input from the outside.
  • the output device 1006 is an output device (eg, display, speaker, LED lamp, etc.) that outputs to the outside. Note that the input device 1005 and the output device 1006 may be integrated (for example, a touch panel).
  • Each device such as the processor 1001 and the memory 1002 is connected by a bus 1007 for communicating information.
  • the bus 1007 may be configured using a single bus, or may be configured using different buses between devices.
  • The server 10 may include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), and an FPGA (Field Programmable Gate Array).
  • a part or all of each functional block may be implemented by the hardware.
  • processor 1001 may be implemented using at least one of these pieces of hardware.
  • Input/output information may be stored in a specific location (for example, memory) or managed using a management table. Input/output information and the like can be overwritten, updated, or appended. The output information and the like may be deleted. The entered information and the like may be transmitted to another device.
  • The determination may be made by a value represented by one bit (0 or 1), by a true/false value (Boolean: true or false), or by numerical comparison (for example, comparison with a predetermined value).
  • Notification of predetermined information is not limited to being performed explicitly, and may be performed implicitly (for example, by not notifying the predetermined information).
  • Software, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise, includes instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, and the like.
  • software, instructions, information, etc. may be transmitted and received via a transmission medium.
  • When software is sent from a website, server, or other remote source using at least one of wired technology (coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), etc.) and wireless technology (infrared, microwave, etc.), at least one of these wired and wireless technologies is included within the definition of a transmission medium.
  • Data, instructions, commands, information, signals, bits, symbols, chips, and the like may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, light fields or photons, or any combination thereof.
  • Information, parameters, and the like described in the present disclosure may be expressed using absolute values, may be expressed using relative values from a predetermined value, or may be expressed using other corresponding information.
  • any reference to elements using the "first,” “second,” etc. designations used in this disclosure does not generally limit the quantity or order of those elements. These designations may be used in this disclosure as a convenient method of distinguishing between two or more elements. Thus, reference to a first and second element does not imply that only two elements can be employed or that the first element must precede the second element in any way.
  • The phrase "A and B are different" may mean "A and B are different from each other."
  • the term may also mean that "A and B are different from C”.
  • Terms such as “separate,” “coupled,” etc. may also be interpreted in the same manner as “different.”
  • 1... Virtual content providing system, 10... Server (virtual content providing device), 11... Content acquisition unit, 12... Line-of-sight information acquisition unit, 13... Area setting unit, 14... Display control unit, 30... HMD (user terminal), 31... Display unit, 31a... Display area, 32... Sensor, 50... Virtual content, A... Deceleration area, A1... First area, A2... Second area.

Abstract

According to one embodiment of the present invention, a server (10) provides virtual content (50) to be displayed on a display unit (31) of an HMD (30) used by a user. The server (10) comprises: a content acquisition unit (11) that acquires virtual content (50) whose state changes over time; a region setting unit (13) that sets a deceleration region (A), which is a part of a region displayed on the display unit (31); and a display control unit (14) that controls the display of the virtual content (50) such that a state change speed (v1) of the virtual content (50), applied to virtual content (50) present in the deceleration region (A), becomes slower than a state change speed (v2) of the virtual content (50) applied to virtual content (50) not present in the deceleration region (A).
PCT/JP2022/040515 2021-10-29 2022-10-28 Dispositif de fourniture de contenu virtuel WO2023074875A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023556689A JPWO2023074875A1 (fr) 2021-10-29 2022-10-28

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-177119 2021-10-29
JP2021177119 2021-10-29

Publications (1)

Publication Number Publication Date
WO2023074875A1 true WO2023074875A1 (fr) 2023-05-04

Family

ID=86158663

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/040515 WO2023074875A1 (fr) 2021-10-29 2022-10-28 Dispositif de fourniture de contenu virtuel

Country Status (2)

Country Link
JP (1) JPWO2023074875A1 (fr)
WO (1) WO2023074875A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014056217A (ja) * 2012-09-14 2014-03-27 Olympus Corp ウェアラブル携帯型表示装置、頭部装着型表示装置、表示処理システム及びプログラム
JP2014514658A (ja) * 2011-04-08 2014-06-19 アマゾン・テクノロジーズ、インコーポレイテッド 注視に基づくコンテンツディスプレイ


Also Published As

Publication number Publication date
JPWO2023074875A1 (fr) 2023-05-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22887200

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023556689

Country of ref document: JP