WO2022244052A1 - Head-mounted display - Google Patents

Head-mounted display

Info

Publication number
WO2022244052A1
WO2022244052A1 (PCT/JP2021/018597)
Authority
WO
WIPO (PCT)
Prior art keywords
image
information processing
head-mounted display
processing device
Prior art date
Application number
PCT/JP2021/018597
Other languages
English (en)
Japanese (ja)
Inventor
Mayumi Nakade
Masuo Oku
Yasunobu Hashimoto
Original Assignee
Maxell, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maxell, Ltd.
Priority to JP2023522001A (JPWO2022244052A1)
Priority to CN202180098200.3A (CN117337462A)
Priority to PCT/JP2021/018597 (WO2022244052A1)
Publication of WO2022244052A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37: Details of the operation on graphic patterns
    • G09G5/377: Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns

Definitions

  • The present invention relates to a head-mounted display device (HMD: Head Mounted Display) for experiencing virtual reality (VR: Virtual Reality) content and the like, with which the use of an information processing device or the like can be started while the user is experiencing content.
  • VR technology that allows you to experience an artificially created virtual reality space as if it were a real space is well known.
  • VR technology is applied, for example, to flight simulators, tourist information, and games in which a large number of users participate via a network to build a virtual world.
  • a user wearing an HMD experiences VR content by viewing a VR content image (also referred to as a VR image) displayed on the display of the HMD.
  • the VR image is supplied from a server or the like to the HMD and updated according to the user's movement.
  • Such VR content is known as so-called immersive content.
  • Patent Document 1 discloses that an HMD and an information processing device such as a smartphone are linked so that the user can recognize the real background in front through the semi-transmissive (transflective) display of the HMD.
  • Patent Document 1 also discloses that when the smartphone receives an incoming call, an incoming call icon is displayed on the semi-transmissive display, and the user operates the smartphone through the semi-transmissive display without attaching or detaching the HMD.
  • Furthermore, the device information of the smartphone is displayed as a two-dimensional code or the like on the screen of the smartphone, and the HMD reads the device information by capturing the smartphone screen with a camera. If the read device information matches the device information registered in the HMD, the smartphone authentication procedure is simplified, thereby improving usability.
  • There are also HMDs with non-transmissive displays.
  • When using an HMD with a non-transmissive display to experience or view immersive content such as VR content, the user cannot visually recognize an information processing device such as a smartphone. For this reason, when there is information that the user wants to view on the information processing device, or when there is an incoming call, the user cannot perform the necessary operations on the information processing device in a timely manner.
  • In view of this, a head-mounted display device according to the present invention includes a non-transmissive display that displays a VR image, a camera, and a control device. The control device analyzes the captured image generated by the camera while the user is experiencing VR content and, if it determines from the captured image that the user is holding the information processing device, displays the VR image and the image of the information processing device superimposed on the display.
  • According to the present invention, it is possible to provide a head-mounted display device that allows the user to use the information processing device whenever the user wants to, even while experiencing VR content.
  • FIG. 1 is a configuration diagram showing an example of a VR experience system including a head-mounted display device according to Embodiment 1 of the present invention;
  • FIG. 2 is an external view showing an example of the head-mounted display device according to Embodiment 1 of the present invention;
  • FIG. 3A is a block diagram showing an example of the head-mounted display device according to Embodiment 1 of the present invention;
  • FIG. 3B is a block diagram showing another example of the head-mounted display device according to Embodiment 1 of the present invention;
  • FIG. 4 is a block diagram showing an example of the configuration of the information processing device;
  • FIG. 5 is a flowchart showing an example of processing when a user who is experiencing VR content uses the information processing device;
  • FIG. 6 is a diagram showing a first example of a display image of the head-mounted display device in which a VR image and an image of the information processing device are superimposed;
  • FIG. 7 is a diagram showing a second example of a display image of the head-mounted display device in which a VR image and an image of the information processing device are superimposed;
  • FIG. 8 is a diagram showing a third example of a display image of the head-mounted display device in which a VR image and an image of the information processing device are superimposed;
  • FIG. 9 is an operation sequence diagram showing an example of processing when receiving an incoming call according to Embodiment 2 of the present invention;
  • FIG. 10 is a diagram showing an example of a display image when receiving an incoming call;
  • FIG. 11 is an operation sequence diagram showing an example of processing when receiving an incoming call according to Embodiment 3 of the present invention;
  • FIGS. 12A, 12B, and 12C are diagrams illustrating display images when the information processing device 3 is present within the imaging range of the camera;
  • FIGS. 13A and 13B are diagrams illustrating display images when the information processing device is not within the imaging range of the camera;
  • FIG. 14 is a flowchart showing an example of a method for detecting a user's cooperation instruction according to Embodiment 4 of the present invention;
  • FIG. 15 is a flowchart showing another example of a method for detecting a user's cooperation instruction according to Embodiment 4 of the present invention;
  • FIG. 16 is a flowchart showing an example of cooperative processing by the VR experience application according to Embodiment 4 of the present invention;
  • FIG. 17 is a diagram showing an example of an object used while experiencing VR content according to Embodiment 5 of the present invention;
  • FIG. 18 is a diagram showing another example of an object used while experiencing VR content according to Embodiment 5 of the present invention;
  • FIG. 19 is a block diagram showing an example of a communication module.
  • Embodiment 1. Embodiment 1 will be described with reference to FIGS. 1 to 8.
  • FIG. 1 is a configuration diagram showing an example of the VR experience system including the head-mounted display device according to Embodiment 1 of the present invention.
  • the VR experience system of FIG. 1 includes a head-mounted display device 2 worn by a user 1, an information processing device 3, a network 6, an access point 5 to the network 6, a VR service server 7, and the like.
  • The VR service server 7 transmits VR content (including VR images and VR audio) to the head-mounted display device 2 via the network 6 and the access point 5 in response to access from the user 1 or the head-mounted display device 2.
  • The user 1 wearing the head-mounted display device 2 experiences the VR content that the head-mounted display device 2 receives and downloads from the VR service server 7.
  • the VR service server 7 may be provided with a storage for storing VR contents, a communication device connected to the head mounted display device 2, and the like.
  • A personal computer or the like can be used as the VR service server 7.
  • The head-mounted display device 2 and the information processing device 3 send and receive various types of information to and from the VR service server 7 via the access point 5 and the network 6.
  • The head-mounted display device 2 transmits various information to the VR service server 7 by sending it to the access point 5 as the network signal 4a.
  • the head-mounted display device 2 receives various information from the VR service server 7 via the network signal 4c transmitted from the access point 5.
  • the network signals 4a, 4b, 4c are, for example, Wi-Fi (registered trademark) signals.
  • the information processing device 3 is, for example, a smart phone, a tablet terminal, a wearable terminal, etc., but is not limited to these.
  • Between the head-mounted display device 2 and the information processing device 3, various information may be transmitted and received via the network signals 4a and 4b described above, or via the near field communication 8.
  • the proximity communication 8 is, for example, Bluetooth (registered trademark).
  • FIG. 2 is an external view showing an example of the head mounted display device according to Embodiment 1 of the present invention.
  • FIG. 3A is a block diagram showing an example of a head mounted display device according to Embodiment 1 of the present invention.
  • The head-mounted display device 2 includes cameras 20a and 20b, displays 22a and 22b, speakers 23a and 23b, a microphone 24, proximity communication receivers (receiving devices) 25a, 25b, and 25c, a sensor group 210, and a control device 220. These elements are connected to each other via an internal bus 200.
  • the head mounted display device 2 also includes holders 26a and 26b.
  • the user 1 wears the head mounted display device 2 using the holders 26a and 26b.
  • the head-mounted display device 2 is fixed on the head by a holder 26a and fixed on the nose by a holder 26b.
  • the cameras 20a and 20b are attached so as to photograph the front of the head-mounted display device 2, that is, the line-of-sight direction of the user 1.
  • the camera 20a is arranged at a position corresponding to the left eye of the user 1
  • the camera 20b is arranged at a position corresponding to the right eye of the user 1.
  • Although two cameras are provided here, three or more cameras may be provided.
  • The cameras 20a and 20b capture images in the line-of-sight direction of the user 1 and output captured image data to the control device 220 via the internal bus 200.
  • The sensor group 210 includes, for example, an orientation sensor 211, a gyro sensor 212, an acceleration sensor 213, a distance sensor 214, a position sensor (not shown), etc., as shown in FIG. 3A. Based on the sensing results of these sensors, the position of the head-mounted display device 2 (that is, the user 1), the posture of the user 1 (for example, the tilt of the head), the line-of-sight direction of the user 1, and changes in that direction are detected. Each sensor outputs its sensing results to the control device 220 via the internal bus 200.
  • The distance sensor 214 is a sensor that measures the distance from the head-mounted display device 2 (that is, the user 1) to the information processing device 3, which is the object to be detected. Any sensor capable of three-dimensionally estimating the position of an object can be used as the distance sensor 214; examples include three-dimensional distance sensors such as LiDAR (Light Detection and Ranging) and TOF (Time of Flight) sensors. As shown in FIGS. 2 and 3A, when the head-mounted display device 2 is provided with proximity communication receivers for distance measurement, the distance sensor 214 may be omitted.
  • the displays 22a and 22b display contents such as VR contents (VR images) and guidance contents (guidance images) of the head-mounted display device 2, for example.
  • the displays 22a and 22b are of a non-transmissive type, and are composed of display panels (for example, curved panels or flat panels) such as liquid crystal or organic EL (Organic Electro-Luminescence), lenses, and the like.
  • the display 22a is a display corresponding to the user's 1 left eye
  • the display 22b is a display corresponding to the user's 1 right eye.
  • The displays 22a and 22b receive display image data (for example, VR image data or guidance image data) output from the control device 220 via the internal bus 200, and each displays the VR content image corresponding to that eye of the user 1.
  • The speakers 23a and 23b receive output audio data from the control device 220 via the internal bus 200 and, based on it, output various sounds such as VR content audio (VR audio), guidance audio of the head-mounted display device 2, and operation sounds.
  • the speaker 23a is a speaker corresponding to the user's 1 left ear
  • the speaker 23b is a speaker corresponding to the user's 1 right ear.
  • the speakers 23a and 23b output sounds corresponding to the ears of the user 1, respectively.
  • the microphone 24 acquires voices such as voices uttered by the user and environmental sounds, and outputs the acquired voices to the control device 220 as input voice data via the internal bus 200 .
  • The proximity communication receivers 25a, 25b, and 25c receive proximity communication signals (position detection signals) transmitted from the information processing device 3 and output them to the control device 220 via the internal bus 200.
  • The proximity communication signal is used for information and data communication with other devices such as the information processing device 3, and for measuring the distance from the head-mounted display device 2 (that is, the user 1) to the information processing device 3.
  • The control device 220 is a functional block that controls each element of the head-mounted display device 2 and performs image analysis processing, voice recognition processing, and the like. As shown in FIG. 3A, the control device 220 includes a communication device 221, a computer 222, a memory 223, an image memory 224, a storage device 225, and the like. The elements in the control device 220 are connected to each other via the internal bus 200.
  • The communication device 221 supports multiple communication methods such as Wi-Fi (registered trademark) and fourth-generation (4G) and fifth-generation (5G) mobile communication, selects an appropriate method, and connects to the network 6 and the information processing device 3.
  • the communication device 221 may also have a close proximity communication function.
  • the communication device 221 may include a proximity communication receiver for data reception separately from the proximity communication receivers 25a to 25c for distance measurement, or the proximity communication receivers 25a to 25c may be used for data reception.
  • The communication device 221 may also be configured to connect to the VR service server 7 through a dedicated connection.
  • the computer 222 is composed of a processor such as a CPU (Central Processing Unit).
  • the computer 222 reads out and executes various programs held in the memory 223 and implements functional blocks for performing various processes on the computer 222 .
  • the computer 222 also accesses the memory 223 and image memory 224 to write and read programs and data.
  • Computer 222 may include a graphics processor that primarily performs image processing.
  • the memory 223 is a volatile memory such as RAM (Random Access Memory).
  • the memory 223 temporarily holds various programs read from the storage device 225 and expanded.
  • the memory 223 also holds various data such as parameters used in various programs, calculation results in the computer 222, and the like.
  • the memory 223 outputs and holds various information such as program data and calculation results based on instructions from the computer 222 or functional blocks realized on the computer 222, for example.
  • the image memory 224 is, for example, a volatile memory such as a RAM, and mainly temporarily holds various data related to image processing (for example, display image data, captured image data, etc.).
  • the image memory 224 also mainly outputs and holds various data related to image processing based on instructions from the computer 222 or functional blocks realized on the computer 222, for example.
  • Although the image memory 224 is provided separately from the memory 223 in FIG. 3A, the image memory 224 may instead be provided within the memory 223. In that case, the memory 223 temporarily holds the various data relating to image processing.
  • The control device 220 measures (calculates) the distance from the head-mounted display device 2 to the information processing device 3 using the proximity communication signals received by the proximity communication receivers 25a, 25b, and 25c. For example, the control device 220 estimates this distance by detecting the phase differences of the signal among the receivers 25a, 25b, and 25c.
  • Furthermore, using the signals received by the three receivers, the control device 220 can estimate the three-dimensional position of the information processing device 3. Note that when the distance to the information processing device 3 is not estimated, the proximity communication receivers 25a to 25c may be omitted.
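  • As a concrete illustration of this three-receiver position estimation, the following is a minimal sketch that recovers the device position from per-receiver distances by least-squares trilateration. The receiver coordinates, the function name, and the use of Python with NumPy are illustrative assumptions; the patent does not specify this computation.

```python
import numpy as np

# Positions of the three proximity communication receivers (25a, 25b, 25c)
# in the HMD's local coordinate frame, in metres. Illustrative values only.
RECEIVERS = np.array([
    [-0.07, 0.00, 0.0],   # 25a: left temple
    [ 0.07, 0.00, 0.0],   # 25b: right temple
    [ 0.00, 0.05, 0.0],   # 25c: top of the frame
])

def estimate_position(distances):
    """Estimate the 3D position of the information processing device 3
    from per-receiver distances (least-squares trilateration).

    Subtracting the first sphere equation from the others linearises the
    problem: 2(p_i - p_0) . x = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2.
    """
    p0, d0 = RECEIVERS[0], distances[0]
    A, b = [], []
    for pi, di in zip(RECEIVERS[1:], distances[1:]):
        A.append(2.0 * (pi - p0))
        b.append(d0**2 - di**2 + pi.dot(pi) - p0.dot(p0))
    # Three receivers give two linear equations for three unknowns; lstsq
    # returns the minimum-norm solution, which a real system would refine
    # with more receivers or sensor fusion.
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x

print(estimate_position([0.45, 0.52, 0.48]))
```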
  • the control device 220 outputs output audio data to the speakers 23a and 23b.
  • the control device 220 also executes analysis processing of images captured by the cameras 20 a and 20 b and recognition processing of voices acquired by the microphone 24 .
  • the control device 220 also executes network communication, proximity communication processing, and the like.
  • the storage device 225 includes a non-volatile memory such as flash memory.
  • The storage device 225 includes, for example, a storage area 225a that stores a basic operation program for performing basic control of the head-mounted display device 2, a storage area 225b that stores a cooperation processing program for linking the head-mounted display device 2 and the information processing device 3, a storage area 225c that stores an image analysis program, and a storage area 225d that stores a speech recognition program.
  • a program stored in each storage area is developed in the memory 223 and executed by the computer 222 . Thereby, each functional block is realized on the computer 222 .
  • The content images displayed on the displays 22a and 22b are mainly VR images, but in the present embodiment an image of the information processing device 3 cut out from the captured image of the camera may be superimposed on the VR image. The image data of these images are held in, for example, the image memory 224, read out from the image memory 224 by an instruction from the control device 220, and output to the displays 22a and 22b.
  • FIG. 3B is a block diagram showing another example of the head mounted display device according to Embodiment 1 of the present invention.
  • the same elements as those in FIG. 3A are given the same reference numerals, and descriptions of parts overlapping with those in FIG. 3A will be omitted.
  • FIG. 3B differs from FIG. 3A in that the storage device 225 is provided with a storage area 225e for storing VR content.
  • the communication device 221 downloads VR content (images, sounds) from the VR service server 7 via the network 6 based on the control of the computer 222 .
  • the computer 222 stores the downloaded VR content in the storage area 225e of the storage device 225.
  • the computer 222 reads VR content from the storage area 225e as necessary, outputs VR images to the displays 22a and 22b, and outputs VR audio to the speakers 23a and 23b, thereby allowing the user 1 to experience the VR content. Then, the computer 222 performs update processing of the VR image and the VR audio according to the motion of the user 1 .
  • FIG. 4 is a block diagram showing an example of the configuration of the information processing device.
  • the configuration of the information processing device 3 will be described using a smartphone as an example.
  • the information processing device 3 illustrated in FIG. 4 includes a camera 30, an input integrated display 31, a sensor group 32, a microphone 33, a speaker 34, a communication device 35, a computer 36, a memory 37, an image memory 38, and a storage device 39.
  • the camera 30 is provided, for example, on the surface opposite to the input-integrated display 31 .
  • the camera 30 may also be provided on the input-integrated display 31 side.
  • the camera 30 outputs captured image data to the computer 36, the storage device 39, or the like via the internal bus 300.
  • the input-integrated display 31 is a display with an input function by so-called touch operation.
  • the input-integrated display 31 displays, for example, an operation screen of the information processing device 3, an incoming call screen, a captured image, and the like.
  • An operation of the information processing device 3 is performed by a touch operation via the input-integrated display 31 .
  • the sensor group 32 includes, for example, a direction sensor, a gyro sensor, an acceleration sensor, a position sensor (all not shown), and the like.
  • the position and orientation of the information processing device 3 are detected based on the sensing results of these sensors.
  • Each sensor outputs sensing results to the computer 36 via the internal bus 300 .
  • the microphone 33 acquires voices such as user's voice and environmental sounds, and outputs the acquired voices to the computer 36 as input voice data via the internal bus 300 .
  • The speaker 34 receives output audio data from the computer 36 via the internal bus 300 and, based on it, outputs various sounds such as call audio, application audio, guidance audio of the information processing device 3, and operation sounds.
  • the communication device 35 connects to, for example, a fourth generation (4G) or fifth generation (5G) mobile communication network. Also, the communication device 35 is connected to the network 6 and the head-mounted display device 2 via Wi-Fi (registered trademark), for example. The communication device 35 also has a close proximity communication function and transmits a close proximity communication signal to the head mounted display device 2 .
  • the computer 36 is composed of a processor such as a CPU, for example.
  • the computer 36 reads out and executes various programs held in the memory 37 and implements functional blocks for performing various processes on the computer 36 .
  • the computer 36 also accesses the memory 37 and the image memory 38 to write and read programs and data.
  • the memory 37 is, for example, a volatile memory such as RAM.
  • the memory 37 temporarily holds various programs read from the storage device 39 and expanded.
  • the memory 37 holds various data such as parameters used in various programs, calculation results in the computer 36, and the like.
  • the memory 37 outputs and holds various information such as program data and calculation results based on instructions from the computer 36 or functional blocks realized on the computer 36, for example.
  • the image memory 38 is, for example, a volatile memory such as a RAM, and mainly temporarily holds various data related to image processing (for example, display image data, captured image data, etc.).
  • the image memory 38 also mainly outputs and holds various data relating to image processing based on instructions from the computer 36 or functional blocks realized on the computer 36, for example.
  • the image memory 38 may be provided separately from the memory 37 , or the image memory 38 may be provided within the memory 37 . In this case, the memory 37 temporarily holds various data relating to image processing.
  • the storage device 39 includes a non-volatile memory such as flash memory. As shown in FIG. 4, the storage device 39 includes, for example, a storage area 391 that stores basic operation programs, a storage area 392 that stores call applications, and a storage area 393 that stores other applications such as SNS applications.
  • the head-mounted display device 2 of the present embodiment superimposes the image of the information processing device on the VR content image, thereby allowing the user 1 who is experiencing the VR content to use the information processing device 3 .
  • FIG. 5 is a flowchart showing an example of cooperative processing by the VR experience application according to Embodiment 1 of the present invention.
  • FIG. 5 shows an example of processing when a user who is experiencing VR content uses the information processing device.
  • the control device 220 transmits the network signal 4a from the communication device 221 to access the VR service server 7 (step S101).
  • the network signal 4a includes various information such as address information of the VR service server 7 on the network and VR content specifying information specifying the VR content to be downloaded.
  • the VR service server 7 transmits predetermined VR content designated by the VR content designation information to the head mounted display device 2 .
  • Upon receiving the VR content, the control device 220 stores it in, for example, the memory 223 or the image memory 224. Note that if the storage device 225 is provided with the storage area 225e (FIG. 3B) for storing VR content, the control device 220 may store the received VR content there. The VR content may also be downloaded in advance and stored in the storage area 225e, and the predetermined VR content read out from it when use of the head-mounted display device 2 starts; in that case, the process of step S101 is performed only when the necessary VR content has not yet been stored.
  • Next, the predetermined VR content is output to provide the user 1 with the VR content.
  • The control device 220 outputs the VR initial image data (VR image data) at the start of the VR content to the displays 22a and 22b, which display the VR initial images (VR images).
  • The control device 220 also outputs the VR audio data to the speakers 23a and 23b, which output the VR audio corresponding to the VR images.
  • In step S103, the control device 220 continuously acquires the movement information of the head-mounted display device 2, that is, of the user 1, based on the sensing information from each sensor of the sensor group 210. Then, in step S104, the control device 220 generates VR updated image data (VR image data) based on the movement information acquired in step S103.
  • In step S105, the control device 220 outputs the VR updated image data generated in step S104 to the displays 22a and 22b, which display VR updated images (VR images) matching the visual field, line-of-sight direction, and head and body tilt of the user 1.
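  • The S103 to S105 loop can be pictured as follows. This is a schematic sketch only: `sensors`, `renderer`, `displays`, and the 90 Hz refresh rate are hypothetical stand-ins for the sensor group 210, the VR image generation, and the displays 22a and 22b.

```python
import time

def vr_update_loop(sensors, renderer, displays, running):
    """Schematic loop for steps S103-S105; all parameters are
    hypothetical stand-in objects, not interfaces from the patent."""
    while running():
        pose = sensors.read_pose()                   # S103: movement information
        left, right = renderer.render_stereo(pose)   # S104: VR updated images
        displays.show(left, right)                   # S105: per-eye display
        time.sleep(1 / 90)                           # assumed 90 Hz refresh
```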
  • In step S106, the captured image data input from the cameras 20a and 20b is analyzed (image analysis).
  • the control device 220 analyzes the captured image data and extracts the information processing device 3 . Also, the control device 220 may extract the hand of the user 1 from the captured image data. Note that the imaging by the cameras 20a and 20b and the analysis of the captured image data may be performed asynchronously.
  • In step S107, the control device 220 determines, based on the analysis result from step S106, whether or not the user 1 is holding the information processing device 3. Specifically, it determines whether or not the user 1 has moved the information processing device 3 in front of the camera 20a or 20b. When it is difficult to detect the hand of the user 1 by image analysis, the control device 220 may instead simply determine whether or not the information processing device 3 appears in the captured image, on the premise that the user 1 moves the information processing device 3 in front of the camera.
  • Alternatively, the control device 220 may detect that the user 1 has picked up the information processing device 3 by recognizing, from the captured image, that the information processing device 3 is showing the screen it displays when its vibration sensor or the like senses vibration.
  • The control device 220 may also determine whether the user 1 has moved the information processing device 3 in front of the camera by combining the position of the information processing device 3 in the captured image with the line-of-sight direction of the user 1 extracted from the sensing results. In this case, the control device 220 may determine that the user 1 has moved the information processing device 3 in front of the camera if the line of sight of the user 1 is directed toward the information processing device 3, and that the user 1 has not done so if the line of sight is not directed toward it.
  • In this way, even if the information processing device 3 is left on the desk and is extracted in the captured image, its image is not displayed on the head-mounted display device 2, so the content viewing of the user 1 is not disturbed.
  • If it is determined in step S107 that the user 1 is holding the information processing device 3 (Yes), the control device 220 executes the process of step S108.
  • In step S108, the control device 220 cuts out the image of the information processing device 3 from the captured image and generates image data of the information processing device 3.
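  • A possible shape of the image analysis in steps S106 to S108 is sketched below, assuming OpenCV and a simple brightness heuristic in place of whatever detector an actual implementation would use; the function names and thresholds are illustrative, not taken from the disclosure.

```python
import cv2

def find_device_region(frame_bgr):
    """Steps S106-S107 (sketch): look for a bright, roughly rectangular
    screen region as a stand-in for detecting the information processing
    device 3. A real implementation would use a trained detector."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # A lit phone screen is much brighter than a dim room; the threshold
    # is an assumed heuristic, not the patent's method.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        x, y, w, h = cv2.boundingRect(c)
        aspect = max(w, h) / max(1, min(w, h))
        if cv2.contourArea(c) > 5000 and 1.5 < aspect < 2.5:
            return (x, y, w, h)       # device found in the frame
    return None                       # user is not holding the device

def cut_out_device(frame_bgr, region, margin=20):
    """Step S108: cut the device image (plus a margin that may include
    the user's hand) out of the captured frame."""
    x, y, w, h = region
    y0, x0 = max(0, y - margin), max(0, x - margin)
    return frame_bgr[y0:y + h + margin, x0:x + w + margin].copy()
```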
  • The display screen data of the information processing device 3 can be obtained by requesting the information processing device 3 to transmit it. The display screen data is deformed to match the display screen region within the image data of the information processing device 3 and is superimposed on that image data.
  • It is assumed that device registration and authentication have been performed in advance between the information processing device 3 and the head-mounted display device 2, and that mutual authentication is performed by proximity wireless communication or the like.
  • In step S109, the control device 220 superimposes the image data of the information processing device 3 generated in step S108 on the VR image. Specifically, the control device 220 combines the image data of the information processing device 3 with the VR updated image data (VR image data) to generate composite image data, outputs it to the displays 22a and 22b, and causes them to display composite images in which the VR image and the image of the information processing device 3 are superimposed. As a result, the user can visually recognize the information processing device 3 and start using it while experiencing the VR content.
  • In step S110, it is determined whether or not the user 1 has given an instruction to pause the progress of the VR content. If so (Yes), the control device 220 proceeds to step S111 and pauses the progress of the VR content, which allows the user 1 to concentrate on using the information processing device 3. When the user 1 subsequently gives an instruction to resume, the progress of the VR content is resumed and the process of step S112 is executed.
  • If, in step S110, the user 1 has not given an instruction to pause the progress of the VR content (No), the user 1 uses the information processing device 3 while the VR content continues to progress.
  • In step S112, the control device 220 determines whether to terminate the VR experience application that executes the VR content. Termination may be determined based on whether the user 1 has instructed termination, or on whether the VR content itself has ended.
  • If the VR experience application is to be continued (No), the process returns to step S103 and steps S103 to S112 are executed again. On the other hand, if it is to be terminated (Yes), the control device 220 terminates the VR experience application (step S113).
  • If, in step S107, the user 1 is not holding the information processing device 3, that is, if the control device 220 determines that the user 1 has not moved the information processing device 3 in front of the camera (No), the process moves to step S112. Subsequent processing is as already explained.
  • After step S111, the user 1 may not give an instruction to resume the progress of the VR content. Therefore, a step may be provided between steps S111 and S112 in which the control device 220 superimposes a guidance image prompting resumption of the VR content on the VR image and the image of the information processing device 3.
  • Alternatively, the time after the user 1 pauses the progress of the VR content may be counted, and when a predetermined time has elapsed, the progress of the VR content may be forcibly resumed, or the process may proceed to step S113 and the VR experience application may be terminated.
  • FIG. 6 is a diagram showing a first example of the display image of the head-mounted display device in which the VR image and the image of the information processing device are superimposed.
  • FIG. 6 shows a VR image 50, the imaging range 51 of the cameras 20a and 20b, and an image 52 of the information processing device 3. The imaging range 51 is illustrated for explanation and is not actually displayed on the head-mounted display device 2.
  • The control device 220 recognizes the information processing device 3 within the imaging range 51 by analyzing the captured image data, cuts out the image 52 of the information processing device 3 (which may include the hand of the user holding it) from the captured image, and superimposes the cut-out image 52 on the VR image.
  • The present invention can also be applied to a transmissive head-mounted display device. In that case, the control device 220 may make the region of the VR image corresponding to the image 52 of the information processing device 3 transparent and not display the VR image in that region.
  • The control device 220 may omit the hand of the user 1 holding the information processing device 3 from the display, or replace it with another image such as a generated graphic image of the hand of the user 1.
  • the user 1 can confirm that the information processing device 3 has been recognized by the head-mounted display device 2 and can perform various operations such as authentication while viewing the image 52 of the information processing device 3 .
  • The fingers of the hand opposite to the one holding the information processing device 3 may also be shown. Displaying images of the opposite hand and fingers can be expected to facilitate operation.
  • FIG. 7 is a diagram showing a second example of the display image of the head-mounted display device in which the VR image and the image of the information processing device are superimposed.
  • In FIG. 7, an overlay image 53 of the information processing device 3, in which the image 52 of the information processing device 3 of FIG. 6 and a display image 53a of the information processing device 3 are superimposed, is displayed. The image of the hand of the user 1 is displayed in consideration of its positional relationship with the information processing device 3.
  • the control device 220 acquires display image data related to the display image from the information processing device 3, for example, through network communication or proximity communication.
  • the head-mounted display device 2 and the information processing device 3 have been authenticated, and communication between the devices has been established.
  • the information processing device 3 transmits the display image data to the head mounted display device 2 .
  • The control device 220 generates composite image data by superimposing the VR image data, the image data of the information processing device 3, and the received display image data, outputs it to the displays 22a and 22b, and causes them to display composite images in which the VR image 50, the image 52 of the information processing device 3, and the display image 53a are superimposed.
  • a composite image obtained by superimposing the image 52 of the information processing device 3 and the display image 53 a is the overlay image 53 of the information processing device 3 .
  • While it recognizes the information processing device 3 in the captured image, the head-mounted display device 2 continues to acquire the display image of the information processing device 3, so that the latest display image is reflected.
  • In the overlay image 53, the display image 53a generated from the display image data of the information processing device 3 is used instead of the display image captured by the camera, which makes it possible to improve the definition of the display image in the composite image.
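  • The superimposition of the received display image data onto the cut-out device image can be sketched as a perspective warp, assuming the four screen corners have already been located in the cut-out image; the helper names and the OpenCV-based approach are illustrative assumptions, not the patent's method.

```python
import cv2
import numpy as np

def overlay_display_image(device_img, screen_quad, display_img):
    """Warp the display image received from the information processing
    device 3 onto the screen area of the cut-out device image (the
    overlay image 53 of FIG. 7). `screen_quad` holds the four screen
    corners in device_img coordinates, ordered TL, TR, BR, BL; locating
    them is assumed to happen elsewhere."""
    h, w = display_img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(screen_quad))
    warped = cv2.warpPerspective(display_img, H,
                                 (device_img.shape[1], device_img.shape[0]))
    # Replace only the screen region, keeping the hand and bezel pixels.
    mask = np.zeros(device_img.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, np.int32(screen_quad), 255)
    out = device_img.copy()
    out[mask == 255] = warped[mask == 255]
    return out
```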
  • The control device 220 may determine that the user 1 has picked up the information processing device 3 by determining that the information processing device 3 is performing its lock-screen display, which appears, for example, in response to the operation of lifting the device while holding it.
  • FIG. 8 is a diagram showing a third example of the display image of the head-mounted display device in which the VR image and the image of the information processing device are superimposed.
  • an enlarged display image 53b of the display image 53a of the information processing device 3 is displayed near the image 52 of the information processing device 3 (on the right side in FIG. 8).
  • the display position of the enlarged display image 53 b is not limited to the right side of the image 52 .
  • the size of the enlarged display image 53b is arbitrary, and can be changed to a size that is easy for the user 1 to see.
  • The display position and size may be controllable by the user 1: moving the information processing device 3 left, right, up, or down changes the display position, and moving it back and forth changes the size, as in the sketch below.
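  • One way to realize this mapping from device movement to the position and size of the enlarged display image 53b is sketched here; the gains and the reference distance are assumed tuning values, not taken from the disclosure.

```python
def enlarged_image_layout(device_pos, base_size=(640, 360), ref_z=0.40):
    """Map the device position (x, y, z) in metres in the HMD frame to a
    screen-space offset and a size for the enlarged display image 53b.
    Moving left/right/up/down shifts the image; moving closer enlarges it.
    All constants are illustrative assumptions."""
    x, y, z = device_pos
    px_per_m = 2000                                   # assumed screen gain
    center = (int(x * px_per_m), int(-y * px_per_m))  # screen offset
    scale = max(0.5, min(3.0, ref_z / max(z, 0.05)))  # nearer -> larger
    size = (int(base_size[0] * scale), int(base_size[1] * scale))
    return center, size

print(enlarged_image_layout((0.10, -0.05, 0.30)))
```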
  • FIG. 8 shows an example in which an operation pointer 54, corresponding to the finger with which the user 1 operates the device, is superimposed on the information processing device 3.
  • The control device 220 may determine that the user 1 is holding the information processing device 3 and performing the lock-screen display operation, for example by lifting the information processing device 3, and may then request the information processing device 3 to transmit its display image data.
  • the display image 53a of the information processing device 3 can be superimposed on the VR image, so it is possible to improve the definition of the display image.
  • the enlarged display image 53b of the information processing device 3 can be superimposed on the VR image, so it is possible to improve the visibility of the display image.
  • In the above, functions considered particularly effective while the user 1 is experiencing VR content have been described as examples. However, the present invention obtains the same effect regardless of the display state, for example when a menu is displayed or when nothing is displayed. For transmissive displays, the present invention is effective while content or the like that greatly obstructs the field of view is being displayed.
  • Embodiment 2. Next, Embodiment 2 will be described. In the present embodiment, processing when the information processing device 3 receives an incoming call while the user 1 is experiencing VR content will be described.
  • FIG. 9 is an operation sequence diagram showing an example of processing when receiving a call according to Embodiment 2 of the present invention.
  • FIG. 9 shows an interrupt sequence and the like for presenting the use of the information processing device 3 to the user 1 when there is an incoming call to the information processing device 3 .
  • In the present embodiment, when the information processing device 3 receives an incoming call while the user 1 is experiencing VR content, the information processing device 3 notifies the head-mounted display device 2 of the incoming call, and this notification triggers the user 1 to start using the information processing device 3. FIG. 9 shows the corresponding operation sequence.
  • FIG. 9 shows the relationship among the head-mounted display device 2, the information processing device 3, and the VR service server 7. While the user 1 is experiencing VR content, the head-mounted display device 2 executes cooperative processing by the VR experience application 227.
  • The head-mounted display device 2 (control device 220) transmits movement data of the head-mounted display device 2, detected based on, for example, the sensing results, to the VR service server 7 (step S200).
  • Upon receiving the movement data from the head-mounted display device 2, the VR service server 7 transmits the image data of the VR image updated in accordance with the movement to the head-mounted display device 2 (step S201).
  • Upon receiving the VR image data for updating from the VR service server 7, the head-mounted display device 2 displays the VR updated images based on it on the displays 22a and 22b (step S202). Steps S200 to S202 are repeatedly executed.
  • When the information processing device 3 receives an incoming call, it activates the call application and enters the incoming call status (step S207). Then, the information processing device 3 transmits to the head-mounted display device 2, as an interrupt request, a request to display an incoming call icon indicating that there is an incoming call (step S208).
  • In step S203, when the head-mounted display device 2 receives the request to display the incoming call icon, it reads the image data of the incoming call icon held in, for example, the memory 223 or the image memory 224, and generates composite image data by combining the VR image data for updating with the image data of the incoming call icon. Based on the composite image data, the head-mounted display device 2 causes the displays 22a and 22b to display a composite image in which the incoming call icon is superimposed on the VR image.
  • FIG. 10 is a diagram showing an example of a display image when receiving an incoming call.
  • FIG. 10 shows a composite image in which the VR image 50 and the incoming call icon 55 are superimposed. As shown in FIG. 10 , the incoming call icon 55 is superimposed on a partial area of the VR image 50 .
  • The display location of the incoming call icon 55 is not limited to the example in FIG. 10.
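  • The icon handling of steps S203 and S208, together with the erase request described below (steps S210 and S211), might look like the following sketch. The message strings and the top-right icon placement are hypothetical; the patent does not define the message format.

```python
class CallIconOverlay:
    """Minimal sketch of the incoming-call icon handling. Frames and the
    icon are numpy uint8 arrays, the icon in RGBA for alpha blending;
    message names are illustrative assumptions."""

    def __init__(self, icon_rgba):
        self.icon = icon_rgba                        # incoming call icon 55
        self.visible = False

    def on_message(self, msg):
        if msg == "SHOW_INCOMING_CALL_ICON":         # interrupt request (S208)
            self.visible = True
        elif msg == "ERASE_INCOMING_CALL_ICON":      # erase request (S210)
            self.visible = False                     # icon erased (S211)

    def compose(self, vr_frame):
        """Superimpose the icon on a partial area of the VR image (S203)."""
        if not self.visible:
            return vr_frame
        ih, iw = self.icon.shape[:2]
        alpha = self.icon[:, :, 3:] / 255.0
        region = vr_frame[:ih, -iw:].astype(float)   # assumed top-right corner
        blended = alpha * self.icon[:, :, :3] + (1 - alpha) * region
        vr_frame[:ih, -iw:] = blended.astype(vr_frame.dtype)
        return vr_frame
```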
  • Upon recognizing the incoming call icon 55, the user 1 moves the information processing device 3 in front of the head-mounted display device 2 (step S204). That the user 1 has recognized the icon is detected, for example, by detecting the movement of the fingers of the user 1 through analysis of the captured image and comparing it with a predetermined movement pattern, or by acquiring the utterance of the user 1 with the microphone 24 and comparing it with a predetermined utterance pattern.
  • The head-mounted display device 2 analyzes the captured image data, extracts the information processing device 3, and superimposes the image of the information processing device 3 on the VR image as shown in, for example, FIGS. 6 to 8 (step S205).
  • At this time, processing such as pausing the progress of the VR content may be performed (step S206).
  • In step S209, the user 1 operates the information processing device 3 to take the call.
  • When the call is started, the information processing device 3 transmits a request to erase the incoming call icon to the head-mounted display device 2 (step S210). Note that step S211 may be performed at any timing after the start of the call.
  • When the head-mounted display device 2 receives the request to erase the incoming call icon, it stops combining the VR image data for updating with the image data of the incoming call icon, and erases the incoming call icon 55 displayed on the displays 22a and 22b (step S211).
  • Alternatively, an icon display time may be set in advance so that the incoming call icon 55 is automatically erased when the icon display time elapses after it is displayed.
  • In this case, step S210 can be omitted, and step S211 is executed at any timing after step S204 in accordance with the icon display time.
  • The head-mounted display device 2 then cancels the pause of the VR content, executes steps S200 to S202, and resumes the progress of the VR content.
  • this embodiment can also be applied to SNS applications, mail applications, and the like.
  • When the SNS application, mail application, or the like receives a message or mail, the information processing device 3 transmits a request to display a reception icon to the head-mounted display device 2.
  • the user 1 can pick up the information processing device 3, check the received content of the SNS application, mail application, etc., and reply to the received content.
  • Embodiment 3 Next, Embodiment 3 will be described.
  • While experiencing VR content, the user 1 may not know where the information processing device 3 is, and may therefore be unable to operate it immediately after recognizing the incoming call from the incoming call icon 55. In the present embodiment, a method for making the user 1 recognize the position of the information processing device 3 when there is an incoming call is described.
  • FIG. 11 is an operation sequence diagram showing an example of processing when receiving a call according to Embodiment 3 of the present invention.
  • FIG. 11 is similar to FIG. 9, but differs from FIG. 9 in that step S312 is added between steps S204 and S205.
  • In step S312, the head-mounted display device 2 generates a composite image in which the captured images of the cameras 20a and 20b are further superimposed on the composite image of the VR image and the incoming call icon 55, and displays the resulting image on the displays 22a and 22b.
  • The user 1 searches for the information processing device 3 while viewing the captured image superimposed on the VR image, then grabs the information processing device 3 and performs an operation such as unlocking it.
  • Next, step S205 is executed, and a composite image in which the VR image and the image of the information processing device 3 are superimposed is displayed on the displays 22a and 22b.
  • 12A, 12B, and 12C are diagrams illustrating display images when the information processing device 3 exists within the imaging range of the camera. Note that the incoming call icon 55 is also superimposed on the display images of FIGS. 12A, 12B, and 12C.
  • In FIG. 12A, the image captured by the camera is not processed, and an image in which the VR image 50 and the unprocessed captured image 351A are superimposed is displayed on the displays 22a and 22b. According to this example, the information processing device 3 can be searched for without impairing the visibility of the VR image 50.
  • In FIG. 12B, an image in which the VR image 50 and a transparent image 351B are superimposed is displayed on the displays 22a and 22b. The transparent image 351B is generated by applying to the captured image a transparency process that makes the area other than the information processing device 3 transparent, so the image 52 of the information processing device 3 is included in the transparent image 351B.
  • In FIG. 12C, an image in which the VR image 50 and a line-drawing-processed image 351C are superimposed is displayed on the displays 22a and 22b. The line-drawing-processed image 351C is obtained by performing line-drawing processing on the captured image: edge detection extracts the outlines of the information processing device 3 and other objects. The broken lines in FIG. 12C indicate the result of the line-drawing processing.
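  • The transparency and line-drawing processing of FIGS. 12B and 12C can be sketched with OpenCV as follows; the Canny thresholds and the rectangular device region are illustrative assumptions.

```python
import cv2

def transparent_image(captured_bgr, device_region):
    """FIG. 12B (sketch): keep only the information processing device 3
    and make everything else fully transparent (alpha = 0)."""
    x, y, w, h = device_region
    bgra = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = 0                       # whole frame transparent
    bgra[y:y + h, x:x + w, 3] = 255         # device region opaque
    return bgra

def line_drawing_image(captured_bgr):
    """FIG. 12C (sketch): reduce the captured image to line work by edge
    detection so that it hides as little of the VR image 50 as possible.
    Threshold values are assumed."""
    gray = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    return edges                            # lines only; composited elsewhere
```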
  • 13A and 13B are diagrams illustrating display images when the information processing apparatus according to Embodiment 2 of the present invention is not within the imaging range of the camera.
  • When the information processing device 3 is not within the imaging range, the head-mounted display device 2 recognizes the position of the information processing device 3 by, for example, three-dimensional position detection using proximity communication. A mark 356 indicating the direction in which the information processing device 3 exists is then superimposed on the VR image 50 and displayed in the region outside the imaging range 51 of the camera image. In FIGS. 13A and 13B, the information processing device 3 is located to the right of the imaging range 51, so the mark 356 is displayed on the right side of the imaging range 51.
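  • Deciding whether the device is inside the imaging range 51 and, if not, on which side of it to place the mark 356 could be sketched as below from the three-dimensional device position; the coordinate convention and the field of view are assumed values.

```python
import numpy as np

def mark_for_device(device_pos, half_fov_deg=45.0):
    """Classify the position of the information processing device 3
    relative to the camera's imaging range 51. `device_pos` is (x, y, z)
    in the HMD frame (x right, y up, z forward), e.g. from the
    proximity-communication position detection; the square field of
    view is an assumed value."""
    x, y, z = device_pos
    if z <= 0:
        return "behind"                      # device is behind the user
    h = np.degrees(np.arctan2(x, z))         # horizontal off-axis angle
    v = np.degrees(np.arctan2(y, z))         # vertical off-axis angle
    if abs(h) <= half_fov_deg and abs(v) <= half_fov_deg:
        return "in_view"                     # show the captured image
    if abs(h) >= abs(v):
        return "right" if h > 0 else "left"  # mark 356 at this edge
    return "up" if v > 0 else "down"

print(mark_for_device((0.6, 0.0, 0.4)))      # device to the right -> "right"
```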
  • Although the image of the information processing device is shown as the mark 356 here, an icon (for example, an arrow) may be used instead.
  • the mark position is not particularly limited.
  • FIG. 13A is similar to FIG. 12A, and an image in which the VR image 50 and the captured image 351A are superimposed is displayed on the displays 22a and 22b. Furthermore, a mark 356 indicating the direction in which the information processing device 3 exists is superimposed on the VR image 50 and displayed outside the captured image 351A (imaging range 51).
  • The user 1 turns the head-mounted display device 2 in the direction in which the mark 356 is displayed, or in the direction indicated by the mark 356, until the information processing device 3 enters the captured image 351A. When the information processing device 3 enters the imaging range 51, the user 1 picks it up and operates it while viewing the captured image 351A.
  • the user 1 moves the head mounted display device 2 while looking at the mark 356 so that the information processing device 3 enters near the front of the head mounted display device 2 . Then, the user 1 picks up the information processing device 3 and operates the information processing device 3 while viewing the captured image 351A.
  • When the head-mounted display device 2 recognizes, by analyzing the captured image (captured image data), that the user 1 has reached out and touched the information processing device 3, it superimposes and displays the information processing device 3 and the hand, so that the user 1 can be notified that the information processing device 3 has been recognized.
  • In this way, the user 1 can find the information processing device 3 even without knowing its location, and can therefore pick up and use the information processing device 3 without attaching or detaching the head-mounted display device 2.
  • Embodiment 4. In Embodiment 4, cooperation processing for allowing the user 1 to use the information processing device 3 is performed in accordance with a cooperation instruction from the user 1. That is, in the present embodiment, cooperation processing between the head-mounted display device 2 and the information processing device 3 is started based on an active cooperation instruction from the user 1. The cooperation instruction informs the head-mounted display device 2 that the user 1 intends to use the information processing device 3.
  • FIG. 14 is a flowchart showing an example of a method for detecting a user's cooperation instruction according to Embodiment 4 of the present invention.
  • In FIG. 14, the cooperation instruction from the user 1 is detected from the gestures of the user 1 (including finger movement patterns) in the images captured by the cameras 20a and 20b.
  • First, imaging by the cameras 20a and 20b is started (step S300).
  • the head-mounted display device 2 takes in captured image data generated by the cameras 20a and 20b (step S301), analyzes the captured image data, and extracts feature points of the captured image (step S302).
  • the feature points extracted here are, for example, the hand and fingers of the user 1 .
  • The head-mounted display device 2 then detects a gesture (movement pattern) of the feature points using the analysis results of a plurality of pieces of captured image data (step S303).
  • Next, the head-mounted display device 2 performs pattern matching between the feature-point gesture detected in step S303 and matching gestures registered in advance (step S304). The registered matching gestures may include gestures other than the one corresponding to the cooperation instruction of the user 1; in the pattern matching, the detected feature-point gesture is compared with the registered matching gestures to determine what the user 1 wants to do.
  • When the head-mounted display device 2 detects the cooperation instruction of the user 1 by the pattern matching in step S304, it outputs a command to perform cooperation processing to the VR experience application 227 (step S305). On the other hand, if the cooperation instruction of the user 1 is not detected by the pattern matching in step S304, step S305 is skipped.
  • In step S306, the head-mounted display device 2 determines whether or not to terminate the image analysis application. If the image analysis application is to be terminated (Yes), the process proceeds to step S307 and the image analysis application is terminated. On the other hand, if it is not to be terminated (No), the processing of steps S301 to S306 is performed again. A sketch of the matching in steps S303 to S305 follows.
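As an illustration of steps S301 through S305, here is a minimal sketch of matching a detected feature-point trajectory against registered gestures. It assumes fingertip positions have already been extracted from the captured image data; the trajectory representation, the normalization, the threshold, and the registered gestures themselves are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

# Registered matching gestures (step S304): each is a 2-D fingertip trajectory.
# "cooperation" corresponds to the user's cooperation instruction; other
# entries show that non-cooperation gestures may also be registered.
REGISTERED_GESTURES = {
    "cooperation": np.array([(0.0, 0.0), (0.5, 0.5), (1.0, 0.0)]),  # V-shaped swipe
    "dismiss":     np.array([(0.0, 0.0), (1.0, 0.0)]),              # horizontal swipe
}

def normalize(traj):
    """Center and scale a trajectory so matching is position/scale invariant."""
    t = traj - traj.mean(axis=0)
    scale = np.abs(t).max()
    return t / scale if scale > 0 else t

def resample(traj, n=16):
    """Resample a trajectory to n points by linear interpolation."""
    idx = np.linspace(0, len(traj) - 1, n)
    lo, hi = np.floor(idx).astype(int), np.ceil(idx).astype(int)
    frac = (idx - lo)[:, None]
    return traj[lo] * (1 - frac) + traj[hi] * frac

def match_gesture(fingertip_trajectory, threshold=0.25):
    """Steps S303-S304: compare the detected movement pattern against each
    registered matching gesture; return the best match or None."""
    probe = normalize(resample(np.asarray(fingertip_trajectory, dtype=float)))
    best_name, best_err = None, np.inf
    for name, template in REGISTERED_GESTURES.items():
        err = np.mean(np.linalg.norm(probe - normalize(resample(template)), axis=1))
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err < threshold else None

# Step S305: issue the cooperation command only for the cooperation gesture.
detected = [(0.0, 0.0), (0.26, 0.24), (0.5, 0.5), (0.74, 0.26), (1.0, 0.0)]
if match_gesture(detected) == "cooperation":
    print("output cooperation command to VR experience application 227")
```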
  • FIG. 15 is a flowchart showing another example of a method for detecting a user's cooperation instruction according to Embodiment 4 of the present invention.
  • FIG. 15 shows a method of detecting the cooperation instruction of the user 1 from the voice uttered by the user 1.
  • First, voice acquisition by the microphone 24 is started (step S320).
  • Next, the head-mounted display device 2 takes in the input audio data generated by the microphone 24 (step S321), analyzes the input audio data, and extracts features of the input audio (step S322). The features extracted here are, for example, words, phrases, and sentences in the input speech.
  • The head-mounted display device 2 then performs pattern matching between the features of the input voice extracted in step S322 and matching features registered in advance (step S323). The registered matching features may include features other than those corresponding to the cooperation instruction of the user 1; in the pattern matching, the extracted features of the input speech are compared with the registered matching features to determine what the user 1 wants to do.
  • When the head-mounted display device 2 detects the cooperation instruction of the user 1 by the pattern matching in step S323, it outputs a command to perform cooperation processing to the VR experience application 227 (step S324). On the other hand, if the cooperation instruction of the user 1 is not detected by the pattern matching in step S323, step S324 is skipped.
  • In step S325, the head-mounted display device 2 determines whether or not to terminate the voice recognition application. If the voice recognition application is to be terminated (Yes), the process proceeds to step S326 and the voice recognition application is terminated. On the other hand, if it is not to be terminated (No), the processing of steps S321 to S325 is performed again. A sketch of this voice-based detection follows.
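Here is a comparable minimal sketch of steps S321 through S324, assuming a speech recognizer has already converted the microphone input into text. The matching-feature table, the phrases in it, and the helper names are hypothetical.

```python
# Registered matching features (step S323): phrase -> meaning. "cooperation"
# corresponds to the cooperation instruction; other features may also exist.
REGISTERED_MATCHING_FEATURES = {
    "use my smartphone": "cooperation",
    "stop content":      "pause",
}

def detect_cooperation_instruction(recognized_text: str) -> bool:
    """Step S323: compare extracted words/phrases with the registered features
    and decide whether the user intends to use the information processing device."""
    text = recognized_text.lower()
    return any(phrase in text and meaning == "cooperation"
               for phrase, meaning in REGISTERED_MATCHING_FEATURES.items())

# Step S324: command the VR experience application only on a cooperation instruction.
if detect_cooperation_instruction("I want to use my smartphone now"):
    print("output cooperation command to VR experience application 227")
```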
  • FIG. 16 is a flowchart showing an example of cooperative processing by the VR experience application according to Embodiment 4 of the present invention. Since FIG. 16 is similar to FIG. 5, differences from FIG. 5 will be mainly described below.
  • When the updated VR image is displayed in step S105, the process proceeds to step S420.
  • In step S420, it is determined whether or not there is an instruction to perform cooperation processing based on the cooperation instruction of the user 1.
  • When a cooperation instruction has been detected, a predetermined cooperation signal is input to the control device 220 (computer 222) from, for example, the memory 223 or a register (not shown) in the computer 222. Based on this cooperation signal, the control device 220 determines whether or not there is an instruction to perform cooperation processing based on the cooperation instruction of the user 1.
  • If the cooperation signal has been input, the control device 220 recognizes that there is an instruction to perform cooperation processing based on the cooperation instruction of the user 1. That is, in this case, the control device 220 determines that the user 1 intends to use the information processing device 3. The process then proceeds to step S421.
  • If the cooperation signal has not been input, the control device 220 recognizes that there is no instruction to perform cooperation processing based on the cooperation instruction of the user 1. That is, in this case, the control device 220 determines that the user 1 has no intention of using the information processing device 3. The process then proceeds to step S112.
  • In step S421, the control device 220 superimposes and displays the VR image and the images captured by the cameras 20a and 20b. In this manner, the VR image and the captured image are superimposed and displayed based on the cooperation instruction from the user 1.
  • The processing after step S421 is the same as in FIG. 5. A minimal sketch of this branch is given below.
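Below is a minimal sketch of how the branch added in FIG. 16 (steps S420 and S421) could sit inside a per-frame display loop. The class and helper names (VRExperienceApp, superimpose) are hypothetical; the patent specifies only that a cooperation signal read from the memory 223 or a register gates the superimposed display.

```python
class VRExperienceApp:
    def __init__(self):
        # Set by the gesture/voice detectors (step S305 / step S324).
        self.cooperation_signal = False

    def request_cooperation(self):
        self.cooperation_signal = True

    def render_frame(self, vr_image, camera_image):
        # Step S105: display the updated VR image.
        frame = vr_image
        # Step S420: check whether a cooperation instruction has been given.
        if self.cooperation_signal:
            # Step S421: superimpose the captured image on the VR image so the
            # user can see and operate the information processing device.
            frame = superimpose(vr_image, camera_image)
        return frame

def superimpose(vr_image, camera_image):
    """Placeholder composition: overlay the captured image on the VR image."""
    return {"base": vr_image, "overlay": camera_image}
```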
  • According to the present embodiment, it is possible to detect an active cooperation instruction from the user 1 and to start using the information processing device 3 while experiencing VR content.
  • Embodiment 5: Next, Embodiment 5 will be described. In Embodiment 5, the user 1 who is experiencing VR content can use objects other than the information processing device 3.
  • FIG. 17 is a diagram showing an example of an object used while experiencing VR content according to Embodiment 5 of the present invention.
  • FIG. 17 exemplifies a mug 60 as an object, but other objects may be used.
  • The shape of the object is registered in advance. The head-mounted display device 2 analyzes the captured image using an image analysis application, and when it recognizes the registered shape in the captured image, it displays an image of the mug 60, for example, superimposed on the VR image. This allows the user 1 to take a coffee break or the like using the mug 60 while experiencing the VR content. A sketch of such shape-based recognition follows.
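Here is a minimal sketch of such shape-based recognition using OpenCV contour matching. The use of Hu-moment shape matching, the thresholds, and the file name of the registered silhouette are assumptions for illustration; the patent says only that the object's shape is registered in advance and that the captured image is analyzed.

```python
import cv2

def load_registered_shape(path="registered_mug_silhouette.png"):
    """The object's shape is registered in advance as a binary silhouette image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def find_object(captured_gray, registered_contour, max_distance=0.15):
    """Analyze the captured image and return the bounding box of the region whose
    contour best matches the registered shape, or None if nothing matches."""
    _, binary = cv2.threshold(captured_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best, best_d = None, max_distance
    for c in contours:
        if cv2.contourArea(c) < 500:  # ignore tiny regions (noise)
            continue
        d = cv2.matchShapes(registered_contour, c, cv2.CONTOURS_MATCH_I1, 0.0)
        if d < best_d:
            best, best_d = cv2.boundingRect(c), d
    return best  # (x, y, w, h) of the mug region to superimpose on the VR image
```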
  • FIG. 18A is a diagram showing another example of an object used while experiencing VR content according to Embodiment 5 of the present invention.
  • FIG. 18A illustrates a mug 60 to which a communication module 61 is attached as an object.
  • The head-mounted display device 2 can detect the position of the mug 60 based on the proximity communication signal (position detection signal) transmitted from the communication module 61, even when the location of the mug 60 is unknown to the user 1.
  • FIG. 18B is a block diagram showing an example of a communication module.
  • The communication module 61 includes a proximity communication device 62 and a microcomputer 63. The proximity communication device 62 and the microcomputer 63 are connected to each other via an internal bus 600.
  • The microcomputer 63 incorporates, for example, an MPU (Micro Processor Unit) 631, a memory 632, and a storage device 633, as shown in FIG. 18B.
  • The head-mounted display device 2 receives the proximity communication signal transmitted from the proximity communication device 62 and detects the three-dimensional position of the mug 60. Based on the detected three-dimensional position, the head-mounted display device 2 superimposes on the display the VR image 50 and a mark indicating the direction in which the mug 60 exists. One way such a position could be derived is sketched below.
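The patent does not describe how the three-dimensional position is derived from the proximity communication signal. Purely for illustration, the sketch below assumes the head-mounted display device can measure its distance to the module from several antenna positions (for example from signal strength or round-trip time) and solves for the position by linear least squares; every name and number here is hypothetical.

```python
import numpy as np

def trilaterate(antenna_positions, distances):
    """Solve ||x - p_i|| = d_i for x, given >= 4 antenna positions p_i
    (in the HMD frame) and measured distances d_i."""
    p = np.asarray(antenna_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtracting the first equation from the rest linearizes the system:
    # 2 (p_i - p_0) . x = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # estimated 3-D position of the communication module 61

# Example: four antennas on the HMD housing and measured distances to the mug.
antennas = [(0.0, 0.0, 0.0), (0.15, 0.0, 0.0), (0.0, 0.1, 0.0), (0.0, 0.0, 0.05)]
print(trilaterate(antennas, [1.00, 0.93, 0.97, 0.99]))
```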
  • In this way, the user 1 can use objects other than the information processing device 3 while experiencing VR content.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a head-mounted display comprising a non-transmissive display for displaying a VR image, a camera, and a control device. The control device analyzes a captured image generated by the camera while a user is experiencing VR content (step S106), and when it determines from the captured image that the user is holding an information processing device in hand (Yes in step S107), it superimposes and displays the VR image and the image of the information processing device on the display (step S109).
PCT/JP2021/018597 2021-05-17 2021-05-17 Visiocasque WO2022244052A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023522001A JPWO2022244052A1 (fr) 2021-05-17 2021-05-17
CN202180098200.3A CN117337462A (zh) 2021-05-17 2021-05-17 头戴式显示器装置
PCT/JP2021/018597 WO2022244052A1 (fr) 2021-05-17 2021-05-17 Visiocasque

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/018597 WO2022244052A1 (fr) 2021-05-17 2021-05-17 Visiocasque

Publications (1)

Publication Number Publication Date
WO2022244052A1 true WO2022244052A1 (fr) 2022-11-24

Family

ID=84141355

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/018597 WO2022244052A1 (fr) 2021-05-17 2021-05-17 Visiocasque

Country Status (3)

Country Link
JP (1) JPWO2022244052A1 (fr)
CN (1) CN117337462A (fr)
WO (1) WO2022244052A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005086328A * 2003-09-05 2005-03-31 Fuji Photo Film Co Ltd Head-mounted display and content reproduction method therefor
WO2014188798A1 * 2013-05-21 2014-11-27 Sony Corporation Display control device, display control method, and recording medium
WO2016021252A1 * 2014-08-05 2016-02-11 Sony Corporation Information processing device, information processing method, and image display system
JP2016525917A * 2013-06-07 2016-09-01 Sony Interactive Entertainment Inc. Transitioning gameplay on a head-mounted display
WO2016194844A1 * 2015-05-29 2016-12-08 Kyocera Corporation Wearable device
WO2020054760A1 * 2018-09-12 2020-03-19 Alpha Code Co., Ltd. Image display control device and program for image display control

Also Published As

Publication number Publication date
JPWO2022244052A1 (fr) 2022-11-24
CN117337462A (zh) 2024-01-02

Similar Documents

Publication Publication Date Title
US9983687B1 (en) Gesture-controlled augmented reality experience using a mobile communications device
US11372249B2 (en) Information processing device and information processing method
US10254847B2 (en) Device interaction with spatially aware gestures
US11373650B2 (en) Information processing device and information processing method
US10514755B2 (en) Glasses-type terminal and control method therefor
US10445577B2 (en) Information display method and information display terminal
CN110134744B Method, apparatus, and system for updating geomagnetic information
KR20160071263A Mobile terminal and control method therefor
CN111400610B In-vehicle social interaction method and device, and computer storage medium
CN109917988B Selected-content display method, device, terminal, and computer-readable storage medium
CN111723843B Check-in method and device, electronic equipment, and storage medium
CN113282355A State-machine-based instruction execution method, device, terminal, and storage medium
JP2018181256A Head-mounted display device, authentication method, and authentication program
EP3905037B1 Session creation method and terminal device
WO2022199102A1 Image processing method and device
CN111415421B Virtual object control method and device, storage medium, and augmented reality device
US20200135150A1 (en) Information processing device, information processing method, and program
CN111061369B Interaction method, apparatus, device, and storage medium
US20210005014A1 (en) Non-transitory computer-readable medium, image processing method, and image processing system
CN113469360B Inference method and device
WO2022244052A1 Visiocasque (head-mounted display)
CN110213205B Verification method, apparatus, and device
US11651567B2 (en) Display terminal, display control system and display control method
JP7155242B2 Portable information terminal
KR20200122754A Smart glasses system for providing augmented reality images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21940672

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023522001

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 202180098200.3

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 18561332

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21940672

Country of ref document: EP

Kind code of ref document: A1