WO2018086224A1 - Method and apparatus for generating a virtual reality scene, and virtual reality system - Google Patents

Method and apparatus for generating a virtual reality scene, and virtual reality system

Info

Publication number
WO2018086224A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
target
real
information
virtual reality
Prior art date
Application number
PCT/CN2016/113087
Other languages
English (en)
Chinese (zh)
Inventor
高进宝
Original Assignee
歌尔科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔科技有限公司 filed Critical 歌尔科技有限公司
Publication of WO2018086224A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • The present invention relates to virtual reality technology, and more particularly to a method and device for generating a virtual reality scene, and to a virtual reality system.
  • Virtual reality is an important branch of simulation technology. It combines simulation technology with computer graphics, human-machine interface technology, multimedia technology, sensing technology, network technology, and so on, and mainly involves a simulated environment, perception, natural skills, and sensing equipment.
  • The simulated environment is a computer-generated, real-time, dynamic, three-dimensional, realistic image.
  • Perception includes hearing, touch, force, and motion, and even smell and taste, which is also known as multi-perception.
  • Natural skills refer to the rotation of a person's head, eye movement, gestures, or other human behaviors.
  • The computer processes data corresponding to the participant's actions, responds to the user's input in real time, and feeds the results back to the user's sense organs.
  • In this way, a computer can generate a simulated environment and immerse the user in it through an interactive three-dimensional dynamic view and entity behavior simulation based on multi-source information fusion, so that the user obtains a virtual reality experience.
  • Virtual reality products include, for example, virtual reality glasses and virtual reality helmets.
  • At present, virtual reality devices are mainly used for games or video viewing.
  • The virtual scenes involved are preset scenes in games or scenes from preset movies and videos; they are not real-time reality scenes and cannot provide users with a virtual reality experience of being placed in a real-time reality scene.
  • According to a first aspect of the present invention, a method for generating a virtual reality scene is provided, including:
  • in response to a user's request for a target remote reality scene, acquiring target scene information according to real scene information of the user, where the real scene information includes at least location information and body posture information of the user in the local real scene, and the target scene information includes at least real-time picture information in the target remote reality scene; and
  • generating a corresponding target virtual reality scene for display to the user according to the target scene information.
  • The step of acquiring the target scene information includes:
  • setting a target scene coordinate of the user in a scene coordinate system according to the current real scene information of the user; locating the target remote reality scene according to the target scene coordinate; and generating the target scene information according to a real-time panoramic video of the target remote reality scene acquired by a panoramic camera.
  • The method for generating the virtual reality scene further includes:
  • when the real scene information changes, re-acquiring the target scene information according to the changed real scene information to generate a changed target virtual reality scene.
  • The method for generating the virtual reality scene further includes:
  • acquiring the real scene information by using a positioning sensor.
  • The target scene information further includes at least one of real-time sound information, real-time temperature information, real-time humidity information, and real-time odor information in the target remote reality scene.
  • According to a second aspect of the present invention, a device for generating a virtual reality scene is provided, including:
  • a scene information acquiring unit, configured to acquire the target scene information according to the real scene information of the user in response to the user's request for the target remote reality scene, where the real scene information includes at least the location information and the body posture information of the user in the local real scene, and the target scene information includes at least real-time picture information in the target remote reality scene; and
  • a target scene generating unit, configured to generate a corresponding target virtual reality scene for display to the user according to the target scene information.
  • The scene information acquiring unit includes:
  • a coordinate setting unit, configured to set a target scene coordinate of the user in a scene coordinate system according to the current real scene information of the user;
  • a scene positioning unit, configured to locate the target remote reality scene according to the target scene coordinate; and
  • an information generating unit, configured to generate the target scene information according to the real-time panoramic video of the target remote reality scene acquired by the panoramic camera.
  • The device for generating the virtual reality scene further includes:
  • a change control unit, configured to, when the real scene information changes, re-acquire the target scene information according to the changed real scene information to generate a changed target virtual reality scene.
  • The device for generating the virtual reality scene further includes:
  • a real information acquiring unit, configured to acquire the real scene information by using a positioning sensor.
  • a virtual reality system comprising the device for generating a virtual reality scene according to the second aspect of the present invention.
  • The inventor of the present invention has found that, in the prior art, there is no method, device, or virtual reality system for generating a virtual reality scene from a remote reality scene that can provide the user with a new virtual reality experience of being placed in a real-time remote reality scene. Therefore, the technical task to be achieved, or the technical problem to be solved, by the present invention has not been conceived of or anticipated by those skilled in the art, and the present invention is thus a new technical solution.
  • FIG. 1 is a block diagram showing an example of a hardware configuration of a computing system that can be used to implement an embodiment of the present invention.
  • FIG. 2 is a flow chart showing a method of generating a virtual reality scene according to an embodiment of the present invention.
  • FIG. 3 is a flow chart showing the steps of acquiring target scene information according to an embodiment of the present invention.
  • FIG. 4 is a schematic block diagram of a generating device of a virtual reality scene according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram showing a virtual reality system of an embodiment of the present invention.
  • FIG. 6 shows an example flow of a method of generating a virtual reality scene of an embodiment of the present invention.
  • FIG. 1 is a block diagram showing a hardware configuration of a computer system 1000 in which an embodiment of the present invention can be implemented.
  • computer system 1000 includes a computer 1110.
  • the computer 1110 includes a processing unit 1120, a system memory 1130, a fixed non-volatile memory interface 1140, a mobile non-volatile memory interface 1150, a user input interface 1160, a network interface 1170, a video interface 1190, and an output peripheral connected via a system bus 1121. Interface 1195.
  • the system memory 1130 includes a ROM (Read Only Memory) and a RAM (Random Access Memory).
  • A BIOS (Basic Input Output System) is stored in the ROM.
  • Operating systems, applications, other program modules, and certain program data reside in RAM.
  • a fixed non-volatile memory such as a hard disk is connected to a fixed non-volatile memory interface 1140.
  • a fixed non-volatile memory can store, for example, an operating system, applications, other program modules, and certain program data.
  • a mobile non-volatile memory such as a floppy disk drive and a CD-ROM drive is connected to the mobile non-volatile memory interface 1150.
  • a floppy disk can be inserted into a floppy disk drive
  • a CD (disc) can be inserted into the CD-ROM drive.
  • Input devices such as a mouse and keyboard are connected to the user input interface 1160.
  • Computer 1110 can be coupled to remote computer 1180 via network interface 1170.
  • network interface 1170 can be connected to a remote computer via a local area network.
  • network interface 1170 can be coupled to a modem (modulator-demodulator) and the modem connected to remote computer 1180 via a wide area network.
  • Remote computer 1180 can include a memory, such as a hard disk, that can store remote applications.
  • Video interface 1190 is connected to the monitor.
  • the output peripheral interface 1195 is connected to the printer and the speaker.
  • The system memory 1130 is configured to store instructions, and the instructions are used to control the processing unit 1120 to perform any one of the methods for generating a virtual reality scene provided by the embodiments of the present invention.
  • The present invention may relate to only some of the devices shown in FIG. 1; for example, the computer 1110 may involve only the processing unit 1120 and the system memory 1130.
  • A technician can design instructions in accordance with the solutions disclosed in the present invention. How the instructions control the processor to operate is well known in the art and is not described in detail herein.
  • The overall concept of the present invention is a new technical solution that generates a virtual reality scene according to a remote reality scene requested by a user, so that the user can acquire a brand-new virtual reality experience of being placed in the remote reality scene in real time.
  • a virtual reality scene generation method is first provided, as shown in FIG. 2, including:
  • Step S2100: in response to the user's request for a target remote reality scene, acquire target scene information according to the user's current real scene information, where the real scene information includes at least location information and body posture information of the user in the local real scene, and the target scene information includes at least real-time picture information in the target remote reality scene.
  • The target remote reality scene is a remote scene that actually exists in reality, unlike the preset scenes in games or in preset movies and videos used by virtual reality devices in the prior art. For example, the user may be in China while the requested target remote reality scene is the Egyptian pyramids as they currently are.
  • The user can request, on the device that implements the method for generating a virtual reality scene of this embodiment (for example, the virtual reality device itself, or a host device connected to the virtual reality product), any target remote reality scene in which he or she wishes to be placed.
  • In response to the user's request for the target remote reality scene, the device that implements the method for generating the virtual reality scene in this embodiment obtains the target scene information according to the real scene information of the user.
  • The real scene information is information obtained from the real scene in which the user is actually located and used for generating the target virtual reality scene; it includes at least location information and body posture information of the user in the local real scene.
  • The real scene information may be acquired by a positioning sensor, which may be, for example, a nine-axis motion sensor (including a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer), or the existing sensors of the virtual reality product implementing this embodiment supplemented by auxiliary sensing peripherals. Therefore, in one example, the method for generating a virtual reality scene provided in this embodiment further includes: acquiring the real scene information by using a positioning sensor.
  • The target scene information is information obtained from the target remote reality scene and used to generate the target virtual reality scene; it includes at least real-time picture information in the target remote reality scene.
  • On this basis, the target virtual reality scene can be generated and presented to the user, so that the user gets a new virtual reality experience as if he or she were placed in the target remote reality scene in real time.
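  • For illustration, the flow of steps S2100 and S2200 could be sketched in Python as below; this is a minimal sketch, and the type names (RealSceneInfo, TargetSceneInfo), the function generate_vr_scene, and the injected callables are assumptions made only for the sketch:
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class RealSceneInfo:
        # Information from the user's local real scene: at least location and body posture.
        position: Tuple[float, float, float]
        body_posture: Dict[str, float] = field(default_factory=dict)

    @dataclass
    class TargetSceneInfo:
        # Information from the target remote reality scene: at least real-time picture data.
        realtime_frames: List[bytes] = field(default_factory=list)

    def generate_vr_scene(request: str,
                          real_info: RealSceneInfo,
                          acquire: Callable[[str, RealSceneInfo], TargetSceneInfo],
                          render: Callable[[TargetSceneInfo], object],
                          present: Callable[[object], None]) -> None:
        # S2100: acquire target scene information for the requested remote reality scene.
        target_info = acquire(request, real_info)
        # S2200: generate the corresponding target virtual reality scene and present it to the user.
        present(render(target_info))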
  • As shown in FIG. 3, the step S2100 of acquiring the target scene information includes:
  • Step S2101: set a target scene coordinate of the user in a scene coordinate system according to the current real scene information of the user.
  • The scene coordinate system is a coordinate system shared by the target remote reality scene, the real scene where the user is located, and the virtual reality scene, so that when the user moves or changes the angle of view in the real scene, the real scene information can be combined with the target scene information acquired from the target remote reality scene to generate a target virtual reality scene that simulates the user's real experience.
  • The coordinate origin of the scene coordinate system may be set according to the user's needs, for example by providing an interface through which the user inputs or selects which point of the scenes is used as the coordinate origin; alternatively, the center of all the scenes may be used as the coordinate origin of the scene coordinate system by default.
  • Alternatively, when a movable panoramic camera is provided in the real scene including the target remote reality scene (for example, a panoramic camera carried by a drone or a robot), the position of the movable panoramic camera in the real scene can be taken as the coordinate origin.
  • Once the coordinate origin is selected, the coordinate axis directions of the scene coordinate system may follow the directions of the coordinate axes of an existing three-dimensional map.
  • Similarly, an interface may be provided for the user to input or select which point in the scene coordinate system is used as the initial target scene coordinate; alternatively, the user's initial target scene coordinate may default to the coordinate corresponding to the center of the scene, or to the coordinate origin (0, 0, 0). When the user moves in the local real scene or the body posture changes, the user's real scene information changes accordingly.
  • For example, assume the user is initially located at the coordinate origin (0, 0, 0) of the scene coordinate system. After the user shifts 10 meters in the local real scene, the corresponding location information of the local reality scene changes, and, according to the changed real scene information, the target scene coordinate set for the user in the scene coordinate system becomes (10, 0, 0).
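  • For illustration, step S2101 could be read as the simple coordinate update sketched below; the shared coordinate system, the default origin, and the (10, 0, 0) example follow the description above, while the function name set_target_scene_coordinate is an assumption:
    ORIGIN = (0.0, 0.0, 0.0)  # default initial target scene coordinate: the origin of the scene coordinate system

    def set_target_scene_coordinate(initial, displacement):
        # Step S2101: map the user's movement in the local real scene onto the
        # shared scene coordinate system by adding the measured displacement.
        return tuple(a + b for a, b in zip(initial, displacement))

    # Example from the description: a 10-metre shift along the X axis moves the
    # user's target scene coordinate from (0, 0, 0) to (10, 0, 0).
    assert set_target_scene_coordinate(ORIGIN, (10.0, 0.0, 0.0)) == (10.0, 0.0, 0.0)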
  • After the target scene coordinate is set in step S2101, the process proceeds to step S2102, in which the target remote reality scene is located according to the target scene coordinate.
  • In one example, a plurality of panoramic cameras are disposed. By calculating the coordinate range controlled by each panoramic camera, the panoramic camera whose controlled range of scene coordinates contains the user's target scene coordinate is determined, and the scene corresponding to the target scene coordinate within that camera's controlled coordinate range is positioned as the target remote reality scene. Further, when the user's movement in the local real scene takes the changed target scene coordinate beyond the scene coordinate range controlled by the previous panoramic camera, the target remote reality scene is repositioned according to the changed coordinate.
  • For example, assume the user translates 10 meters, so that the target scene coordinate changes from the initial coordinate origin (0, 0, 0) to (10, 0, 0) and now lies outside the scene coordinate range controlled by the previous panoramic camera. Correspondingly, the system switches to the panoramic camera whose controlled coordinate range includes the changed target scene coordinate, and the scene corresponding to the target scene coordinate within that camera's controlled range is positioned as the target remote reality scene.
  • For example, if the user's initial target remote reality scene is a living room, then after the 10-meter translation the target remote reality scene obtained by the positioning may be the master bedroom.
  • In another example, the real scene including the target remote reality scene is provided with a movable panoramic camera, such as a panoramic camera carried by a drone or a robot. When it is determined that the target scene coordinate is not within the scene coordinate range controlled by the movable panoramic camera, the panoramic camera is driven to move (for example, by driving the drone) so that its controlled scene coordinate range includes the target scene coordinate, and the scene corresponding to the target scene coordinate is positioned as the target remote reality scene.
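  • A minimal Python sketch of the positioning logic of step S2102 is given below, assuming that each panoramic camera exposes the axis-aligned range of scene coordinates it controls; the names PanoramicCamera, locate_target_scene and drive_towards, and the way a movable camera is re-centred, are illustrative assumptions:
    from dataclasses import dataclass

    @dataclass
    class PanoramicCamera:
        camera_id: str
        coord_min: tuple         # lower corner of the scene coordinate range this camera controls
        coord_max: tuple         # upper corner of that range
        movable: bool = False    # True for a drone- or robot-mounted camera

        def covers(self, coord):
            return all(lo <= c <= hi for lo, c, hi in zip(self.coord_min, coord, self.coord_max))

    def drive_towards(camera, target_coord):
        # Re-centre the movable camera's controlled range on the target coordinate
        # (standing in for driving the drone or robot).
        half = tuple((hi - lo) / 2 for lo, hi in zip(camera.coord_min, camera.coord_max))
        camera.coord_min = tuple(c - h for c, h in zip(target_coord, half))
        camera.coord_max = tuple(c + h for c, h in zip(target_coord, half))

    def locate_target_scene(cameras, target_coord):
        # Step S2102: pick the panoramic camera whose controlled coordinate range
        # contains the target scene coordinate; otherwise move a movable camera.
        for cam in cameras:
            if cam.covers(target_coord):
                return cam
        movable = next((c for c in cameras if c.movable), None)
        if movable is not None:
            drive_towards(movable, target_coord)
            return movable
        raise LookupError("no panoramic camera controls the target scene coordinate")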
  • Initially, the target remote reality scene may be located as a scene of a first scene space associated with the target remote reality scene.
  • The first scene space may be set by the user, or may be set by default.
  • For example, suppose the target remote reality scene requested by the user is Egypt.
  • The first scene space may then be set by the user to be an Egyptian airport, or may default to an Egyptian airport.
  • As another example, suppose the target remote reality scene requested by the user is the user's own home.
  • The first scene space may then be set by the user as the living room, or may default to the living room.
  • After the target remote reality scene is located in step S2102, the process proceeds to step S2103, in which the target scene information is generated according to the real-time panoramic video of the target remote reality scene acquired by the panoramic camera.
  • A panoramic camera is a camera that can independently realize wide-range monitoring with no dead angles, covering the scene without blind spots.
  • For example, the panoramic camera may be a camera having a 360-degree angle of view.
  • The panoramic camera may be a movable panoramic camera initially set in the target remote reality scene or in a real scene related to it, for example a panoramic camera carried by a drone or a robot; or it may be fixedly disposed in the target remote reality scene requested by the user, and a plurality of such fixed panoramic cameras may also be distributed in the real scene related to the target remote reality scene.
  • The located target remote reality scene is the scene that corresponds to the target scene coordinate and falls within the scene coordinate control range of a panoramic camera; therefore, the real-time panoramic video of the target remote reality scene can be acquired by that panoramic camera.
  • The target scene information generated from the real-time panoramic video may be at least real-time picture information extracted from the real-time panoramic video, or may be real-time panoramic video information corresponding to the real-time panoramic video itself.
  • In this embodiment, the device that implements the virtual reality scene generation method and the virtual display device (e.g., virtual reality glasses) that presents the target virtual reality scene are connected through a wireless or wired network. In some application scenarios with small transmission bandwidth, the real-time panoramic video can be simplified and compressed to generate the target scene information and facilitate transmission; in application scenarios with sufficient transmission bandwidth, the real-time panoramic video can be transmitted without being compressed.
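  • The bandwidth-dependent handling described above could look like the following sketch; the threshold value, the compress callable and the returned dictionary layout are assumptions made for illustration:
    BANDWIDTH_THRESHOLD_MBPS = 50  # assumed cut-off between small and sufficient transmission bandwidth

    def make_target_scene_info(panoramic_frames, available_bandwidth_mbps, compress):
        # Generate target scene information from the real-time panoramic video:
        # simplify/compress the frames when bandwidth is limited, otherwise pass them through.
        if available_bandwidth_mbps < BANDWIDTH_THRESHOLD_MBPS:
            frames = [compress(f) for f in panoramic_frames]
        else:
            frames = list(panoramic_frames)
        return {"realtime_frames": frames}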
  • In a further example, the target scene information may also include real-time sound information, real-time temperature information, real-time humidity information, real-time scent information, and the like, in the target remote reality scene. For example, a temperature sensor set in the target remote reality scene may acquire real-time temperature information of that scene; the corresponding real-time temperature is then set in the generated target virtual reality scene according to this information, and the user obtains the real-time temperature experience through an adjustable-temperature experience chamber.
  • Likewise, real-time humidity information of the target remote reality scene can be obtained by a humidity sensor set in that scene, the corresponding real-time humidity is set in the generated target virtual reality scene, and the user obtains the real-time humidity experience through an adjustable-humidity experience chamber; real-time scent information of the target remote reality scene can be obtained through a scent sensor set in that scene, and, according to the scent information set in the generated target virtual reality scene, a scent generator produces the corresponding odor so that the user obtains a real-time scent experience. In this way, the virtual reality experience that the user obtains in the virtual reality scene can be enriched.
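  • The enriched target scene information could be modelled as in the sketch below; the field names and the experience-chamber and scent-generator interfaces are assumptions introduced only for illustration:
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class EnrichedTargetSceneInfo:
        realtime_frames: List[bytes]            # real-time picture information (always present)
        realtime_audio: Optional[bytes] = None  # real-time sound information
        temperature_c: Optional[float] = None   # from a temperature sensor in the remote scene
        humidity_pct: Optional[float] = None    # from a humidity sensor in the remote scene
        scent_label: Optional[str] = None       # from a scent sensor, reproduced by a scent generator

    def apply_environment(info, chamber, scent_generator):
        # Drive the adjustable-temperature/humidity experience chamber and the scent
        # generator so the user's surroundings track the target remote reality scene.
        if info.temperature_c is not None:
            chamber.set_temperature(info.temperature_c)
        if info.humidity_pct is not None:
            chamber.set_humidity(info.humidity_pct)
        if info.scent_label is not None:
            scent_generator.emit(info.scent_label)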
  • After step S2100 is completed, for example in the manner described above, the process proceeds to step S2200, in which the target virtual reality scene is generated for presentation to the user according to the target scene information.
  • The target scene information includes at least real-time picture information in the target remote reality scene; correspondingly, the generated target virtual reality scene may present, for example through virtual reality glasses, the real-time picture of the target remote reality scene, giving the user a new virtual reality experience as if he or she were placed in the remote target reality scene in real time.
  • When the target scene information further includes real-time sound information, real-time temperature information, real-time humidity information, real-time scent information, and the like, in the target remote reality scene, the generated target virtual reality scene correspondingly presents not only the real-time picture of the target remote reality scene through, for example, virtual reality glasses, but also sound synchronized with the target remote reality scene through headphones or speakers, a temperature and humidity experience synchronized with the target remote reality scene through an adjustable-temperature, adjustable-humidity experience chamber, and an odor produced by the scent generator in synchrony with the target remote reality scene, so that the user obtains an even richer, brand-new virtual reality experience of being placed in the remote target reality scene.
  • In addition, so that the user's real scene information is merged into the target virtual reality scene, the method for generating a virtual reality scene in this embodiment may further include: when the real scene information changes, re-acquiring the target scene information according to the changed real scene information to generate a changed target virtual reality scene; the way in which the target scene information is re-acquired is the same as described above and is not repeated here.
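  • The change handling just described could be sketched as a simple polling loop; the injected callables and the equality test standing in for a change in the real scene information are assumptions:
    def track_scene_changes(read_real_scene_info, acquire_target_info, build_scene, present, stop_requested):
        # Re-acquire the target scene information whenever the user's real scene
        # information (position or body posture) changes, so that the presented
        # target virtual reality scene follows the user in real time.
        last_info = None
        while not stop_requested():
            info = read_real_scene_info()      # e.g. from the nine-axis positioning sensor
            if info != last_info:              # the real scene information has changed
                present(build_scene(acquire_target_info(info)))
                last_info = info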
  • In this embodiment, a generating device 4000 for a virtual reality scene is further provided. As shown in FIG. 4, the generating device 4000 includes a scene information acquiring unit 4100 and a target scene generating unit 4200, and may further include a coordinate setting unit 4300, a scene positioning unit 4400, an information generating unit 4500, a change control unit 4600, and a real information acquiring unit 4700, which are used to implement any of the virtual reality scene generating methods provided in this embodiment; details are not described herein again.
  • The generating device 4000 of the virtual reality scene includes:
  • a scene information acquiring unit 4100, configured to acquire target scene information according to the real scene information of the user in response to the user's request for the target remote reality scene, where the real scene information includes at least the location information and the body posture information of the user in the local real scene, and the target scene information includes at least real-time picture information in the target remote reality scene; and
  • a target scene generating unit 4200, configured to generate a corresponding target virtual reality scene for display to the user according to the target scene information.
  • The scene information acquiring unit 4100 includes:
  • a coordinate setting unit 4300, configured to set a target scene coordinate of the user in the scene coordinate system according to the current real scene information of the user;
  • a scene positioning unit 4400, configured to locate the target remote reality scene according to the target scene coordinate; and
  • an information generating unit 4500, configured to generate the target scene information according to the real-time panoramic video of the target remote reality scene acquired by the panoramic camera.
  • The generating device 4000 of the virtual reality scene further includes:
  • a change control unit 4600, configured to, when the real scene information changes, re-acquire the target scene information according to the changed real scene information to generate a changed target virtual reality scene.
  • The generating device 4000 of the virtual reality scene further includes:
  • a real information acquiring unit 4700, configured to acquire the real scene information by using a positioning sensor.
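  • Purely as an illustration of how the units 4100 to 4700 could cooperate, the generating device 4000 might be composed as below; the class name and the callable-based wiring are assumptions, not the claimed implementation:
    class VirtualRealitySceneGenerator:
        # Illustrative composition of the generating device 4000; each unit is
        # modelled as a callable injected at construction time.

        def __init__(self, real_info_acquirer, coordinate_setter, scene_locator,
                     info_generator, scene_builder):
            self.real_info_acquirer = real_info_acquirer  # real information acquiring unit 4700
            self.coordinate_setter = coordinate_setter    # coordinate setting unit 4300
            self.scene_locator = scene_locator            # scene positioning unit 4400
            self.info_generator = info_generator          # information generating unit 4500
            self.scene_builder = scene_builder            # target scene generating unit 4200

        def handle_request(self, target_request):
            # Scene information acquiring unit 4100 followed by target scene generating unit 4200.
            real_info = self.real_info_acquirer()               # positioning-sensor data
            coord = self.coordinate_setter(real_info)           # unit 4300
            camera = self.scene_locator(coord, target_request)  # unit 4400
            target_info = self.info_generator(camera)           # unit 4500
            return self.scene_builder(target_info)              # unit 4200

        def on_real_scene_change(self, target_request):
            # Change control unit 4600: regenerate the scene after the real scene information changes.
            return self.handle_request(target_request)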
  • a virtual reality system 5000 is further provided, including any one of the virtual reality scene generating devices 4000 provided in this embodiment.
  • the virtual reality system 5000 includes a virtual reality device 5100, a virtual reality scene generation device 4000, a plurality of panoramic cameras 5200 disposed in a remote reality scene, and networks 5301, 5302, wherein:
  • a virtual reality device 5100, configured to present the target virtual reality scene generated by the generating device 4000 of the virtual reality scene to the user;
  • the virtual reality device 5100 may be, for example, virtual reality glasses, a virtual reality helmet, or the like;
  • the generating device 4000 of the virtual reality scene is used to implement the method for generating a virtual reality scene provided by any one of the examples of this embodiment, and details are not described herein again;
  • the network 5301 is configured to connect the generating device 4000 of the virtual reality scene with the virtual reality device 5100;
  • this network may be a wireless or wired network, and may be a wide area network or a local area network;
  • the network 5302 is configured to connect the generating device 4000 of the virtual reality scene with the panoramic cameras 5200;
  • this network may likewise be a wireless or wired network, and may be a wide area network or a local area network.
  • the panoramic camera 5200 is configured to acquire real-time video of the remote reality scene to generate target scene information.
  • The panoramic camera 5200 may be fixedly disposed in the remote reality scene, or may be movable within the target remote reality scene, for example carried by a drone or a robot located in the remote reality scene.
  • In this embodiment, the specific implementation form of the virtual reality scene generating device 4000 is not limited: the generating device 4000 may be a physical device independent of the virtual reality device 5100, for example the computer 1110 shown in FIG. 1; it may also be implemented partly as functional units included in the virtual reality device 5100, with the remaining functional units included in a physical device independent of the virtual reality device 5100; neither case departs from the protection scope of this embodiment.
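  • The composition of the virtual reality system 5000 of FIG. 5 could be wired up as in the following sketch; the transport callables standing in for networks 5301 and 5302, and the generator signature, are assumptions:
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class VirtualRealitySystem:
        vr_device: Callable[[object], None]            # virtual reality device 5100: presents a scene to the user
        generator: Callable[[str, object], object]     # generating device 4000: builds the target VR scene
        cameras: List[object]                          # panoramic cameras 5200 in the remote reality scene
        camera_link: Callable[[List[object]], object]  # network 5302: carries real-time panoramic video
        device_link: Callable[[object], object]        # network 5301: carries the generated scene

        def run_once(self, target_request: str) -> None:
            video = self.camera_link(self.cameras)         # fetch real-time panoramic video over network 5302
            scene = self.generator(target_request, video)  # build the target virtual reality scene
            self.vr_device(self.device_link(scene))        # deliver it over network 5301 and present it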
  • FIG. 6 shows an example in which the method for generating a virtual reality scene is implemented using the virtual reality system shown in FIG. 5. In this example, the virtual reality device 5100 is a pair of virtual reality glasses, the user's local reality scene is at home, and the requested target remote reality scene is a botanical garden. The method of generating the virtual reality scene includes:
  • Step S601: in response to the user's request for the real-time scene of the botanical garden, set the user's initial coordinate to the center of all the scenes, which coincides with the origin of the scene coordinate system, and proceed to step S602;
  • Step S602: the default first scene space is the entrance of the botanical garden; the real-time video of the entrance is obtained through the panoramic camera set at the entrance of the botanical garden, transmitted to the generating device 4000 of the virtual reality scene through the network 5302, simplified and compressed, and then transmitted through the network 5301 to the virtual reality glasses worn by the user, which present the initial virtual reality scene of the botanical garden entrance to the user; then proceed to step S603;
  • Step S603: according to the real scene information of the user acquired in real time by the positioning sensor, including the location information and the body posture information of the user in the local real scene, set the user's target scene coordinate in the scene coordinate system; for example, when the user walks 10 meters along the X-axis direction of the scene coordinate system in the local reality scene, the user's scene coordinate becomes (10, 0, 0); then proceed to step S604;
  • Step S604: locate the target remote reality scene according to the user's target scene coordinate. For example, the user's target scene coordinate is now (10, 0, 0), which has changed compared with the initial scene coordinate. If this coordinate does not exceed the scene coordinate control range of the panoramic camera of the current remote target reality scene, the located target remote reality scene does not change; otherwise, if the target scene coordinate has exceeded the scene coordinate control range of that panoramic camera, then: if the panoramic cameras are fixedly set, switch to the panoramic camera whose scene coordinate control range contains the target scene coordinate; if the panoramic camera is movable, for example carried by a drone or a robot, drive the drone or robot to move correspondingly so that the target scene coordinate falls within the scene coordinate control range of the panoramic camera. The relocated target remote reality scene is the remote reality scene corresponding to the target scene coordinate; for example, for the target scene coordinate (10, 0, 0), the corresponding scene is the central square of the botanical garden, 10 meters in the X-axis direction from the garden entrance. Then proceed to step S605;
  • Step S605: generate target scene information according to the real-time panoramic video of the target remote reality scene acquired by the panoramic camera; for example, acquire a real-time video of the central square of the botanical garden through a panoramic camera set in the central square, transmit it to the generating device 4000 of the virtual reality scene through the network 5302, simplify and compress the real-time video information, and transmit it through the network 5301 to the virtual reality glasses worn by the user; then proceed to step S606;
  • Step S606: generate a corresponding target virtual reality scene according to the received real-time video information, for example a scene of the central square of the botanical garden, present it to the user through the virtual reality glasses, and proceed to step S607;
  • Step S607: in response to the user's request to terminate acquisition of the virtual reality scene, end the generation of the virtual reality scene; otherwise, after the target virtual reality scene corresponding to the target reality scene has been presented to the user, return to step S603, continue acquiring the real scene information in real time through the positioning sensor, and repeat steps S603 to S606, so that when the real scene information changes, for example when the user's movements or actions change his or her location information or body posture information in the local reality scene, the target scene information is re-acquired according to the changed real scene information to generate a changed target virtual reality scene.
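  • The example flow S601 to S607 could be condensed into the loop sketched below; the helper callables for sensing, camera selection, video transfer and presentation, and the dictionary key scene_coordinate, are assumptions introduced only for the sketch:
    def botanical_garden_session(read_real_scene_info, locate_camera, fetch_compressed_video,
                                 build_scene, present, termination_requested):
        # Condensed version of steps S601 to S607: start at the entrance of the
        # botanical garden, then keep regenerating the target virtual reality scene
        # as the user's position and posture in the local real scene change.
        user_coord = (0.0, 0.0, 0.0)                  # S601: initial coordinate = scene origin
        camera = locate_camera(user_coord)            # S602: camera at the garden entrance
        present(build_scene(fetch_compressed_video(camera)))
        while not termination_requested():            # S607 decides when to stop
            real_info = read_real_scene_info()        # S603: positioning-sensor data (assumed dict)
            user_coord = real_info["scene_coordinate"]
            camera = locate_camera(user_coord)        # S604: relocate / switch / move the camera
            video = fetch_compressed_video(camera)    # S605: real-time panoramic video over the network
            present(build_scene(video))               # S606: present e.g. the central square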
  • In this way, in response to the user's request for the target remote reality scene, the target scene information can be acquired according to the real scene information of the user, and the target virtual reality scene corresponding to the target remote reality scene can be generated according to the target scene information, enabling the user to obtain a brand-new virtual reality experience of being placed in the remote reality scene in real time.
  • the generating device 4000 of the virtual reality scene can be implemented in various ways.
  • For example, the generating device 4000 of the virtual reality scene can be implemented by configuring a processor with instructions.
  • the instructions may be stored in the ROM, and when the device is booted, the instructions are read from the ROM into the programmable device to implement the virtual reality scene generation device 4000.
  • The generating device 4000 of the virtual reality scene can also be solidified into a dedicated device (e.g., an ASIC).
  • The units of the generating device 4000 of the virtual reality scene may be implemented as mutually independent units, or may be combined together for implementation.
  • the generating device 4000 of the virtual reality scene may be implemented by one of the various implementations described above, or may be implemented by a combination of two or more of the various implementations described above.

Abstract

The invention relates to a method and apparatus for generating a virtual reality scene, and to a virtual reality system. The method for generating a virtual reality scene comprises the steps of: acquiring target scene information in response to a request from a user for a target remote reality scene and according to real scene information of the user (S2100); and generating, according to the target scene information, a corresponding target virtual reality scene for display to the user (S2200). The generation method described above can provide a user with a brand-new, real-time virtual reality experience of being placed in a remote reality scene.
PCT/CN2016/113087 2016-11-11 2016-12-29 Method and apparatus for generating a virtual reality scene, and virtual reality system WO2018086224A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611040313.5A CN106598229B (zh) 2016-11-11 2016-11-11 Virtual reality scene generation method and device, and virtual reality system
CN201611040313.5 2016-11-11

Publications (1)

Publication Number Publication Date
WO2018086224A1 true WO2018086224A1 (fr) 2018-05-17

Family

ID=58592831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113087 WO2018086224A1 (fr) 2016-11-11 2016-12-29 Method and apparatus for generating a virtual reality scene, and virtual reality system

Country Status (2)

Country Link
CN (1) CN106598229B (fr)
WO (1) WO2018086224A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112073299A (zh) * 2020-08-27 2020-12-11 腾讯科技(深圳)有限公司 Storyline chat method
CN115546453A (zh) * 2022-12-01 2022-12-30 启迪数字科技(深圳)有限公司 Virtual exhibition hall information synchronization method, apparatus and system based on an offline exhibition hall

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107315481B (zh) * 2017-07-17 2020-04-14 西交利物浦大学 Control method and control system for interactive behavior in a virtual environment
CN109313484B (zh) * 2017-08-25 2022-02-01 深圳市瑞立视多媒体科技有限公司 Virtual reality interaction system and method, and computer storage medium
CN107797836B (zh) * 2017-08-25 2020-03-27 深圳壹账通智能科技有限公司 Method and apparatus for generating a virtual reality page, server, and storage medium
CN107632705A (zh) * 2017-09-07 2018-01-26 歌尔科技有限公司 Immersive interaction method, device and system, and virtual reality device
CN107566510B (zh) * 2017-09-20 2020-12-01 歌尔光学科技有限公司 Remote medical diagnosis service system
CN109752951B (zh) * 2017-11-03 2022-02-08 腾讯科技(深圳)有限公司 Processing method and apparatus for a control system, storage medium, and electronic apparatus
CN109995838B (zh) * 2018-01-02 2021-08-06 中国移动通信有限公司研究院 Virtual content scheduling method, apparatus and device, and computer-readable storage medium
CN108805985B (zh) * 2018-03-23 2022-02-15 福建数博讯信息科技有限公司 Virtual space method and apparatus
CN108573531A (zh) * 2018-04-23 2018-09-25 新华网股份有限公司 Terminal device and virtual reality display method
CN111383344A (zh) * 2018-12-29 2020-07-07 深圳市优必选科技有限公司 Virtual scene generation method and apparatus, computer device, and storage medium
CN111414072A (zh) * 2019-01-07 2020-07-14 光宝电子(广州)有限公司 Mixed reality system, scent providing method, and user equipment
CN111479087A (zh) * 2019-01-23 2020-07-31 北京奇虎科技有限公司 3D monitoring scene control method and apparatus, computer device, and storage medium
CN113608613B (zh) * 2021-07-30 2023-06-23 建信金融科技有限责任公司 Virtual reality interaction method and apparatus, electronic device, and computer-readable medium
CN117440175A (zh) * 2022-07-14 2024-01-23 抖音视界有限公司 Method, apparatus, system, device and medium for video transmission
CN116594511B (zh) * 2023-07-17 2023-11-07 天安星控(北京)科技有限责任公司 Virtual-reality-based scene experience method and apparatus, computer device, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810353A (zh) * 2014-03-09 2014-05-21 杨智 Reality scene mapping system and method in virtual reality
CN104748746A (zh) * 2013-12-29 2015-07-01 刘进 Smart device attitude determination and virtual reality roaming method
US20160175702A1 (en) * 2014-12-22 2016-06-23 Sony Computer Entertainment Inc. Peripheral Devices having Dynamic Weight Distribution to Convey Sense of Weight in HMD Environments
CN105872575A (zh) * 2016-04-12 2016-08-17 乐视控股(北京)有限公司 Virtual-reality-based live broadcast method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103516808A (zh) * 2013-10-21 2014-01-15 上海佳世展览有限公司 Smart exhibition hall mobile terminal virtual-real interaction platform
CN105786178B (zh) * 2016-02-23 2019-04-09 广州视睿电子科技有限公司 Scene object information presentation method and system
CN105608746B (zh) * 2016-03-16 2019-10-11 成都电锯互动科技有限公司 Method for virtually realizing reality
CN105807931B (zh) * 2016-03-16 2019-09-17 成都电锯互动科技有限公司 Virtual reality implementation method
CN105975077A (zh) * 2016-05-09 2016-09-28 句容美宅网络科技有限公司 VR house-viewing system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104748746A (zh) * 2013-12-29 2015-07-01 刘进 Smart device attitude determination and virtual reality roaming method
CN103810353A (zh) * 2014-03-09 2014-05-21 杨智 Reality scene mapping system and method in virtual reality
US20160175702A1 (en) * 2014-12-22 2016-06-23 Sony Computer Entertainment Inc. Peripheral Devices having Dynamic Weight Distribution to Convey Sense of Weight in HMD Environments
CN105872575A (zh) * 2016-04-12 2016-08-17 乐视控股(北京)有限公司 Virtual-reality-based live broadcast method and apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112073299A (zh) * 2020-08-27 2020-12-11 腾讯科技(深圳)有限公司 Storyline chat method
CN115546453A (zh) * 2022-12-01 2022-12-30 启迪数字科技(深圳)有限公司 Virtual exhibition hall information synchronization method, apparatus and system based on an offline exhibition hall
CN115546453B (zh) * 2022-12-01 2023-03-14 启迪数字科技(深圳)有限公司 Virtual exhibition hall information synchronization method, apparatus and system based on an offline exhibition hall

Also Published As

Publication number Publication date
CN106598229A (zh) 2017-04-26
CN106598229B (zh) 2020-02-18

Similar Documents

Publication Publication Date Title
WO2018086224A1 (fr) Method and apparatus for generating a virtual reality scene, and virtual reality system
JP7419460B2 (ja) Expanded field-of-view re-rendering for VR spectating
JP6893868B2 (ja) Haptic effect generation for space-dependent content
US9654734B1 (en) Virtual conference room
US20180356893A1 (en) Systems and methods for virtual training with haptic feedback
RU2621644C2 (ru) World of mass simultaneous remote digital presence
WO2015122108A1 (fr) Information processing device, information processing method, and program
JP2018109984A5 (fr)
EP3691280B1 (fr) Video transmission method, server, VR playback terminal, and computer-readable storage medium
US10049496B2 (en) Multiple perspective video system and method
JP2015041385A (ja) Haptics-enabled viewing of sporting events
JP2014525049A (ja) Stereoscopic video representation
CN111602104B (zh) Method and device for presenting synthesized reality content in association with a recognized object
CN106980378B (zh) Virtual display method and system
JPWO2019187862A1 (ja) Information processing device, information processing method, and recording medium
WO2017173890A1 (fr) Dual-camera headset
US10582190B2 (en) Virtual training system
JP2016513991A (ja) Method for playing back an item of audiovisual content having haptic actuator control parameters, and device implementing the method
US11120612B2 (en) Method and device for tailoring a synthesized reality experience to a physical setting
WO2019235106A1 (fr) Heat map presentation device and heat map presentation program
GB2525304B (en) Interactive information display
WO2020017435A1 (fr) Information processing device, information processing method, and program
JP2020530218A (ja) Method for projecting immersive audiovisual content
WO2018173206A1 (fr) Information processing device
WO2024060959A1 (fr) Method and apparatus for adjusting a viewing image in a virtual environment, storage medium, and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16921027

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16921027

Country of ref document: EP

Kind code of ref document: A1