WO2020034863A1 - Display method and virtual reality device - Google Patents

Display method and virtual reality device

Info

Publication number
WO2020034863A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
rule
virtual reality
keyword
environment
Prior art date
Application number
PCT/CN2019/099271
Other languages
English (en)
French (fr)
Inventor
辛鑫
郑维希
黄雪妍
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP19850126.4A (EP3779647A4)
Publication of WO2020034863A1
Priority to US17/090,642 (US11748950B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G06V 20/653 Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2816 Controlling appliance services of a home automation network by calling their functionalities
    • H04L 12/282 Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the present application relates to the field of virtual reality technology, and in particular, to a display method and a virtual reality device.
  • As shown in FIG. 1A, the main applications of virtual reality (VR) include VR viewing, VR games, VR shopping, and so on.
  • the VR user interface generally has a main interface as shown in FIG. 1A, which is used to enable a user to select a specific application to enter. After entering a specific application, there will be other interfaces shown to select specific content in the application.
  • FIG. 1B is a schematic diagram of a video selection interface entered after the YouTube application is selected.
  • FIG. 1C is a schematic diagram of a movie selection interface entered after a movie application is selected.
  • Such an interface generally contains two kinds of content: one is foreground content, including the icons for selecting various applications (apps) in the main interface and the preview images of each piece of content in an app;
  • the other is background content, such as the landscape picture shown in FIG. 1B or the shark picture shown in FIG. 1C.
  • The relationship between the background content and the foreground content is relatively weak and lacks three-dimensionality.
  • FIG. 2A is a flowchart of an existing process.
  • the virtual environment on its homepage uses the open villa environment shown in Figure 2B.
  • Massive amounts of video content are displayed on a virtual TV to simulate real life.
  • As shown in FIG. 2C, the implemented virtual environment only creates a static panoramic picture; users cannot interact with the virtual environment, and the interactive characteristic of virtual reality cannot be realized.
  • Embodiments of the present application provide a display method and a virtual reality device, which match keywords identified from a user's selection of target content against preset matching rules to obtain a target 3D environment model and at least one target 3D environment data, so as to present a corresponding 3D virtual reality environment.
  • The first aspect of the present application provides a display method, which may include: a virtual reality device generates a selection instruction for target content in response to a user operation; the target content may be content in a video content list, content in a game content list, or content in an application content list, and is not specifically limited.
  • the virtual reality device recognizes at least one keyword from the target content according to the selection instruction; for example, the virtual reality device identifies the target content's label, feature, profile, title and other information according to the selection instruction to obtain at least one keyword.
  • the virtual reality device matches to obtain a target 3D environment model and at least one target 3D environment data according to the at least one keyword and a preset matching rule.
  • There may be multiple types of preset matching rules, which may include a 3D environment model rule and at least one 3D environment data rule; the virtual reality device applies the at least one target 3D environment data in the target 3D environment model to present a corresponding 3D virtual reality environment.
  • In the embodiment of the present application, the virtual reality device matches keywords identified from the user's selection of target content against preset matching rules to obtain a target 3D environment model and at least one target 3D environment data, thereby presenting a corresponding 3D virtual reality environment. This effectively reduces the thematic operation requirements for matching content, reduces operating manpower and costs, and lowers the computing performance requirements for the virtual reality device.
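  • To make this flow concrete, the following is a minimal, purely illustrative sketch; the rule table, keyword handling, and all names are assumptions made for this example and are not defined by this application.

```python
# Minimal, self-contained sketch of the flow described above.  The rule table,
# keyword handling, and names are invented for illustration and are not
# defined by this application.
PRESET_RULES = {
    "environment_model": {"mountain_model": ["mountain", "peak", "climbing"]},
    "lighting":          {"weak_light":     ["horror", "thriller", "harsh"]},
}

def identify_keywords(content):
    # Recognize keywords from the content's title, tags, and introduction.
    text = " ".join([content.get("title", ""),
                     *content.get("tags", []),
                     content.get("intro", "")])
    return text.lower().split()

def match_rules(keywords):
    # For each rule, return the first target whose trigger words appear.
    result = {}
    for rule_name, table in PRESET_RULES.items():
        for target, triggers in table.items():
            if set(triggers) & set(keywords):
                result[rule_name] = target
                break
    return result

content = {"title": "Climbing Meru", "tags": ["adventure"],
           "intro": "harsh environment in the Himalayas"}
print(match_rules(identify_keywords(content)))
# -> {'environment_model': 'mountain_model', 'lighting': 'weak_light'}
```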
  • the at least one 3D environment data rule includes at least one of a smart home type rule, a texture mapping rule, a sky ball rule, a lighting rule, a particle rule, and a background sound rule.
  • the 3D environmental data rule here includes, but is not limited to, the above description.
  • the embodiment of the present application makes a brief description of at least one 3D environmental data rule, which makes the technical solution of the present application more specific and clear.
  • the at least one keyword includes a first target keyword
  • The virtual reality device matching to obtain a target 3D environment model according to the at least one keyword and a preset matching rule may include: when the first target keyword matches the 3D environment model rule, the virtual reality device obtains a target 3D environment model corresponding to the first target keyword through matching.
  • the target 3D environment model may be an ocean model, a glacier model, a desert model, a plain model, a grassland model, a forest model, a mountain model, a river valley model, and the like.
  • a specific implementation solution of the corresponding target 3D environment model is obtained, which increases the feasibility of the solution.
  • the at least one 3D environment data rule includes the texture mapping rule
  • the at least one target 3D environment data includes target texture mapping data
  • the at least one keyword includes a second target keyword
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the second target keyword matches the texture mapping rule, the virtual reality device obtains the target texture mapping data through matching.
  • When the second target keyword matches the texture mapping rule, a specific implementation solution for obtaining the corresponding target texture mapping data is provided, which increases the feasibility of the solution.
  • the at least one 3D environment data rule includes the sky ball rule
  • the at least one target 3D environment data includes target sky ball texture data
  • the at least one keyword includes a third target key Word
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the third target keyword matches the sky ball rule, the virtual reality device obtains the target sky ball texture data through matching.
  • When the third target keyword matches the sky ball rule, a specific implementation solution for obtaining the corresponding target sky ball texture data is provided, which increases the feasibility of the solution.
  • the at least one 3D environment data rule includes the lighting rule
  • the at least one target 3D environment data includes target lighting data
  • the at least one keyword includes a fourth target keyword
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the fourth target keyword matches the lighting rule, the virtual reality device obtains the target lighting data through matching.
  • When the fourth target keyword matches the lighting rule, a specific implementation solution for obtaining the corresponding target lighting data is provided, which increases the feasibility of the solution.
  • the at least one 3D environment data rule includes the particle rule
  • the at least one target 3D environment data includes target particle data
  • the at least one keyword includes a fifth target keyword
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the fifth target keyword matches the particle rule, the virtual reality device obtains the target particle parameters through matching.
  • When the fifth target keyword matches the particle rule, a specific implementation solution for obtaining the corresponding target particle data is provided, which increases the feasibility of the solution.
  • the at least one 3D environment data rule includes the background sound rule
  • the at least one target 3D environment data includes target audio file data
  • the at least one keyword includes a sixth target keyword.
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the sixth target keyword matches the background sound rule, the virtual reality device obtains the target audio file data through matching.
  • When the sixth target keyword matches the background sound rule, a specific implementation solution for obtaining the corresponding target audio file data is provided, which increases the feasibility of the solution.
  • the method may further include: the virtual reality device sends a control instruction to the smart home device, where the control instruction includes the target audio file data, and the target audio file data is used by the smart home device for playback.
  • the target audio file data may be sent to the smart home device for playback by the smart home device, which improves the flexibility of the solution.
  • the method may further include: the virtual reality device playing according to the target audio file data.
  • When the virtual reality device obtains the target audio file data through matching, it can also choose to play the data itself, which reduces delay and saves transmission resources.
  • The virtual reality device applying the at least one target 3D environment data in the target 3D environment model to present a corresponding 3D virtual reality environment may include: the virtual reality device renders the 3D virtual reality environment according to the target 3D environment model and the at least one target environment data, and the virtual reality device displays the 3D virtual reality environment.
  • the embodiment of the present application provides a solution implemented by a virtual reality device to present a 3D virtual reality environment, which increases the feasibility of the solution.
  • the method may further include: the virtual reality device matching to obtain operating parameters of a smart home device according to the at least one keyword and the preset matching rule; and the virtual reality device sending the operating parameters of the smart home device to the server, where the operating parameters are used by the server to control the smart home device to operate according to the operating parameters.
  • the presented 3D virtual reality environment can also be combined with smart home equipment to make effective use of resources, so that the user can experience a higher-quality immersive effect, which improves the user experience.
  • the operating parameter includes at least one of a temperature parameter, a humidity parameter, an air volume parameter, a wind direction parameter, and an odor parameter; wherein the temperature parameter is used by the server to control the smart home device to operate according to the temperature parameter; the humidity parameter is used by the server to control the smart home device to operate according to the humidity parameter; the air volume parameter and the wind direction parameter are used by the server to control the smart home device to operate with the wind direction corresponding to the wind direction parameter and the air volume corresponding to the air volume parameter; and the odor parameter is used by the server to control the smart home device to emit a corresponding odor.
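  • Purely as an illustration, these operating parameters could be carried in a simple structure such as the sketch below; the field names, types, and example values are assumptions, not definitions from the application.

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical container for the operating parameters listed above; the field
# names, types, and example values are assumptions, not definitions from the
# application.
@dataclass
class SmartHomeParams:
    temperature_c: Optional[float] = None    # temperature parameter
    humidity_pct: Optional[float] = None     # humidity parameter
    air_volume: Optional[str] = None         # air volume parameter, e.g. "weak" / "strong"
    wind_direction: Optional[str] = None     # wind direction parameter
    odor: Optional[str] = None               # identifier of the odor to emit

params = SmartHomeParams(temperature_c=16.0, air_volume="strong")
print(asdict(params))  # payload the server could use to drive the devices
```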
  • a description was made of the operating parameters and corresponding functions of the smart home equipment.
  • The temperature, humidity, wind, sound, and the like are adjusted according to the content in virtual reality (VR), effectively improving the user's multi-sensory experience and also reducing the space and price cost of a virtual reality multi-sensory experience.
  • Another aspect of the embodiments of the present application provides a virtual reality device that has the function of identifying keywords according to a user's selection of target content and matching them to obtain a target 3D environment model and at least one target 3D environment data, thereby presenting a corresponding 3D virtual reality environment. This function can be realized by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • A further aspect of the embodiments of the present application provides a virtual reality device, which may include: a transceiver for communicating with devices other than the virtual reality device; a memory for storing computer-executable instructions; and one or more processors connected to the memory and the transceiver through a bus. The memory stores one or more computer programs, and the one or more computer programs include instructions; when the instructions are executed by the virtual reality device, the virtual reality device is caused to perform the method described in the first aspect or any optional implementation manner of the first aspect.
  • a wireless communication device which may include:
  • At least one processor, a memory, a transceiver circuit, and a bus system, where the processor, the memory, and the transceiver circuit are coupled through the bus system, the wireless communication device communicates with a remote access unit through the transceiver circuit, the memory is configured to store program instructions, and the at least one processor is configured to execute the program instructions stored in the memory, so that the wireless communication device performs the method according to the first aspect of the embodiments of the present application.
  • the wireless communication device may be either a virtual reality device or a chip applied to perform a corresponding function in the virtual reality device.
  • a further aspect of the embodiments of the present application provides a storage medium.
  • The part of the technical solution of the present application that essentially contributes to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium that stores computer software instructions for the above-mentioned virtual reality device, and contains the programs designed for the virtual reality device to execute the above aspects.
  • the storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
  • Another aspect of the embodiments of the present application provides a computer program product containing instructions, which when executed on a computer, causes the computer to execute the method described in the above aspects or any optional implementation manner of each aspect.
  • FIG. 1A is a schematic diagram showing a main interface of a virtual reality device
  • FIG. 1B is a schematic diagram of a video selection interface entered after the YouTube application is selected.
  • FIG. 1C is a schematic diagram of a movie selection interface entered after a movie application is selected
  • FIG. 2A is a flowchart of an existing process
  • FIG. 2B is a schematic diagram of an existing virtual environment showing a homepage
  • FIG. 2C is a schematic diagram of an existing display virtual environment
  • FIG. 3 is a schematic diagram of an existing immersive experience
  • FIG. 4 is a schematic diagram of various devices connected to the smart home control system
  • FIG. 5 is a system architecture diagram applied to this application.
  • FIG. 6A is a schematic diagram of a connection between a virtual reality device, a smart home control system, and an audio device according to an embodiment of the present application;
  • FIG. 6B is a schematic diagram of the functional modules of the content label identification module
  • FIG. 6C is a schematic diagram of the various results that can be matched by the various matching rules
  • FIG. 7 is a schematic diagram of an embodiment of a display method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a graphical user interface presented by a virtual reality device
  • FIG. 9A is a schematic diagram of an embodiment of a virtual reality device according to an embodiment of the present application.
  • FIG. 9B is a schematic diagram of an embodiment of a virtual reality device according to an embodiment of the present application.
  • FIG. 9C is a schematic diagram of an embodiment of a virtual reality device according to an embodiment of the present application.
  • FIG. 10 is a diagram of an embodiment of a virtual reality device in the embodiment of the present application.
  • Embodiments of the present application provide a display method and a virtual reality device, which match keywords identified from a user's selection of target content against preset matching rules to obtain a target 3D environment model and at least one target 3D environment data, so as to present a corresponding 3D virtual reality environment.
  • the virtual reality device can also provide multiple sensory experiences by using external mechanical devices to cooperate with the content, such as smell, smoke, rain, fog, and seat vibration to better provide users with a multi-sensory immersive experience.
  • For example, in an existing racing game experience, the user operates the steering wheel and other controls in the control module.
  • The input module of the virtual environment receives the instructions from the control module and feeds them back to the racing program logic.
  • The racing program logic then sends instructions that make the mechanical seat vibrate or lift according to the driving situation.
  • FIG. 3 is a schematic diagram of an existing immersive experience.
  • Virtual reality: computer simulation is used to generate a three-dimensional virtual world, providing the user with simulations of senses such as sight, hearing, and touch, so that the user can observe things in the three-dimensional space in a timely and unrestricted manner as if in the real world.
  • Inertial measurement unit (IMU): a device that measures the three-axis attitude angle (or angular rate) and acceleration of an object.
  • Generally, the accelerometer detects the acceleration signals of the object on three independent axes of the carrier coordinate system, and the gyroscope detects the angular velocity signals of the carrier relative to the navigation coordinate system; the angular velocity and acceleration of the object in three-dimensional space are measured, and the attitude of the object is solved from them.
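  • Purely as background illustration of how such an attitude calculation is commonly approximated (generic textbook material, not part of the claimed method), a complementary filter integrates the gyroscope rate and corrects drift with the accelerometer's gravity reading:

```python
import math

# Generic complementary-filter sketch for estimating pitch from an IMU.
# The 0.98 / 0.02 blend is a conventional illustration value.
def update_pitch(pitch_deg, gyro_rate_dps, accel_x, accel_y, accel_z, dt):
    # Integrate the gyroscope angular rate (degrees per second).
    gyro_pitch = pitch_deg + gyro_rate_dps * dt
    # Absolute pitch derived from the accelerometer's gravity vector.
    accel_pitch = math.degrees(math.atan2(-accel_x,
                                          math.hypot(accel_y, accel_z)))
    # The gyro term tracks fast motion; the accel term removes drift.
    return 0.98 * gyro_pitch + 0.02 * accel_pitch

pitch = update_pitch(0.0, gyro_rate_dps=5.0,
                     accel_x=0.0, accel_y=0.0, accel_z=9.81, dt=0.01)
print(round(pitch, 3))  # ~0.049 degrees after one 10 ms step
```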
  • 3D engine: a collection of algorithms that abstracts the material in reality into polygons or various curves, performs the related calculations in the computer, and outputs the final image.
  • the 3D engine as a low-level tool supports the development of high-level graphics software.
  • the 3D engine is like building a "real world" in a computer.
  • Three-dimensional model refers to a polygonal representation of an object, usually displayed by a computer or other video equipment.
  • the objects shown can be real-world entities or fictional objects.
  • 3D models are often generated using specialized software such as 3D modeling tools, but can also be generated using other methods. As data of points and other information collections, 3D models can be generated manually or according to certain algorithms.
  • Texture maps provide rich details to objects, simulating complex appearances in a simple way. An image (texture) is pasted (mapped) onto a simple shape in the scene, just like a print is pasted on a plane.
  • Lighting system: also called the illumination system.
  • The function of the lighting system is to bring light into the scene and illuminate it.
  • Rendering In computer graphics, rendering refers to the process of generating images from a model using software.
  • a model is a description of a three-dimensional object in a strictly defined language or data structure. It includes geometry, viewpoints, textures, and lighting information.
  • Content tags refer to the main information of the content and are used to identify the content, including content category, duration, producer, keywords, profile, etc.
  • FIG. 4 is a schematic diagram of various devices connected to the smart home control system. The smart home control system connects various devices in the home (such as audio and video equipment, lighting systems, curtain control, air-conditioning control, digital cinema systems, audio and video servers, video cabinet systems, and network appliances) through Internet of Things technology, providing functions such as home appliance control, lighting control, indoor and outdoor remote control, environmental monitoring, HVAC control, and programmable timing control. The smart home control system can be controlled via a telephone, a mobile phone, a computer, and the like.
  • the technical solution of the present application may include a browsing information area (foreground content) and a virtual environment layer (background content) in the virtual reality device.
  • By default, the virtual environment presents a preset default environment; when the user selects and enters specific content, the virtual reality device recognizes the content tag selected by the user, generates a virtual environment matching the content tag, and sends information to the smart home control system according to the content tag.
  • the terminal device of the smart home control system operates according to the content tag.
  • virtual reality systems can be divided into four types: immersive virtual reality systems, augmented reality virtual reality systems, desktop virtual reality systems, and distributed virtual reality systems.
  • This patent mainly relates to an immersive virtual reality system.
  • a common immersive system is a system based on a head-mounted display.
  • FIG. 5 is a system architecture diagram applied to this application.
  • a virtual reality device is used to present virtual objects generated by the system, such as: environment, weather, and sound effects.
  • Virtual reality devices may include, but are not limited to: terminals with head-mounted displays that shut out the user's vision and hearing from the outside world, controllers that can control and interact with the displayed content, all-in-one virtual reality helmets, and virtual reality headsets connected to a mobile phone or a computer.
  • the virtual reality device is connected to a server that provides various services through a network.
  • the server may be a server that provides cloud services, a social server, and the like.
  • the virtual reality device can feed back to a server according to a user's operation.
  • the intelligent home control system can comprehensively manage information appliances, air-conditioning systems, floor heating, curtain control, light control, humidity control, etc. through the network.
  • the home system can be connected to virtual reality devices through Bluetooth, infrared and other methods.
  • FIG. 6A is a schematic diagram of a connection between a virtual reality device, a smart home control system, and an audio device according to an embodiment of the present application.
  • The virtual reality device 100 may include, but is not limited to, the following functional modules: a communication module 101, an input module 102, a graphical user interface module 103, a content label recognition module 104, a virtual environment module 105, a rendering module 106, a driver module 107, a display module 108, and an audio module 109.
  • Communication module 101 can use cellular, Ethernet, wireless fidelity (WiFi), Bluetooth, infrared, and other communication methods to receive instructions or information from other devices, and can also send data from the virtual reality device to the cloud, a network, a system, or another device; it can also exchange information with the smart home control system 200.
  • Input module 102 can send operation instruction information to the graphical user interface module 103 using input methods such as gestures, handles, voice, and a touchpad.
  • Graphical user interface module 103 is used to establish operation interfaces, such as cards, text, and buttons, that interact with the user among the three-dimensional objects.
  • the content label recognition module 104 may include a recognition module 1041 and a matching rule 1042, which can match the recognition result of the content to the corresponding virtual environment according to the matching rule, obtain a matching result, and according to the matching As a result, corresponding instructions are sent to the virtual environment module 105 and the audio module 109, respectively.
  • the identification module 1041 is used to identify keywords in information such as tags, characteristics, introduction, and title.
  • Matching rules 1042 may include, but are not limited to, 3D environment model rules, texture mapping rules, sky ball rules, lighting rules, particle rules, background sound rules, and smart home rules.
  • The matching rule refers to a rule for matching the identified keywords to obtain the target 3D environment model and the target 3D environment data; that is, the matching rule defines the mapping relationship between keywords and the target 3D environment model and target 3D environment data.
  • the specific implementation of the matching rule is not limited, and can be implemented by various software and hardware methods, for example, the matching rule is implemented by methods such as search and regular expression.
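  • As a rough sketch only of what such a search or regular-expression implementation could look like (the class, the patterns, and the result names are invented for illustration), consider the following:

```python
import re

# One possible regular-expression-based realization of a matching rule, as the
# paragraph above suggests.  The patterns and result names are made up for
# illustration only.
class MatchingRule:
    def __init__(self, targets):
        # 'targets' maps a regex pattern to a result identifier
        # (for example, a model name or an environment-data name).
        self.targets = {re.compile(pattern, re.IGNORECASE): result
                        for pattern, result in targets.items()}

    def match(self, keywords):
        text = " ".join(keywords)
        for pattern, result in self.targets.items():
            if pattern.search(text):
                return result
        return None  # no keyword triggered this rule

sky_ball_rule = MatchingRule({r"\b(space|galaxy)\b": "galaxy_sky_ball",
                              r"harsh environment": "dark_cloud_sky_ball"})
print(sky_ball_rule.match(["harsh environment", "Himalayas"]))
# -> dark_cloud_sky_ball
```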
  • there may be multiple types of matching rules including 3D environment model rules and at least one 3D environment data rule.
  • the environmental model involved in the 3D environmental model rule may include types such as an ocean model and a glacier model.
  • In FIG. 6C, the columns labelled texture map, sky ball, lighting, particles, background sound, smart home, and so on represent the types of environment data covered by the 3D environment data rules.
  • Various rules are introduced below.
  • Environment model rule 10421 is a 3D environment model rule, used to select the corresponding model according to the environment keywords that appear in the title, tag, introduction, or content description. Taking the mountain model as an example, the keywords for selecting the mountain model may include, but are not limited to, related words such as mountain, peak, mountain climbing, and mountaineering.
  • the following rules are all 3D environmental data rules.
  • Material mapping rule 10422 identifies, according to the selected environment model, the emotional type of the movie from the title, tag, introduction, or content description and selects the corresponding material.
  • The gray-tone material is selected for negative-emotion keywords, which may include, but are not limited to, sadness, anger, tension, anxiety, pain, fear, shame, death, and so on. When no negative-emotion keyword appears, the colorful material is used by default.
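  • A minimal sketch of this selection logic is shown below; the keyword set and material names are examples drawn from the description above, not a fixed definition.

```python
# Illustrative-only sketch of the material selection logic described above;
# the keyword set and material names are examples, not a fixed definition.
NEGATIVE_EMOTION_WORDS = {"sadness", "anger", "tension", "anxiety",
                          "pain", "fear", "shame", "death"}

def select_material(keywords):
    words = {w.lower() for w in keywords}
    if words & NEGATIVE_EMOTION_WORDS:
        return "gray_tone_material"
    return "colorful_material"  # default when no negative-emotion keyword appears

print(select_material(["adventure", "sports"]))      # colorful_material
print(select_material(["horror", "fear", "death"]))  # gray_tone_material
```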
  • Sky ball rule 10423 selects the corresponding sky ball material according to the keywords that appear in the title, tag, introduction, or content description. For example, if the keyword "space" appears, a vast galaxy is selected.
  • Lighting rule 10424 classifies according to the keywords that appear in the title, tag, introduction, or content description and then selects the corresponding light intensity. Taking strong light as an example, if positive-emotion keywords such as "comedy" and "youth" appear, strong light is selected; conversely, for negative-emotion keywords such as "thriller" and "horror", weak light is selected.
  • Particle rule 10425 adjusts the corresponding particle parameters based on the keywords that appear in the title, tag, introduction, or content description. Taking "flame particles" as an example, if keywords such as "fire" or "flame" appear, which are directly related to flame, the flame particles are selected. Particle parameters can also be selected based on indirectly related keywords: taking "white particles" as an example, if keywords such as "peak", "snow peak", or "the first mountain peak" appear, which are indirectly related to white, the white particles are selected.
  • Background sound rule 10426 selects the corresponding audio file based on the keywords that appear in the title, tag, introduction, or content description. Taking "horror music" as an example, if keywords such as "ghost" and "murder house" appear, horror music is selected.
  • Smart home rule 10427 sets the operating parameters of smart home devices based on the keywords that appear in the title, tag, profile, or content description.
  • the operating parameters of smart home equipment include temperature, humidity, air volume, wind direction, degree of curtain opening and closing, and odor emission.
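  • The following sketch illustrates one hypothetical way such a smart home rule could map keywords to operating parameters; the trigger words and parameter values are assumptions for illustration only.

```python
# Hypothetical mapping from content keywords to smart home operating
# parameters, mirroring the smart home rule 10427 described above.  The
# trigger words and parameter values are assumptions for illustration.
SMART_HOME_TABLE = [
    ({"himalayas", "snow", "glacier", "harsh"},
     {"temperature": "cold", "air_volume": "strong"}),
    ({"desert", "summer"},
     {"temperature": "hot", "air_volume": "weak"}),
]

def smart_home_params(keywords):
    words = {w.lower() for w in keywords}
    for triggers, params in SMART_HOME_TABLE:
        if words & triggers:
            return params
    return {}  # no keyword matched; leave the devices unchanged

print(smart_home_params(["Climbing", "Meru", "Himalayas"]))
# -> {'temperature': 'cold', 'air_volume': 'strong'}
```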
  • Virtual environment module 105 is used to process all components of the virtual environment, and may include an environment model module 1051, a material texture module 1052, a sky ball module 1053, a lighting module 1054, a particle module 1055, and a physical model module.
  • Environment model module 1051 Select a corresponding environment model according to an instruction.
  • Material texture module 1052 Control the materials and textures of the 3D model according to instructions.
  • Skyball module 1053 Select the material of the sky environment according to the instruction.
  • Lighting module 1054 Control lighting system parameters of the virtual environment according to instructions, such as the position, intensity, color, and number of light sources.
  • Particle module 1055 Control the attributes of particles, such as color, size, speed, cycle time, transparency, etc., according to instructions.
  • Rendering module 106 used for graphic rendering and encapsulating data of each virtual three-dimensional object.
  • The 3D rendering module mainly manages the entire 3D engine: the scene's main camera determines which objects need to be rendered and sends them through the rendering pipeline. The 3D engine encapsulates most of the rendering details, but also provides access through pixel and vertex shaders.
  • The driver module 107 drives the graphics card to perform calculations and is used to output the data rendered by the rendering module 106 as graphics.
  • the display module 108 presents the rendered graphical user interface to a user in a virtual reality device.
  • Audio module 109 presents the audio file to the user in the virtual reality device.
  • Smart home control system 200 used to control the devices in the smart home control system 200, such as regulating the temperature and air volume of air conditioners.
  • Smart home device 210 specifically executes the instructions issued by the smart home control system 200, and may include, but is not limited to, home appliances (such as air conditioners and humidifiers), curtains, doors, and devices that can emit odors.
  • Audio device 300 a device used to play audio files to the user, such as a speaker / sound system.
  • FIG. 7 is a schematic diagram of an embodiment of a display method in the embodiment of the present application, which may include:
  • Step 701 The virtual reality device receives content delivered by a content server. That is, the communication module 101 in the virtual reality device receives the content delivered by the content server.
  • the content may be a video content list, a game content list, an application content list, and the like, which are not specifically limited.
  • Step 702 The virtual reality device displays content. That is, the graphic user interface module 103 in the virtual reality device presents the content.
  • steps 701 and 702 are optional steps.
  • Step 703 The virtual reality device generates a selection instruction for the target content in response to a user operation.
  • the user selects the target content from the content through the input module 102 in the virtual reality device, and generates a selection instruction.
  • the user selects the video content of "Climbing Meru" from the video content list through the input module 102 in the virtual reality device, and generates a selection instruction.
  • Step 704 The virtual reality device recognizes at least one keyword from the target content according to the selection instruction.
  • the identification module 1041 of the content label identification module 104 in the virtual reality device recognizes information such as a label, a feature, a profile, and a title of the target content according to a selection instruction, and obtains at least one keyword.
  • the identification module 1041 in the content label identification module 104 in the virtual reality device identifies at least one keyword by identifying information such as a label, a feature, a profile, and a title of "Climbing Meru Peak” according to a selection instruction.
  • The video information is as follows: Type: "Documentary / Adventure / Action / Sports"; Location: "USA / USA / India"; Brief introduction: "Three elite climbers leave everything behind and head together to climb the Shark's Fin route. No one has ever successfully reached the summit of Meru. The natural environment here is very harsh. This is a first ascent in Himalayan adventure mountaineering." The keywords identified from the above information are: climbing, Meru, adventure/action/sports, climber, harsh environment, Himalayas.
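  • As an illustration of how such keywords might be pulled out of the content metadata (the vocabulary list and field names below are assumptions, not the application's actual identification algorithm), consider the following sketch.

```python
import re

# Rough sketch of how the identification module could pull keywords out of the
# content metadata; the vocabulary list and field names are invented for
# illustration only.
VOCABULARY = {"climbing", "meru", "adventure", "action", "sports",
              "climber", "harsh", "himalayas"}

def extract_keywords(metadata):
    text = " ".join(str(value) for value in metadata.values())
    tokens = re.findall(r"[A-Za-z]+", text.lower())
    return sorted({token for token in tokens if token in VOCABULARY})

meta = {"title": "Climbing Meru",
        "type": "Documentary/Adventure/Action/Sports",
        "intro": "The natural environment here is very harsh ... Himalayas."}
print(extract_keywords(meta))
# -> ['action', 'adventure', 'climbing', 'harsh', 'himalayas', 'meru', 'sports']
```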
  • Step 705 The virtual reality device matches to obtain a target 3D environment model and at least one target 3D environment data according to at least one keyword and a preset matching rule.
  • the matching rule 1042 in the content label recognition module 104 of the virtual reality device matches and obtains a target 3D environment model and at least one target 3D environment data according to the identified at least one keyword and a preset matching rule. That is, the matching rule 1042 matches the target content based on the identified at least one keyword “climbing, Meru, adventure / action / sports, climber, harsh environment, Himalayas”.
  • the preset matching rules include a 3D environment model rule and at least one 3D environment data rule. It can be understood that the at least one 3D environment data rule may include at least one of a smart home type rule, a material mapping rule, a sky ball rule, a lighting rule, a particle rule, and a background sound rule.
  • At least one keyword includes a first target keyword
  • the virtual reality device matches the target 3D environment model according to the at least one keyword and a preset matching rule, which may include: when the first target keyword and the 3D environment model rule During matching, the virtual reality device is matched to obtain a target 3D environment model corresponding to the first target keyword.
  • the environment model rule 10421 selects the target 3D environment model according to the first target keyword and sends a first instruction to the environment model module 1051.
  • the environmental model rule 10421 selects a mountain model according to the keywords associated with the mountain model such as “climbing, Meru, climber, Himalaya mountains”, and sends a first instruction to the environment model module 1051.
  • At least one 3D environment data rule includes a texture map rule
  • at least one target 3D environment data includes a target texture map data
  • at least one keyword includes a second target keyword
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the second target keyword matches the texture mapping rule, the virtual reality device obtains the target texture mapping data through matching.
  • the texture mapping rule 10422 selects the target texture mapping data according to the second target keyword, and sends a second instruction to the texture mapping module 1052.
  • the material mapping rule 10422 selects a colorful mountain material based on keywords without negative emotions, and then sends a second instruction to the material mapping module 1052.
  • At least one 3D environment data rule includes a sky ball rule
  • at least one target 3D environment data includes a target sky ball material data
  • at least one keyword includes a third target keyword
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the third target keyword matches the sky ball rule, the virtual reality device obtains the target sky ball texture data through matching.
  • the sky ball rule 10423 selects the target sky ball texture data according to the third target keyword, and sends a third instruction to the sky ball module 1053.
  • The sky ball rule 10423 selects a dark-cloud sky ball according to the keyword "harsh environment", and issues a third instruction to the sky ball module 1053.
  • At least one 3D environment data rule includes a lighting rule
  • at least one target 3D environment data includes target lighting data
  • at least one keyword includes a fourth target keyword
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the fourth target keyword matches the lighting rule, the virtual reality device obtains the target lighting data through matching.
  • the lighting rule 10424 selects the target lighting data according to the fourth target keyword and sends a fourth instruction to the lighting module 1054.
  • The lighting rule 10424 selects weak light according to the keyword "harsh environment", and then issues a fourth instruction to the lighting module 1054.
  • At least one 3D environment data rule includes a particle rule
  • at least one target 3D environment data includes a target particle data
  • at least one keyword includes a fifth target keyword
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the fifth target keyword matches the particle rule, the virtual reality device obtains the target particle parameters through matching.
  • the particle rule 10425 selects a target particle parameter according to a fifth target keyword, and issues a fifth instruction to the particle module 1055.
  • the particle rule 10425 selects a white particle parameter according to the keyword "Himalayas", and issues a fifth instruction to the particle module 1055.
  • At least one 3D environment data rule includes a background sound rule
  • at least one target 3D environment data includes target audio file data
  • at least one keyword includes a sixth target keyword
  • The virtual reality device matching to obtain at least one target 3D environment data according to the at least one keyword and a preset matching rule may include: when the sixth target keyword matches the background sound rule, the virtual reality device obtains the target audio file data through matching.
  • the background sound rule 10426 selects the target background sound file data name according to the sixth target keyword, and sends a sixth instruction to the audio module 109.
  • the background sound rule 10426 selects inspirational background sounds according to the keywords "adventure, sports, and dreams", and sends a sixth instruction to the audio module 109.
  • the virtual environment module 105 in the virtual reality device can adjust the environment model module 1051, the material texture module 1052, the sky ball module 1053, the lighting module 1054, and the particle module 1055 according to their corresponding instructions.
  • the audio module 109 in the virtual reality device selects a corresponding background sound file according to the sixth instruction and the background sound file name.
  • the audio module 109 selects a motivational background sound file according to the background sound file name.
  • Optionally, a unified interface can also be provided, through which the type and execution time of the matching rules are preset by the content provider; during content playback, at the preset time, the rules are issued to the corresponding devices for execution.
  • For example, for the VR movie content "Nie Xiaoqian", the air conditioner is started, the temperature is set to "cold", and the air volume is set to "weak", so that the user feels the gloomy atmosphere more strongly.
  • Step 706 The virtual reality device matches and obtains operating parameters of the smart home device according to at least one keyword and a preset matching rule.
  • the smart home rule 10427 matches and selects the operating parameters of the smart home device according to the seventh target keyword and the smart home rule.
  • The smart home rule 10427 selects terminal parameters of cold temperature and strong air volume according to the keywords "Meru, Himalayas, and harsh environment", and sends a seventh instruction to the smart home control system 200.
  • Step 707 The virtual reality device sends the operating parameters of the smart home device to the server.
  • The operating parameters are used by the server to control the smart home device to operate in accordance with the operating parameters. That is, the smart home rule 10427 sets the operating parameters of the smart home device according to the keywords appearing in the title, tag, profile, or content description, and then sends a seventh instruction to the smart home control system 200.
  • The operating parameters are used by the server to control the smart home device to run; it can be understood that the smart home control system 200 can be regarded as the server.
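  • Illustratively, the "seventh instruction" could be serialized as a small structured payload such as the sketch below; the field names and the JSON transport are assumptions, not a definition from the application.

```python
import json

# Hypothetical shape of the "seventh instruction" sent to the smart home
# control system; the field names and the JSON transport are assumptions
# made for illustration only.
instruction = {
    "type": "smart_home_control",
    "operating_parameters": {"temperature": "cold", "air_volume": "strong"},
}
payload = json.dumps(instruction)
print(payload)  # what the virtual reality device could hand to its transceiver
```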
  • Step 708 The server controls the corresponding smart home device to run.
  • the operating parameters include at least one of temperature parameters, humidity parameters, air volume parameters, wind direction parameters, odor parameters, and curtain opening and closing degree parameters.
  • the temperature parameter is used by the server to control the smart home device to operate according to the temperature parameter;
  • the humidity parameter is used by the server to control the smart home device to operate according to the humidity parameter;
  • the air volume parameter and the wind direction parameter are used by the server to control the smart home device to operate with the wind direction corresponding to the wind direction parameter and the air volume corresponding to the air volume parameter;
  • the odor parameter is used by the server to control the smart home equipment to emit the corresponding odor.
  • The smart home device 210 required by the seventh instruction starts to operate according to the operating parameters in the seventh instruction; if the smart home device is already running, its configuration can be adjusted according to the operating parameters in the seventh instruction. For example, the air-conditioning terminal device in the smart home device 210 operates in a cold-temperature, strong-wind mode.
  • Step 709 The virtual reality device plays background music according to the target audio file.
  • The background music can be played according to the target audio file in the sixth instruction, or the virtual reality device can send the sixth instruction to the audio device 300, and the separate audio device 300 plays the background music according to the target audio file in the sixth instruction.
  • This audio device 300 may also belong to a smart home device.
  • steps 706-709 are optional steps.
  • Step 710 The virtual reality device applies at least one target 3D environment data to the target 3D environment model to present a corresponding 3D virtual reality environment.
  • the virtual reality device renders a 3D virtual reality environment according to the target 3D environment model and at least one target environment data; the virtual reality device displays the 3D virtual reality environment. That is, the rendering module 106 in the virtual reality device performs graphic rendering and packaging of data of three-dimensional objects in various virtual reality environments.
  • The driver module 107 in the virtual reality device drives the graphics card to perform calculations, converts the data rendered by the rendering module 106 into graphics output, and sends it to the display module 108.
  • The rendering module 106 in the virtual reality device may perform graphic rendering based on the mountain model indicated in the first instruction, the colorful mountain material indicated in the second instruction, the dark-cloud sky ball indicated in the third instruction, the weak light indicated in the fourth instruction, and the white particle data indicated in the fifth instruction.
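  • The composition of these five pieces of data into one rendered scene can be pictured with the following purely illustrative sketch; the Scene class and its field names are assumptions, not the application's data structures.

```python
from dataclasses import dataclass

# Schematic composition of the rendered scene from the five instructions
# described above; the Scene class and its field names are illustrative only.
@dataclass
class Scene:
    model: str
    texture: str
    sky_ball: str
    lighting: str
    particles: str

    def describe(self):
        return (f"{self.model} with {self.texture}, {self.sky_ball}, "
                f"{self.lighting} light and {self.particles}")

scene = Scene(model="mountain_model", texture="colorful_mountain_material",
              sky_ball="dark_cloud_sky_ball", lighting="weak",
              particles="white_particles")
print(scene.describe())
```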
  • the display module 108 in the virtual reality device presents the rendered graphical user interface to the user. As shown in FIG. 8, FIG. 8 is a schematic diagram of a graphical user interface presented by a virtual reality device.
  • This application combines a variety of virtual reality environment rendering methods, such as environment models and texture materials, which effectively reduces the thematic operation requirements for matching content and reduces operating manpower and costs; it also lowers the computing performance requirements for virtual reality devices. By combining smart home systems and adjusting temperature, humidity, wind, sound, and so on according to the content in VR, the user's multi-sensory experience is effectively improved, and the space and price costs of a virtual reality multi-sensory experience are also reduced.
  • FIG. 9A is a schematic diagram of an embodiment of a virtual reality device according to an embodiment of the present application.
  • Can include:
  • a generating module 901 configured to generate a selection instruction for the target content in response to a user operation
  • An identification module 902 configured to identify at least one keyword from the target content according to the selection instruction
  • a matching module 903 is configured to obtain a target 3D environment model and at least one target 3D environment data according to at least one keyword and a preset matching rule.
  • the preset matching rule includes a 3D environment model rule and at least one 3D environment data rule.
  • a display module 904 is configured to apply at least one target 3D environment data to a target 3D environment model and present a corresponding 3D virtual reality environment.
  • the at least one 3D environment data rule includes at least one of a smart home type rule, a texture mapping rule, a sky ball rule, a lighting rule, a particle rule, and a background sound rule.
  • At least one keyword includes a first target keyword
  • the matching module 903 is specifically configured to obtain a target 3D environment model corresponding to the first target keyword when the first target keyword matches the 3D environment model rule.
  • At least one 3D environment data rule includes a texture map rule
  • at least one target 3D environment data includes a target texture map data
  • at least one keyword includes a second target keyword
  • the matching module 903 is specifically configured to obtain target texture map data when the second target keyword matches the texture map rule.
  • At least one 3D environment data rule includes a sky ball rule
  • at least one target 3D environment data includes target sky ball texture data
  • at least one keyword includes a third target keyword.
  • the matching module 903 is specifically configured to obtain the target skyball texture data when the third target keyword matches the skyball rule.
  • At least one 3D environment data rule includes a lighting rule
  • at least one target 3D environment data includes target lighting data
  • at least one keyword includes a fourth target keyword
  • the matching module 903 is specifically configured to obtain target illumination data when the fourth target keyword matches the illumination rule.
  • At least one 3D environment data rule includes a particle rule
  • at least one target 3D environment data includes target particle data
  • at least one keyword includes a fifth target keyword
  • the matching module 903 is specifically configured to obtain target particle parameters when the fifth target keyword matches the particle rule.
  • At least one 3D environment data rule includes a background sound rule
  • at least one target 3D environment data includes target audio file data
  • at least one keyword includes a sixth target keyword.
  • the matching module 903 is specifically configured to obtain target audio file data when the sixth target keyword matches the background sound rule.
  • FIG. 9B is a schematic diagram of an embodiment of a virtual reality device in the embodiment of the present application.
  • Virtual reality equipment also includes:
  • the sending module 905 is configured to send a control instruction to the smart home device, where the control instruction includes target audio file data, and the target audio file data is used by the smart home device for playback.
  • FIG. 9C is a schematic diagram of an embodiment of a virtual reality device in the embodiment of the present application.
  • Virtual reality equipment also includes:
  • the playback module 906 is configured to perform playback according to the target audio file data.
  • the display module 904 is specifically configured to render a 3D virtual reality environment according to the target 3D environment model and at least one target environment data; and display the 3D virtual reality environment.
  • the matching module 903 is further configured to match the operating parameters of the smart home device according to at least one keyword and a preset matching rule;
  • the sending module 905 is configured to send operating parameters of the smart home device to the server, and the operating parameters are used by the server to control the smart home device to perform operations according to the operating parameters.
  • the operating parameter includes at least one of a temperature parameter, a humidity parameter, an air volume parameter, a wind direction parameter, and an odor parameter;
  • the temperature parameter is used by the server to control the smart home equipment to operate according to the temperature parameter
  • the humidity parameter is used by the server to control the smart home equipment to operate according to the humidity parameter
  • the air volume parameter and wind direction parameter are used by the server to control the smart home device to operate according to the wind direction corresponding to the wind direction parameter and according to the air volume parameter;
  • the odor parameter is used by the server to control the smart home equipment to emit the corresponding odor.
  • FIG. 10 is a diagram of an embodiment of the virtual reality device in the embodiment of the present application.
  • the virtual reality device 10 may include at least one processor 1001, at least one transceiver 1002, at least one memory 1003, at least one display 1004, and an input device 1005 all connected to the bus.
  • The virtual reality device involved in this embodiment of the present application may have more or fewer components than shown in FIG. 10, may combine two or more components, or may have a different component configuration or arrangement; each component may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
  • the processor 1001 can implement the functions of the generation module 901, the identification module 902, the matching module 903, and the playback module 906 of the virtual reality device in the embodiments shown in FIGS. 9A, 9B, and 9C.
  • the transceiver 1002 can implement the function of the sending module 905 of the virtual reality device in the embodiment shown in FIG. 9B.
  • the transceiver 1002 can also be used for information exchange between the virtual reality device and the server.
  • the memory 1003 has various structures and is used to store program instructions.
  • the processor 1001 is configured to execute the instructions in the memory 1003 to implement the display method in the embodiment shown in FIG. 7; the display 1004 can implement the function of the display module 904 of the virtual reality device in the embodiments shown in FIGS. 9A, 9B, and 9C; and the input device 1005 may be used by a user to input operations to the virtual reality device.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more usable media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division into units is only a logical function division; in actual implementation there may be other divisions.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solutions of the present application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application.
  • the foregoing storage media include any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Hardware Design (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of this application disclose a display method for matching keywords, identified from a user's selection of target content, against preset matching rules to obtain a target 3D environment model and at least one piece of target 3D environment data, so as to present a corresponding 3D virtual reality environment. The method in the embodiments of this application includes: a virtual reality device generates, in response to a user operation, a selection instruction for target content; the virtual reality device identifies at least one keyword from the target content according to the selection instruction; the virtual reality device obtains, through matching, a target 3D environment model and at least one piece of target 3D environment data according to the at least one keyword and preset matching rules, where the preset matching rules include a 3D environment model rule and at least one 3D environment data rule; and the virtual reality device applies the at least one piece of target 3D environment data to the target 3D environment model to present the corresponding 3D virtual reality environment.

Description

Display method and virtual reality device
This application claims priority to Chinese Patent Application No. 201810922645.9, filed with the China National Intellectual Property Administration on August 14, 2018 and entitled "Display Method and Virtual Reality Device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of virtual reality technologies, and in particular, to a display method and a virtual reality device.
Background
Current mainstream virtual reality (VR) applications include VR movie watching, VR gaming, VR shopping, and the like. A VR user interface generally has a home screen, as shown in FIG. 1A, on which the user selects the application to enter. After an application is entered, further screens are shown for selecting specific content within the application. For example, FIG. 1B is a schematic diagram of the video selection screen entered after the YouTube application is selected, and FIG. 1C is a schematic diagram of the film selection screen entered after a movie application is selected.
Both screens contain two broad categories of content: foreground content, including the icons for selecting applications (APPs) on the home screen and the preview images of items inside an APP; and background content, such as the landscape picture shown in FIG. 1B or the shark picture shown in FIG. 1C. The relationship between this background content and the foreground content is weak, and the background lacks a sense of depth.
In one implementation, to give the user an immersive sense of presence, the common industry practice is to swap in a corresponding virtual environment for each usage scenario, such as the home screen, the app store, or a virtual cinema. FIG. 2A shows an existing process architecture. Taking Oculus as an example, the virtual environment of its home screen is the open villa environment shown in FIG. 2B; when the Netflix application is entered, the virtual environment switches to a red-toned living-room environment, and the vast video catalog is displayed on a virtual television as in real life, as shown in FIG. 2C. However, the virtual environment so implemented is merely a static panoramic picture; the user cannot interact with it, so the interactive nature of virtual reality is not realized. Moreover, the content has little connection to the virtual environment: matching a single virtual environment to massive amounts of different content lacks imagination. In addition, in the prior art, matching a dedicated virtual environment to every piece of content poses a great challenge to the performance of existing hardware. For example, a standard-definition static panoramic picture is a 3840*3840-pixel file of about 15 MB; for tens of thousands of content items, the prior art cannot match one virtual environment to each item.
Summary
Embodiments of this application provide a display method and a virtual reality device, used to match keywords identified from a user's selection of target content against preset matching rules to obtain a target 3D environment model and at least one piece of target 3D environment data, so as to present a corresponding 3D virtual reality environment.
In view of this, a first aspect of this application provides a display method, which may include: a virtual reality device generates, in response to a user operation, a selection instruction for target content, where the target content may be, without limitation, an item in a video content list, a game content list, an application content list, or the like; the virtual reality device identifies at least one keyword from the target content according to the selection instruction, for example by recognizing the tags, features, synopsis, title, and other information of the target content; the virtual reality device obtains, through matching, a target 3D environment model and at least one piece of target 3D environment data according to the at least one keyword and preset matching rules, where the preset matching rules are of multiple types and may include a 3D environment model rule and at least one 3D environment data rule; and the virtual reality device applies the at least one piece of target 3D environment data to the target 3D environment model to present the corresponding 3D virtual reality environment.
In the embodiments of this application, the virtual reality device matches the keywords identified from the user's selection of target content against the preset matching rules to obtain the target 3D environment model and the at least one piece of target 3D environment data, and thereby presents the corresponding 3D virtual reality environment. This effectively reduces the need for content-specific curation, cutting operating labor and cost, and also lowers the computing-performance requirements on the virtual reality device.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes at least one of a smart home type rule, a material texture rule, a sky sphere rule, a lighting rule, a particle rule, and a background sound rule. The 3D environment data rules include but are not limited to the above; this brief enumeration makes the technical solution more specific and clear.
Optionally, in some embodiments of this application, the at least one keyword includes a first target keyword, and obtaining the target 3D environment model through matching may include: when the first target keyword matches the 3D environment model rule, the virtual reality device obtains the target 3D environment model corresponding to the first target keyword. For example, the target 3D environment model may be an ocean model, a glacier model, a desert model, a plain model, a grassland model, a forest model, a mountain model, a valley model, or the like. This gives a concrete implementation for this case and increases the feasibility of the solution.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the material texture rule, the at least one piece of target 3D environment data includes target material texture data, and the at least one keyword includes a second target keyword; obtaining the at least one piece of target 3D environment data through matching may include: when the second target keyword matches the material texture rule, the virtual reality device obtains the target material texture data. This gives a concrete implementation for this case and increases the feasibility of the solution.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the sky sphere rule, the at least one piece of target 3D environment data includes target sky sphere material data, and the at least one keyword includes a third target keyword; obtaining the at least one piece of target 3D environment data through matching may include: when the third target keyword matches the sky sphere rule, the virtual reality device obtains the target sky sphere material data. This gives a concrete implementation for this case and increases the feasibility of the solution.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the lighting rule, the at least one piece of target 3D environment data includes target lighting data, and the at least one keyword includes a fourth target keyword; obtaining the at least one piece of target 3D environment data through matching may include: when the fourth target keyword matches the lighting rule, the virtual reality device obtains the target lighting data. This gives a concrete implementation for this case and increases the feasibility of the solution.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the particle rule, the at least one piece of target 3D environment data includes target particle data, and the at least one keyword includes a fifth target keyword; obtaining the at least one piece of target 3D environment data through matching may include: when the fifth target keyword matches the particle rule, the virtual reality device obtains the target particle parameters. This gives a concrete implementation for this case and increases the feasibility of the solution.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the background sound rule, the at least one piece of target 3D environment data includes target audio file data, and the at least one keyword includes a sixth target keyword; obtaining the at least one piece of target 3D environment data through matching may include: when the sixth target keyword matches the background sound rule, the virtual reality device obtains the target audio file data. This gives a concrete implementation for this case and increases the feasibility of the solution.
Optionally, in some embodiments of this application, the method may further include: the virtual reality device sends a control instruction to the smart home device, where the control instruction includes the target audio file data, and the target audio file data is used by the smart home device for playback. When the virtual reality device obtains the target audio file data through matching, it may choose to have a smart home device play it, which improves the flexibility of the solution.
Optionally, in some embodiments of this application, the method may further include: the virtual reality device performs playback according to the target audio file data. The virtual reality device may also choose to play the audio itself, which reduces latency and saves transmission resources.
Optionally, in some embodiments of this application, applying the at least one piece of target 3D environment data to the target 3D environment model to present the corresponding 3D virtual reality environment may include: the virtual reality device renders the 3D virtual reality environment according to the target 3D environment model and the at least one piece of target environment data, and displays the 3D virtual reality environment. This provides one concrete implementation of the presentation and increases the feasibility of the solution.
Optionally, in some embodiments of this application, the method may further include: the virtual reality device obtains, through matching, operating parameters of a smart home device according to the at least one keyword and the preset matching rules; and the virtual reality device sends the operating parameters of the smart home device to a server, where the operating parameters are used by the server to control the smart home device to run accordingly. The presented 3D virtual reality environment can thus be combined with smart home devices, making effective use of resources and giving the user a higher-quality immersive experience.
Optionally, in some embodiments of this application, the operating parameters include at least one of a temperature parameter, a humidity parameter, an air volume parameter, a wind direction parameter, and an odor parameter, where the temperature parameter is used by the server to control the smart home device to run at that temperature; the humidity parameter is used by the server to control the smart home device to run at that humidity; the air volume parameter and the wind direction parameter are used by the server to control the smart home device to run at the air volume in the wind direction corresponding to the wind direction parameter; and the odor parameter is used by the server to control the smart home device to emit the corresponding odor. This embodiment describes the operating parameters of the smart home devices and their corresponding functions. By working with a smart home system to adjust temperature, humidity, wind, sound, and so on according to the content in virtual reality (VR), the user's multi-sensory experience is effectively improved, and the space and price cost of multi-sensory VR experiences is reduced.
Another aspect of the embodiments of this application provides a virtual reality device that has the function of matching keywords, identified from the user's selection of target content, to obtain a target 3D environment model and at least one piece of target 3D environment data, and thereby presenting the corresponding 3D virtual reality environment. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function.
Another aspect of the embodiments of this application provides a virtual reality device, which may include: a transceiver, configured to communicate with an apparatus other than the virtual reality device; a memory, configured to store computer-executable instructions; and one or more processors connected to the memory and the transceiver through a bus, where the processor executes the computer-executable instructions stored in the memory and one or more computer programs, the one or more computer programs are stored in the memory and include instructions, and when the instructions are executed by the virtual reality device, the virtual reality device performs the method according to the first aspect or any optional implementation of the first aspect.
Another aspect of the embodiments of this application provides a wireless communication apparatus, which may include:
at least one processor, a memory, a transceiver circuit, and a bus system, where the processor, the memory, and the transceiver circuit are coupled through the bus system, the wireless communication apparatus communicates with a remote access unit through the transceiver circuit, the memory is configured to store program instructions, and the at least one processor is configured to execute the program instructions stored in the memory so that the wireless communication apparatus performs the part of the method according to the first aspect that is performed by the virtual reality device. The wireless communication apparatus may be the virtual reality device, or a chip applied in the virtual reality device to perform the corresponding functions.
Another aspect of the embodiments of this application provides a storage medium. It should be noted that the technical solutions of this application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and stores the computer software instructions used by the foregoing virtual reality device, including the programs designed for the virtual reality device to perform the foregoing aspects.
The storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Another aspect of the embodiments of this application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method according to the foregoing aspects or any optional implementation thereof.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments and the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and other drawings may be obtained from them.
FIG. 1A is a schematic diagram of the home screen displayed by a virtual reality device;
FIG. 1B is a schematic diagram of the video selection screen entered after the YouTube application is selected;
FIG. 1C is a schematic diagram of the film selection screen entered after a movie application is selected;
FIG. 2A is an existing process architecture diagram;
FIG. 2B is a schematic diagram of an existing home-screen virtual environment;
FIG. 2C is a schematic diagram of an existing displayed virtual environment;
FIG. 3 is a schematic diagram of an existing immersive experience;
FIG. 4 is a schematic diagram of a smart home control system connected to various devices;
FIG. 5 is a system architecture diagram to which this application applies;
FIG. 6A is a schematic diagram of the connection between a virtual reality device, a smart home control system, and an audio device in an embodiment of this application;
FIG. 6B is a schematic diagram of the functional modules of the content tag recognition module;
FIG. 6C is a schematic diagram of the results that the various matching rules can match;
FIG. 7 is a schematic diagram of an embodiment of the display method in the embodiments of this application;
FIG. 8 is a schematic diagram of a graphical user interface presented by a virtual reality device;
FIG. 9A is a schematic diagram of an embodiment of the virtual reality device in the embodiments of this application;
FIG. 9B is a schematic diagram of an embodiment of the virtual reality device in the embodiments of this application;
FIG. 9C is a schematic diagram of an embodiment of the virtual reality device in the embodiments of this application;
FIG. 10 is a diagram of an embodiment of the virtual reality device in the embodiments of this application.
Detailed Description
Embodiments of this application provide a display method and a virtual reality device, used to match keywords identified from a user's selection of target content against preset matching rules to obtain a target 3D environment model and at least one piece of target 3D environment data, so as to present a corresponding 3D virtual reality environment.
To make the solutions of this application better understood by persons skilled in the art, the following describes the technical solutions in the embodiments of this application with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of this application; embodiments based on this application shall fall within the protection scope of this application.
In one implementation, the virtual reality device may also use external mechanical equipment in coordination with the content to add multi-sensory experiences, such as smells, smoke, rain, mist, and seat vibration, providing the user with better multi-sensory immersion. For example, in a racing-game experience the user operates a steering wheel in the control module; the input module of the virtual environment receives the instructions of the control module and feeds them back to the racing program logic, which, according to the user's operation, sends instructions that make the mechanical seat vibrate or raise and lower according to the driving situation. FIG. 3 is a schematic diagram of such an existing immersive experience.
The foregoing prior art has the following drawbacks: it is suitable only for public entertainment venues and, besides requiring a certain amount of space, needs large external mechanical equipment, so its cost is high. In addition, such mechanical equipment matches little content, is inconvenient to move, and is expensive to maintain, which hinders the promotion and popularization of VR content.
The terms appearing in this application are briefly explained below:
Virtual reality: a computer-simulated three-dimensional virtual world that provides the user with simulated visual, auditory, tactile, and other sensory input, so that the user feels present in the scene and can observe things in the three-dimensional space in real time and without restriction.
Inertial measurement unit: a device that measures the three-axis attitude angles (or angular rates) and acceleration of an object. Generally, an inertial measurement unit (IMU) contains three single-axis accelerometers and three single-axis gyroscopes; the accelerometers detect the acceleration of the object on the three independent axes of the carrier coordinate system, and the gyroscopes detect the angular velocity of the carrier relative to the navigation coordinate system. The angular velocity and acceleration of the object in three-dimensional space are measured, and the attitude of the object is computed from them.
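As a minimal illustration of the attitude computation mentioned above (not part of the patented method), the following Python sketch integrates gyroscope angular-rate readings into an attitude estimate. The sample rate and readings are hypothetical, and a real IMU fusion would also use the accelerometer.

```python
import numpy as np

def integrate_gyro(attitude_deg, gyro_dps, dt):
    """Integrate body angular rates (deg/s) over dt seconds into roll/pitch/yaw (deg).
    Simple Euler integration for illustration only."""
    return attitude_deg + np.asarray(gyro_dps) * dt

# Hypothetical 100 Hz gyro samples: a constant 10 deg/s yaw rate for one second.
attitude = np.zeros(3)                      # [roll, pitch, yaw] in degrees
for _ in range(100):
    attitude = integrate_gyro(attitude, [0.0, 0.0, 10.0], dt=0.01)
print(attitude)                             # roughly 10 degrees of accumulated yaw
```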
3D engine: a collection of algorithm implementations that abstract real-world matter into polygons, curves, and other representations, perform the related computations in a computer, and output the final image. A 3D engine is generally a low-level tool supporting higher-level graphics software development; it is like building a "real world" inside the computer.
Three-dimensional model: a polygonal representation of an object, usually displayed with a computer or other video device. The displayed object may be a real-world entity or a fictitious one. Three-dimensional models are often generated with dedicated 3D modeling tools, but can also be generated in other ways. As a collection of points and other data, a three-dimensional model can be built by hand or generated by algorithms.
Material texture: also called texture mapping; in computer graphics, wrapping a bitmap stored in memory onto the surface of a 3D rendered object. Texture mapping gives an object rich detail and simulates a complex appearance in a simple way: an image (texture) is mapped onto a simple shape in the scene, like a decal applied to a flat surface.
Lighting system: also called an illumination system; its role is to provide light sources for the scene so as to illuminate it.
Rendering: in computer graphics, the process of generating an image from a model with software. A model is a description of a three-dimensional object in a strictly defined language or data structure, including geometry, viewpoint, texture, and lighting information.
Content tag: the main information of a piece of content used to identify it, such as content category, duration, producer, keywords, and synopsis.
Smart home: the embodiment of the Internet of Things under the influence of the Internet. As shown in FIG. 4, which is a schematic diagram of a smart home control system connected to various devices, the smart home control system connects the various devices in a home (such as audio/video devices, lighting systems, curtain control, air-conditioning control, digital cinema systems, audio/video servers, media cabinet systems, and networked appliances) through IoT technology, and provides appliance control, lighting control, indoor and outdoor remote control, environmental monitoring, HVAC control, programmable timing control, and other functions and means. The smart home control system can be connected to and controlled by a telephone, mobile phone, computer, and so on.
In the technical solution of this application, the virtual reality device may consist of a browsable information area (foreground content) and a virtual environment layer (background content). When the user has not selected any content, the virtual environment presents a preset default environment; after the user selects and enters a piece of content, the virtual reality device recognizes the content tags of the selected content, generates a virtual environment matching those tags, and at the same time sends the content tags to the smart home control system, whose terminal devices run according to them. This application addresses the problem that a single fixed background environment cannot meet content-matching requirements, and the problem of how to satisfy the user's demand for diverse virtual environments under limited system resources.
It should be noted that virtual reality systems can be classified by function into four types: immersive virtual reality systems, augmented-reality-type virtual reality systems, desktop virtual reality systems, and distributed virtual reality systems. This application mainly concerns immersive virtual reality systems, a common form of which is based on a head-mounted display.
As shown in FIG. 5, FIG. 5 is the system architecture diagram to which this application applies. In this application, the virtual reality device is used to present the virtual objects generated by the system, for example the environment, weather, and sound effects. The virtual reality device may include but is not limited to: terminals with head-mounted displays that shut out the user's external sight and hearing, together with controllers that can control and interact with the displayed content; all-in-one virtual reality headsets; virtual reality headsets connected to a mobile phone; and virtual reality headsets connected to a computer.
The virtual reality device is connected through a network to servers that provide various services; a server may be a cloud-service server, a social server, or the like. The virtual reality device can feed the user's operations back to the server.
The smart home control system can comprehensively manage information appliances, air-conditioning systems, underfloor heating, curtain control, lighting control, humidity control, and so on through the network; the home system can connect to the virtual reality device via Bluetooth, infrared, and other means.
As shown in FIG. 6A, FIG. 6A is a schematic diagram of the connection between the virtual reality device, the smart home control system, and an audio device in the embodiments of this application. The virtual reality device 100 may include but is not limited to the following functional modules: a communication module 101, an input module 102, a graphical user interface module 103, a content tag recognition module 104, a virtual environment module 105, a rendering module 106, a driver module 107, a display module 108, and an audio module 109.
The functional modules are described as follows:
Communication module 101: can receive instructions or information from other devices over cellular, Ethernet, wireless fidelity (WiFi), Bluetooth, infrared, and other communication means, and can also send the virtual reality device's data to the cloud, a network, a system, or other devices. It can exchange information with the smart home control system 200.
Input module 102: can send operation-instruction information to the graphical user interface module 103 through input means such as gestures, handheld controllers, voice, and touchpads.
Graphical user interface module 103: used to build the operation interfaces, such as cards, text, and buttons, inside the three-dimensional objects with which the user interacts.
Content tag recognition module 104: as shown in FIG. 6B, the content tag recognition module 104 may consist of a recognition module 1041 and matching rules 1042. It matches the recognition result of the content to the corresponding virtual environment according to the matching rules, obtains the matching result, and sends corresponding instructions to the virtual environment module 105 and the audio module 109 according to the matching result.
Recognition module 1041: used to recognize keywords in the content's tags, features, synopsis, title, and other information.
Matching rules 1042: may include but are not limited to a (3D) environment model rule, a material texture rule, a sky sphere rule, a lighting rule, a particle rule, a background sound rule, and a smart home rule. In this application, a matching rule is a rule used to match the identified keywords and obtain the target 3D environment model and target 3D environment data; that is, a matching rule defines the mapping between keywords and the target 3D environment model and target 3D environment data. The concrete implementation of the matching rules is not limited and can be realized by various software and hardware methods, for example lookup or regular expressions. In this application, the matching rules can be of multiple types, including a 3D environment model rule and at least one 3D environment data rule.
As shown in FIG. 6C, which is a schematic diagram of the results that the various matching rules can match, the environment models involved in the 3D environment model rule may include an ocean model, a glacier model, and other categories, and the columns labeled material texture, sky sphere, lighting, particle, background sound, smart home, and so on indicate the categories available for each kind of environment data in the 3D environment data rules. The rules are introduced below.
Environment model rule 10421: the 3D environment model rule, used to select the corresponding model according to the environment keywords appearing in the title, tags, synopsis, or content introduction. Taking the mountain model as an example, the keywords that select the mountain model may include but are not limited to words associated with mountain ranges, peaks, mountaineering, climbing, and so on.
The following rules all belong to the 3D environment data rules.
Material texture rule 10422: based on the selected environment model, recognizes the emotional type of the film in the title, tags, synopsis, or content introduction and selects the corresponding material. Keywords for a grey-toned material are descriptions of negative emotions, including but not limited to sorrow, sadness, anger, tension, anxiety, pain, fear, hatred, death, and so on. When no such negative-emotion keyword is present, a colorful material may be used by default.
Sky sphere rule 10423: selects the corresponding sky sphere material according to keywords appearing in the title, tags, synopsis, or content introduction. For example, if the keyword "space" appears, a vast starry sky is selected.
Lighting rule 10424: classifies the keywords appearing in the title, tags, synopsis, or content introduction and then selects the corresponding lighting intensity. Taking strong lighting as an example, if positive-emotion keywords such as "comedy" or "youth" appear, strong lighting is selected; conversely, for negative-emotion keywords such as "thriller" or "horror", weak lighting is selected.
Particle rule 10425: adjusts the corresponding particle parameters according to keywords appearing in the title, tags, synopsis, or content introduction. Taking "flame particles" as an example, if keywords such as "blaze" or "fire" appear, which are directly associated with flames, flame particles are selected. Particle parameters can also be chosen from indirectly associated keywords: taking "white particles" as an example, if keywords such as "high peak", "snow peak", or "first summit" appear, which are indirectly associated with white, white particles are selected.
Background sound rule 10426: selects the corresponding audio file according to keywords appearing in the title, tags, synopsis, or content introduction. Taking "horror music" as an example, if keywords such as "ghost" or "haunted house" appear, horror music is selected.
Smart home rule 10427: sets the operating parameters of the smart home devices according to keywords appearing in the title, tags, synopsis, or content introduction. The operating parameters of smart home devices include temperature, humidity, air volume, wind direction, how far the curtains are opened, emitted odor, and so on.
For example, taking temperature: when keywords associated with high temperature such as "Amazon", "primeval forest", "volcano", "Africa", or "scorching" appear, a high temperature is selected. Taking air volume: if keywords such as "hurricane" or "typhoon" appear, a strong air volume is selected. Or, when the keywords contain horror words such as "thriller" or "ghost story", the air conditioner is turned on and set to a cold temperature and a weak air volume. Or, when the keywords contain "desert", the underfloor heating is turned on and set to a higher temperature. Or, when the keywords contain "flower", a scent-emitting device is turned on and set to give off a floral fragrance.
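The patent leaves the implementation of the matching rules open (it mentions lookup and regular expressions as possibilities). Purely as an illustration, the Python sketch below shows one way such a keyword-to-environment mapping could look; all rule entries and names are hypothetical examples drawn from the scenarios above, not the actual rule set.

```python
import re

# Hypothetical rule table: each rule type maps keyword patterns to a selection.
RULES = {
    "environment_model": [(r"mountain|peak|climb|himalaya", "mountain_model"),
                          (r"ocean|sea|diving",             "ocean_model")],
    "sky_sphere":        [(r"space|galaxy",                 "starry_sky"),
                          (r"harsh environment|storm",      "overcast_sky")],
    "lighting":          [(r"comedy|youth",                 "strong"),
                          (r"thriller|horror",              "weak")],
    "particle":          [(r"blaze|fire",                   "flame_particles"),
                          (r"snow|peak|himalaya",           "white_particles")],
    "background_sound":  [(r"ghost|haunted",                "horror_theme.ogg"),
                          (r"adventure|sport",              "inspirational_theme.ogg")],
    "smart_home":        [(r"volcano|africa|scorching",     {"temperature": "high"}),
                          (r"hurricane|typhoon",            {"air_volume": "strong"})],
}

def match_rules(keywords):
    """Return, per rule type, the first selection whose pattern matches any keyword."""
    text = " ".join(keywords).lower()
    result = {}
    for rule_type, entries in RULES.items():
        for pattern, selection in entries:
            if re.search(pattern, text):
                result[rule_type] = selection
                break
    return result

print(match_rules(["climbing", "Meru Peak", "adventure", "Himalayas", "harsh environment"]))
```

A real rule set would of course be much larger and could also weigh several keywords against each other instead of taking the first hit.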
Virtual environment module 105: used to handle all components of the virtual environment, and may contain an environment model module 1051, a material texture module 1052, a sky sphere module 1053, a lighting module 1054, a particle module 1055, and a physics model module.
Environment model module 1051: selects the corresponding environment model according to the instruction.
Material texture module 1052: controls the material and texture of the three-dimensional model according to the instruction.
Sky sphere module 1053: selects the material of the sky environment according to the instruction.
Lighting module 1054: controls the lighting-system parameters of the virtual environment according to the instruction, such as the position, intensity, color, and number of light sources.
Particle module 1055: controls particle attributes according to the instruction, such as color, size, speed, loop time, and transparency.
Rendering module 106: used to render and package the data of the virtual three-dimensional objects as graphics. The 3D rendering module mainly manages the whole 3D engine: the main camera of the scene determines the objects that need to be rendered and sends them through the rendering pipeline; the 3D engine encapsulates the rendering details and also provides access through pixel and vertex shaders.
Driver module 107: drives the graphics card to perform computation, outputting the data rendered by the rendering module 106 as graphics.
Display module 108: presents the rendered graphical user interface to the user in the virtual reality device.
Audio module 109: presents audio files to the user in the virtual reality device.
Smart home control system 200: used to control the devices in the smart home control system 200, for example adjusting the temperature and air volume of an air conditioner.
Smart home device 210: the object that executes the instructions issued by the smart home control system 200, and may include but is not limited to home appliances (such as air conditioners and humidifiers), curtains, doors, scent-emitting devices, and the like.
Audio device 300: a device used to play audio files to the user, such as a speaker or sound system.
The technical solution of this application is further described below by way of an embodiment. As shown in FIG. 7, FIG. 7 is a schematic diagram of an embodiment of the display method in the embodiments of this application, which may include:
Step 701: The virtual reality device receives content delivered by a content server; that is, the communication module 101 in the virtual reality device receives the content delivered by the content server. For example, the content may be a video content list, a game content list, an application content list, or the like, which is not specifically limited.
Step 702: The virtual reality device displays the content; that is, the graphical user interface module 103 in the virtual reality device presents the content.
It should be noted that steps 701 and 702 are optional.
Step 703: The virtual reality device generates, in response to a user operation, a selection instruction for target content.
That is, through the input module 102 of the virtual reality device the user selects the target content from the content, generating the selection instruction. For example, the user selects the video "Meru" from the video content list through the input module 102, generating a selection instruction.
Step 704: The virtual reality device identifies at least one keyword from the target content according to the selection instruction.
The recognition module 1041 in the content tag recognition module 104 of the virtual reality device recognizes, according to the selection instruction, the tags, features, synopsis, title, and other information of the target content to obtain at least one keyword.
For example, the recognition module 1041 recognizes the tags, features, synopsis, title, and other information of "Meru" to obtain at least one keyword. The video information is as follows: type: "documentary/adventure/action/sport"; location: "USA/USA/India"; synopsis: "Three elite climbers leave everything behind and head together to the shark's fin of the climbing world, Meru Peak, which no one has ever summited. The natural environment here is extremely harsh, and among the adventurous climbs undertaken in the Himalayas this is the coveted first ascent." The keywords identified from this information are: climbing, Meru Peak, adventure/action/sport, climbers, harsh environment, Himalayas.
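The patent does not specify how the recognition module 1041 extracts keywords. As a sketch only, a trivial vocabulary scan over the title, tags, and synopsis fields could look like the following; the vocabulary list and field names are made-up placeholders, and a real module might instead rely on content tags, NLP, or a server-side dictionary.

```python
# Hypothetical keyword vocabulary the recognizer looks for.
VOCABULARY = ["climbing", "meru peak", "adventure", "action", "sport",
              "climbers", "harsh environment", "himalayas"]

def extract_keywords(title, tags, synopsis):
    """Return every vocabulary term that appears in the combined metadata text."""
    text = " ".join([title, tags, synopsis]).lower()
    return [term for term in VOCABULARY if term in text]

keywords = extract_keywords(
    title="Meru",
    tags="documentary/adventure/action/sport",
    synopsis="Three elite climbers attempt the first ascent of Meru Peak in the "
             "Himalayas, a harsh environment no one has summited before.",
)
print(keywords)
```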
Step 705: The virtual reality device obtains, through matching, a target 3D environment model and at least one piece of target 3D environment data according to the at least one keyword and the preset matching rules.
The matching rules 1042 in the content tag recognition module 104 of the virtual reality device match the identified at least one keyword against the preset matching rules to obtain the target 3D environment model and the at least one piece of target 3D environment data. That is, the matching rules 1042 match the target content according to the identified keywords "climbing, Meru Peak, adventure/action/sport, climbers, harsh environment, Himalayas".
The preset matching rules include a 3D environment model rule and at least one 3D environment data rule. It can be understood that the at least one 3D environment data rule may include at least one of a smart home type rule, a material texture rule, a sky sphere rule, a lighting rule, a particle rule, and a background sound rule.
The matching of the at least one keyword against the preset matching rules is detailed below:
(1) The at least one keyword includes a first target keyword. Obtaining the target 3D environment model through matching may include: when the first target keyword matches the 3D environment model rule, the virtual reality device obtains the target 3D environment model corresponding to the first target keyword.
For example, the environment model rule 10421 selects the target 3D environment model according to the first target keyword and then issues a first instruction to the environment model module 1051. For instance, after selecting the mountain model according to the mountain-related keywords "climbing, Meru Peak, climbers, Himalayas", the environment model rule 10421 issues the first instruction to the environment model module 1051.
(2) The at least one 3D environment data rule includes the material texture rule, the at least one piece of target 3D environment data includes target material texture data, and the at least one keyword includes a second target keyword. Obtaining the at least one piece of target 3D environment data through matching may include: when the second target keyword matches the material texture rule, the virtual reality device obtains the target material texture data.
For example, the material texture rule 10422 selects the target material texture data according to the second target keyword and then issues a second instruction to the material texture module 1052. For instance, since there are no negative-emotion keywords, the material texture rule 10422 selects a colorful mountain material and issues the second instruction to the material texture module 1052.
(3) The at least one 3D environment data rule includes the sky sphere rule, the at least one piece of target 3D environment data includes target sky sphere material data, and the at least one keyword includes a third target keyword. Obtaining the at least one piece of target 3D environment data through matching may include: when the third target keyword matches the sky sphere rule, the virtual reality device obtains the target sky sphere material data.
For example, the sky sphere rule 10423 selects the target sky sphere material data according to the third target keyword and then issues a third instruction to the sky sphere module 1053. For instance, the sky sphere rule 10423 selects an overcast sky sphere according to the keyword "harsh environment" and issues the third instruction to the sky sphere module 1053.
(4) The at least one 3D environment data rule includes the lighting rule, the at least one piece of target 3D environment data includes target lighting data, and the at least one keyword includes a fourth target keyword. Obtaining the at least one piece of target 3D environment data through matching may include: when the fourth target keyword matches the lighting rule, the virtual reality device obtains the target lighting data.
For example, the lighting rule 10424 selects the target lighting data according to the fourth target keyword and then issues a fourth instruction to the lighting module 1054. For instance, the lighting rule 10424 selects weak lighting according to the keyword "harsh environment" and issues the fourth instruction to the lighting module 1054.
(5) The at least one 3D environment data rule includes the particle rule, the at least one piece of target 3D environment data includes target particle data, and the at least one keyword includes a fifth target keyword. Obtaining the at least one piece of target 3D environment data through matching may include: when the fifth target keyword matches the particle rule, the virtual reality device obtains the target particle parameters.
For example, the particle rule 10425 selects the target particle parameters according to the fifth target keyword and then issues a fifth instruction to the particle module 1055. For instance, the particle rule 10425 selects white particle parameters according to the keyword "Himalayas" and issues the fifth instruction to the particle module 1055.
(6) The at least one 3D environment data rule includes the background sound rule, the at least one piece of target 3D environment data includes target audio file data, and the at least one keyword includes a sixth target keyword. Obtaining the at least one piece of target 3D environment data through matching may include: when the sixth target keyword matches the background sound rule, the virtual reality device obtains the target audio file data.
For example, the background sound rule 10426 selects the name of the target background sound file according to the sixth target keyword and then issues a sixth instruction to the audio module 109. For instance, the background sound rule 10426 selects an inspirational background sound according to the keywords "adventure, sport, coveted" and issues the sixth instruction to the audio module 109.
In summary, the virtual environment module 105 in the virtual reality device can adjust the environment model module 1051, the material texture module 1052, the sky sphere module 1053, the lighting module 1054, and the particle module 1055 according to their respective instructions. Optionally, the audio module 109 in the virtual reality device selects the corresponding background sound file according to the sixth instruction and the background sound file name; for example, the audio module 109 selects an inspirational background sound file.
Optionally, during steps 703 and 704, because automatic recognition inevitably has accuracy limitations, a unified interface can also be provided through which the content provider presets the type and execution time of the matching rules; during content playback, these are delivered to the corresponding devices for execution. For example, for the VR film "Nie Xiaoqian", the interface is preconfigured so that when playback reaches 25 minutes 36 seconds the air conditioner is started with the temperature set to "cold" and the air volume set to "weak", so the user feels the eerie atmosphere more strongly. As another example, if a film has a scene in which a doorbell rings, the interface is preconfigured so that when playback reaches that moment an instruction is sent to the doorbell of the smart home door to make it ring, so the user feels that their own doorbell is ringing, which increases the sense of presence.
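The content-provider interface described above is not specified further in the patent. As a sketch only, one could imagine it as a list of time-stamped device actions attached to a piece of content, as below; the field names, devices, and the second timestamp are illustrative assumptions.

```python
import json

# Hypothetical pre-configured triggers for a film: at a given playback offset
# (seconds), send the listed settings to the named smart home device.
timed_triggers = [
    {"at_seconds": 25 * 60 + 36,          # 25 min 36 s into playback
     "device": "air_conditioner",
     "settings": {"temperature": "cold", "air_volume": "weak"}},
    {"at_seconds": 41 * 60 + 2,           # a doorbell scene (made-up timestamp)
     "device": "doorbell",
     "settings": {"action": "ring"}},
]

def triggers_due(triggers, playback_seconds, already_fired):
    """Return triggers whose timestamp has been reached and that have not fired yet."""
    return [t for t in triggers
            if t["at_seconds"] <= playback_seconds and id(t) not in already_fired]

fired = set()
for trig in triggers_due(timed_triggers, playback_seconds=1540, already_fired=fired):
    fired.add(id(trig))
    print("send to", trig["device"], json.dumps(trig["settings"]))
```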
Step 706: The virtual reality device obtains, through matching, the operating parameters of a smart home device according to the at least one keyword and the preset matching rules.
The smart home rule 10427 matches and selects the operating parameters of the smart home device according to a seventh target keyword and the smart home rule. For example, the smart home rule 10427 selects terminal parameters of a cold temperature and a strong air volume according to the keywords "Meru Peak, Himalayan environment, harsh environment" and then issues a seventh instruction to the smart home control system 200.
Step 707: The virtual reality device sends the operating parameters of the smart home device to a server.
The operating parameters are used by the server to control the smart home device to run according to them. That is, the smart home rule 10427 sets the operating parameters of the smart home device according to the keywords appearing in the title, tags, synopsis, or content introduction, and then issues the seventh instruction to the smart home control system 200; the operating parameters are used by the server to control the smart home control system. It can be understood that the smart home control system 200 can be regarded as the server.
Step 708: The server controls the corresponding smart home device to run.
The operating parameters include at least one of a temperature parameter, a humidity parameter, an air volume parameter, a wind direction parameter, an odor parameter, and a curtain-opening parameter.
The temperature parameter is used by the server to control the smart home device to run at that temperature; the humidity parameter is used by the server to control the smart home device to run at that humidity; the air volume parameter and the wind direction parameter are used by the server to control the smart home device to run at the air volume in the wind direction corresponding to the wind direction parameter; and the odor parameter is used by the server to control the smart home device to emit the corresponding odor.
After receiving the seventh instruction, the server starts the smart home device 210 required by the seventh instruction with the operating parameters carried in the instruction; if the smart home device is already running, its configuration can be adjusted according to those operating parameters. For example, the air-conditioner terminal among the smart home devices 210 runs in cold-temperature and strong-wind mode.
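The patent does not define a wire format for the seventh instruction. As a hedged illustration only, the following sketch builds an operating-parameter message for the smart home control system; the JSON structure, field names, and keyword-to-parameter mapping are assumptions based on the examples above.

```python
import json

def build_operation_instruction(keywords):
    """Map keywords to hypothetical smart-home operating parameters (the 'seventh instruction')."""
    params = {}
    text = " ".join(keywords).lower()
    if "harsh environment" in text or "himalaya" in text:
        params["temperature"] = "cold"
        params["air_volume"] = "strong"
    if "desert" in text:
        params["floor_heating"] = "high"
    if "flower" in text:
        params["odor"] = "floral"
    return {"target": "smart_home_control_system", "operating_parameters": params}

instruction = build_operation_instruction(
    ["Meru Peak", "Himalayan environment", "harsh environment"])
print(json.dumps(instruction, indent=2))
# In a real device this message would be sent over the transceiver to the
# smart home control system 200, which then drives the air conditioner.
```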
Step 709: The virtual reality device plays the background music according to the target audio file.
It can be understood that the background music can be played on the virtual reality device according to the target audio file in the sixth instruction, or the virtual reality device can send the sixth instruction to the audio device 300, in which case the background music is played on the separate audio device 300 according to the target audio file in the sixth instruction. The audio device 300 may also belong to the smart home devices.
It should be noted that steps 706 to 709 are optional.
Step 710: The virtual reality device applies the at least one piece of target 3D environment data to the target 3D environment model and presents the corresponding 3D virtual reality environment.
The virtual reality device renders the 3D virtual reality environment according to the target 3D environment model and the at least one piece of target environment data, and displays it. That is, the rendering module 106 in the virtual reality device renders and packages the data of the three-dimensional objects in the virtual reality environment, and the driver module 107 drives the graphics card to perform computation, outputting the rendered data as graphics and delivering them to the display module 108.
For example, the rendering module 106 can render graphics from the data corresponding to the mountain model indicated in the first instruction, the colorful mountain material indicated in the second instruction, the overcast sky sphere indicated in the third instruction, the weak lighting indicated in the fourth instruction, and the white particles indicated in the fifth instruction. The display module 108 presents the rendered graphical user interface to the user, as shown in FIG. 8, which is a schematic diagram of a graphical user interface presented by the virtual reality device.
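The rendering step combines the results of the first to fifth instructions into one scene description before handing it to the 3D engine. The sketch below only assembles such a description under assumed parameter names; the `render` stub is a placeholder, since the actual rendering module 106 would call into a real 3D engine and GPU pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class SceneDescription:
    """Aggregated result of the first to fifth instructions (illustrative fields)."""
    environment_model: str = "default_model"
    material: str = "default_material"
    sky_sphere: str = "default_sky"
    lighting: str = "neutral"
    particles: list = field(default_factory=list)

def render(scene: SceneDescription) -> str:
    # Placeholder for the real rendering pipeline (camera, shaders, GPU output).
    return f"rendered[{scene.environment_model}/{scene.material}/{scene.sky_sphere}]"

scene = SceneDescription(
    environment_model="mountain_model",     # first instruction
    material="colorful_mountain_material",  # second instruction
    sky_sphere="overcast_sky",              # third instruction
    lighting="weak",                        # fourth instruction
    particles=["white_particles"],          # fifth instruction
)
print(render(scene))
```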
In the embodiments of this application, unlike the pre-made background images or virtual environments of the prior art, a compositional virtual-reality-environment rendering method is used that combines environment models, texture materials, and other elements. This effectively reduces the need for content-specific curation and cuts operating labor and cost, and it also lowers the computing-performance requirements on the virtual reality device. By working with a smart home system to adjust temperature, humidity, wind, sound, and so on according to the content in VR, the user's multi-sensory experience is effectively improved, and the space and price cost of multi-sensory VR experiences is reduced.
As shown in FIG. 9A, FIG. 9A is a schematic diagram of an embodiment of the virtual reality device in the embodiments of this application, which may include:
a generation module 901, configured to generate, in response to a user operation, a selection instruction for target content;
an identification module 902, configured to identify at least one keyword from the target content according to the selection instruction;
a matching module 903, configured to obtain, through matching, a target 3D environment model and at least one piece of target 3D environment data according to the at least one keyword and preset matching rules, where the preset matching rules include a 3D environment model rule and at least one 3D environment data rule; and
a display module 904, configured to apply the at least one piece of target 3D environment data to the target 3D environment model to present a corresponding 3D virtual reality environment.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes at least one of a smart home type rule, a material texture rule, a sky sphere rule, a lighting rule, a particle rule, and a background sound rule.
Optionally, in some embodiments of this application, the at least one keyword includes a first target keyword, and
the matching module 903 is specifically configured to obtain, when the first target keyword matches the 3D environment model rule, the target 3D environment model corresponding to the first target keyword.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the material texture rule, the at least one piece of target 3D environment data includes target material texture data, the at least one keyword includes a second target keyword, and
the matching module 903 is specifically configured to obtain the target material texture data when the second target keyword matches the material texture rule.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the sky sphere rule, the at least one piece of target 3D environment data includes target sky sphere material data, the at least one keyword includes a third target keyword, and
the matching module 903 is specifically configured to obtain the target sky sphere material data when the third target keyword matches the sky sphere rule.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the lighting rule, the at least one piece of target 3D environment data includes target lighting data, the at least one keyword includes a fourth target keyword, and
the matching module 903 is specifically configured to obtain the target lighting data when the fourth target keyword matches the lighting rule.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the particle rule, the at least one piece of target 3D environment data includes target particle data, the at least one keyword includes a fifth target keyword, and
the matching module 903 is specifically configured to obtain the target particle parameters when the fifth target keyword matches the particle rule.
Optionally, in some embodiments of this application, the at least one 3D environment data rule includes the background sound rule, the at least one piece of target 3D environment data includes target audio file data, the at least one keyword includes a sixth target keyword, and
the matching module 903 is specifically configured to obtain the target audio file data when the sixth target keyword matches the background sound rule.
Optionally, in some embodiments of this application, as shown in FIG. 9B, which is a schematic diagram of an embodiment of the virtual reality device in the embodiments of this application, the virtual reality device further includes:
a sending module 905, configured to send a control instruction to the smart home device, where the control instruction includes the target audio file data, and the target audio file data is used by the smart home device for playback.
Optionally, in some embodiments of this application, as shown in FIG. 9C, which is a schematic diagram of an embodiment of the virtual reality device in the embodiments of this application, the virtual reality device further includes:
a playback module 906, configured to perform playback according to the target audio file data.
Optionally, in some embodiments of this application,
the display module 904 is specifically configured to render the 3D virtual reality environment according to the target 3D environment model and the at least one piece of target environment data, and to display the 3D virtual reality environment.
Optionally, in some embodiments of this application,
the matching module 903 is further configured to obtain, through matching, the operating parameters of a smart home device according to the at least one keyword and the preset matching rules; and
the sending module 905 is configured to send the operating parameters of the smart home device to a server, where the operating parameters are used by the server to control the smart home device to run according to them.
Optionally, in some embodiments of this application, the operating parameters include at least one of a temperature parameter, a humidity parameter, an air volume parameter, a wind direction parameter, and an odor parameter, where
the temperature parameter is used by the server to control the smart home device to run at that temperature;
the humidity parameter is used by the server to control the smart home device to run at that humidity;
the air volume parameter and the wind direction parameter are used by the server to control the smart home device to run at the air volume in the wind direction corresponding to the wind direction parameter; and
the odor parameter is used by the server to control the smart home device to emit the corresponding odor.
The structure of the virtual reality device in the embodiments of this application is described below with reference to FIG. 10, which is a diagram of an embodiment of the virtual reality device in the embodiments of this application. The virtual reality device 10 may include at least one processor 1001, at least one transceiver 1002, at least one memory 1003, at least one display 1004, and an input device 1005, all connected to a bus. The virtual reality device in the embodiments of this application may have more or fewer components than shown in FIG. 10, may combine two or more components, or may have different component configurations or settings; each component may be implemented in hardware, software, or a combination of hardware and software that includes one or more signal-processing and/or application-specific integrated circuits.
Specifically, for the device embodiments, the processor 1001 can implement the functions of the generation module 901, the identification module 902, the matching module 903, and the playback module 906 of the virtual reality device in the embodiments shown in FIG. 9A, FIG. 9B, and FIG. 9C; the transceiver 1002 can implement the function of the sending module 905 of the virtual reality device in the embodiment shown in FIG. 9B, and can also be used for information exchange between the virtual reality device and the server; the memory 1003 has various structures and is used to store program instructions, and the processor 1001 executes the instructions in the memory 1003 to implement the display method in the embodiment shown in FIG. 7; the display 1004 can implement the function of the display module 904 of the virtual reality device in the embodiments shown in FIG. 9A, FIG. 9B, and FIG. 9C; and the input device 1005 can be used by the user to input operations to the virtual reality device.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used, the embodiments may be implemented fully or partially in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present invention are fully or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the foregoing systems, apparatuses, and units, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the described apparatus embodiments are merely illustrative. For example, the division into units is merely a logical function division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be indirect couplings or communication connections implemented through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing embodiments are merely intended to describe the technical solutions of this application rather than to limit them. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some technical features thereof; such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (29)

  1. A display method, comprising:
    generating, by a virtual reality device in response to a user operation, a selection instruction for target content;
    identifying, by the virtual reality device, at least one keyword from the target content according to the selection instruction;
    obtaining, by the virtual reality device through matching, a target 3D environment model and at least one piece of target 3D environment data according to the at least one keyword and preset matching rules, wherein the preset matching rules comprise a 3D environment model rule and at least one 3D environment data rule; and
    applying, by the virtual reality device, the at least one piece of target 3D environment data to the target 3D environment model to present a corresponding 3D virtual reality environment.
  2. The method according to claim 1, wherein the at least one 3D environment data rule comprises at least one of a smart home type rule, a material texture rule, a sky sphere rule, a lighting rule, a particle rule, and a background sound rule.
  3. The method according to claim 1 or 2, wherein the at least one keyword comprises a first target keyword, and obtaining the target 3D environment model through matching comprises:
    when the first target keyword matches the 3D environment model rule, obtaining, by the virtual reality device, the target 3D environment model corresponding to the first target keyword.
  4. The method according to claim 2, wherein the at least one 3D environment data rule comprises the material texture rule, the at least one piece of target 3D environment data comprises target material texture data, the at least one keyword comprises a second target keyword, and obtaining the at least one piece of target 3D environment data through matching comprises:
    when the second target keyword matches the material texture rule, obtaining, by the virtual reality device, the target material texture data.
  5. The method according to claim 2, wherein the at least one 3D environment data rule comprises the sky sphere rule, the at least one piece of target 3D environment data comprises target sky sphere material data, the at least one keyword comprises a third target keyword, and obtaining the at least one piece of target 3D environment data through matching comprises:
    when the third target keyword matches the sky sphere rule, obtaining, by the virtual reality device, the target sky sphere material data.
  6. The method according to claim 2, wherein the at least one 3D environment data rule comprises the lighting rule, the at least one piece of target 3D environment data comprises target lighting data, the at least one keyword comprises a fourth target keyword, and obtaining the at least one piece of target 3D environment data through matching comprises:
    when the fourth target keyword matches the lighting rule, obtaining, by the virtual reality device, the target lighting data.
  7. The method according to claim 2, wherein the at least one 3D environment data rule comprises the particle rule, the at least one piece of target 3D environment data comprises target particle data, the at least one keyword comprises a fifth target keyword, and obtaining the at least one piece of target 3D environment data through matching comprises:
    when the fifth target keyword matches the particle rule, obtaining, by the virtual reality device, the target particle parameters.
  8. The method according to claim 2, wherein the at least one 3D environment data rule comprises the background sound rule, the at least one piece of target 3D environment data comprises target audio file data, the at least one keyword comprises a sixth target keyword, and obtaining the at least one piece of target 3D environment data through matching comprises:
    when the sixth target keyword matches the background sound rule, obtaining, by the virtual reality device, the target audio file data.
  9. The method according to claim 8, further comprising:
    sending, by the virtual reality device, a control instruction to the smart home device, wherein the control instruction comprises the target audio file data, and the target audio file data is used by the smart home device for playback.
  10. The method according to claim 8, further comprising:
    performing, by the virtual reality device, playback according to the target audio file data.
  11. The method according to any one of claims 1 to 10, wherein applying the at least one piece of target 3D environment data to the target 3D environment model to present the corresponding 3D virtual reality environment comprises:
    rendering, by the virtual reality device, the 3D virtual reality environment according to the target 3D environment model and the at least one piece of target environment data; and
    displaying, by the virtual reality device, the 3D virtual reality environment.
  12. The method according to any one of claims 1 to 11, further comprising:
    obtaining, by the virtual reality device through matching, operating parameters of a smart home device according to the at least one keyword and the preset matching rules; and
    sending, by the virtual reality device, the operating parameters of the smart home device to a server, wherein the operating parameters are used by the server to control the smart home device to run according to the operating parameters.
  13. The method according to claim 12, wherein the operating parameters comprise at least one of a temperature parameter, a humidity parameter, an air volume parameter, a wind direction parameter, and an odor parameter, wherein
    the temperature parameter is used by the server to control the smart home device to run according to the temperature parameter;
    the humidity parameter is used by the server to control the smart home device to run according to the humidity parameter;
    the air volume parameter and the wind direction parameter are used by the server to control the smart home device to run according to the air volume parameter in the wind direction corresponding to the wind direction parameter; and
    the odor parameter is used by the server to control the smart home device to emit the corresponding odor.
  14. A virtual reality device, comprising:
    a generation module, configured to generate, in response to a user operation, a selection instruction for target content;
    an identification module, configured to identify at least one keyword from the target content according to the selection instruction;
    a matching module, configured to obtain, through matching, a target 3D environment model and at least one piece of target 3D environment data according to the at least one keyword and preset matching rules, wherein the preset matching rules comprise a 3D environment model rule and at least one 3D environment data rule; and
    a display module, configured to apply the at least one piece of target 3D environment data to the target 3D environment model to present a corresponding 3D virtual reality environment.
  15. The virtual reality device according to claim 14, wherein the at least one 3D environment data rule comprises at least one of a smart home type rule, a material texture rule, a sky sphere rule, a lighting rule, a particle rule, and a background sound rule.
  16. The virtual reality device according to claim 14 or 15, wherein the at least one keyword comprises a first target keyword, and
    the matching module is specifically configured to obtain, when the first target keyword matches the 3D environment model rule, the target 3D environment model corresponding to the first target keyword.
  17. The virtual reality device according to claim 15, wherein the at least one 3D environment data rule comprises the material texture rule, the at least one piece of target 3D environment data comprises target material texture data, the at least one keyword comprises a second target keyword, and
    the matching module is specifically configured to obtain the target material texture data when the second target keyword matches the material texture rule.
  18. The virtual reality device according to claim 15, wherein the at least one 3D environment data rule comprises the sky sphere rule, the at least one piece of target 3D environment data comprises target sky sphere material data, the at least one keyword comprises a third target keyword, and
    the matching module is specifically configured to obtain the target sky sphere material data when the third target keyword matches the sky sphere rule.
  19. The virtual reality device according to claim 15, wherein the at least one 3D environment data rule comprises the lighting rule, the at least one piece of target 3D environment data comprises target lighting data, the at least one keyword comprises a fourth target keyword, and
    the matching module is specifically configured to obtain the target lighting data when the fourth target keyword matches the lighting rule.
  20. The virtual reality device according to claim 15, wherein the at least one 3D environment data rule comprises the particle rule, the at least one piece of target 3D environment data comprises target particle data, the at least one keyword comprises a fifth target keyword, and
    the matching module is specifically configured to obtain the target particle parameters when the fifth target keyword matches the particle rule.
  21. The virtual reality device according to claim 15, wherein the at least one 3D environment data rule comprises the background sound rule, the at least one piece of target 3D environment data comprises target audio file data, the at least one keyword comprises a sixth target keyword, and
    the matching module is specifically configured to obtain the target audio file data when the sixth target keyword matches the background sound rule.
  22. The virtual reality device according to claim 21, further comprising:
    a sending module, configured to send a control instruction to the smart home device, wherein the control instruction comprises the target audio file data, and the target audio file data is used by the smart home device for playback.
  23. The virtual reality device according to claim 21, further comprising:
    a playback module, configured to perform playback according to the target audio file data.
  24. The virtual reality device according to any one of claims 14 to 23, wherein
    the display module is specifically configured to render the 3D virtual reality environment according to the target 3D environment model and the at least one piece of target environment data, and to display the 3D virtual reality environment.
  25. The virtual reality device according to any one of claims 14 to 24, wherein
    the matching module is further configured to obtain, through matching, operating parameters of a smart home device according to the at least one keyword and the preset matching rules; and
    a sending module is configured to send the operating parameters of the smart home device to a server, wherein the operating parameters are used by the server to control the smart home device to run according to the operating parameters.
  26. The virtual reality device according to claim 25, wherein the operating parameters comprise at least one of a temperature parameter, a humidity parameter, an air volume parameter, a wind direction parameter, and an odor parameter, wherein
    the temperature parameter is used by the server to control the smart home device to run according to the temperature parameter;
    the humidity parameter is used by the server to control the smart home device to run according to the humidity parameter;
    the air volume parameter and the wind direction parameter are used by the server to control the smart home device to run according to the air volume parameter in the wind direction corresponding to the wind direction parameter; and
    the odor parameter is used by the server to control the smart home device to emit the corresponding odor.
  27. A virtual reality device, comprising:
    a memory, a transceiver, and a processor, wherein the memory, the transceiver, and the processor are connected through a bus;
    the transceiver is configured to communicate with an apparatus other than the virtual reality device;
    the memory is configured to store operation instructions; and
    the processor is configured to invoke the operation instructions to perform the method according to any one of claims 1 to 13.
  28. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 13.
  29. A computer program product containing instructions which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 13.
PCT/CN2019/099271 2018-08-14 2019-08-05 一种显示方法以及虚拟现实设备 WO2020034863A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19850126.4A EP3779647A4 (en) 2018-08-14 2019-08-05 DISPLAY METHOD AND DEVICE OF VIRTUAL REALITY
US17/090,642 US11748950B2 (en) 2018-08-14 2020-11-05 Display method and virtual reality device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810922645.9 2018-08-14
CN201810922645.9A CN109324687B (zh) 2018-08-14 2018-08-14 一种显示方法以及虚拟现实设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/090,642 Continuation US11748950B2 (en) 2018-08-14 2020-11-05 Display method and virtual reality device

Publications (1)

Publication Number Publication Date
WO2020034863A1 true WO2020034863A1 (zh) 2020-02-20

Family

ID=65264128

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/099271 WO2020034863A1 (zh) 2018-08-14 2019-08-05 一种显示方法以及虚拟现实设备

Country Status (4)

Country Link
US (1) US11748950B2 (zh)
EP (1) EP3779647A4 (zh)
CN (1) CN109324687B (zh)
WO (1) WO2020034863A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3876082A1 (en) * 2020-03-04 2021-09-08 Apple Inc. Environment application model
CN114077312A (zh) * 2021-11-15 2022-02-22 浙江力石科技股份有限公司 一种景区虚拟现实展示方法

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109324687B (zh) 2018-08-14 2021-10-01 华为技术有限公司 一种显示方法以及虚拟现实设备
CN110244582A (zh) * 2019-05-05 2019-09-17 浙江乌镇街科技有限公司 一种数字化气味数据关联平台
CN111176503A (zh) * 2019-12-16 2020-05-19 珠海格力电器股份有限公司 一种交互系统设置方法、装置及存储介质
CN113050796A (zh) * 2021-03-25 2021-06-29 海尔(深圳)研发有限责任公司 用于头戴设备的环境调节方法、装置、系统及头戴设备
CN114608167A (zh) * 2022-02-28 2022-06-10 青岛海尔空调器有限总公司 室内环境的智能调节方法与智能调节系统
CN114842701A (zh) * 2022-03-30 2022-08-02 中国人民解放军海军特色医学中心 极地环境训练的控制方法、系统、装置、设备和介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106648096A (zh) * 2016-12-22 2017-05-10 宇龙计算机通信科技(深圳)有限公司 虚拟现实场景互动实现方法、系统以及虚拟现实设备
CN106683201A (zh) * 2016-12-23 2017-05-17 深圳市豆娱科技有限公司 一种基于三维虚拟现实的场景编辑方法和装置
CN107665133A (zh) * 2017-09-04 2018-02-06 北京小鸟看看科技有限公司 头戴显示设备的运行场景的加载方法和头戴显示设备
CN107835403A (zh) * 2017-10-20 2018-03-23 华为技术有限公司 一种以3d视差效果显示的方法及装置
US20180205926A1 (en) * 2017-01-17 2018-07-19 Seiko Epson Corporation Cleaning of Depth Data by Elimination of Artifacts Caused by Shadows and Parallax
CN109324687A (zh) * 2018-08-14 2019-02-12 华为技术有限公司 一种显示方法以及虚拟现实设备

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6404426B1 (en) * 1999-06-11 2002-06-11 Zenimax Media, Inc. Method and system for a computer-rendered three-dimensional mannequin
US8751950B2 (en) * 2004-08-17 2014-06-10 Ice Edge Business Solutions Ltd. Capturing a user's intent in design software
US20080071559A1 (en) * 2006-09-19 2008-03-20 Juha Arrasvuori Augmented reality assisted shopping
US20110234591A1 (en) * 2010-03-26 2011-09-29 Microsoft Corporation Personalized Apparel and Accessories Inventory and Display
US9317133B2 (en) * 2010-10-08 2016-04-19 Nokia Technologies Oy Method and apparatus for generating augmented reality content
US9066200B1 (en) 2012-05-10 2015-06-23 Longsand Limited User-generated content in a virtual reality environment
US9928652B2 (en) * 2013-03-01 2018-03-27 Apple Inc. Registration between actual mobile device position and environmental model
US10002208B2 (en) * 2014-05-13 2018-06-19 Atheer, Inc. Method for interactive catalog for 3D objects within the 2D environment
CN105975066A (zh) * 2016-04-28 2016-09-28 乐视控股(北京)有限公司 基于虚拟现实设备的控制方法及装置
CN108391445B (zh) * 2016-12-24 2021-10-15 华为技术有限公司 一种虚拟现实显示方法及终端
US11132840B2 (en) * 2017-01-16 2021-09-28 Samsung Electronics Co., Ltd Method and device for obtaining real time status and controlling of transmitting devices
EP3358462A1 (en) * 2017-02-06 2018-08-08 Tata Consultancy Services Limited Context based adaptive virtual reality (vr) assistant in vr environments
US10509257B2 (en) 2017-03-29 2019-12-17 Shenzhen China Star Optoelectronics Technology Co., Ltd Display panels, wire grid polarizers, and the manufacturing methods thereof
US10606449B2 (en) * 2017-03-30 2020-03-31 Amazon Technologies, Inc. Adjusting audio or graphical resolutions for data discovery

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106648096A (zh) * 2016-12-22 2017-05-10 宇龙计算机通信科技(深圳)有限公司 虚拟现实场景互动实现方法、系统以及虚拟现实设备
CN106683201A (zh) * 2016-12-23 2017-05-17 深圳市豆娱科技有限公司 一种基于三维虚拟现实的场景编辑方法和装置
US20180205926A1 (en) * 2017-01-17 2018-07-19 Seiko Epson Corporation Cleaning of Depth Data by Elimination of Artifacts Caused by Shadows and Parallax
CN107665133A (zh) * 2017-09-04 2018-02-06 北京小鸟看看科技有限公司 头戴显示设备的运行场景的加载方法和头戴显示设备
CN107835403A (zh) * 2017-10-20 2018-03-23 华为技术有限公司 一种以3d视差效果显示的方法及装置
CN109324687A (zh) * 2018-08-14 2019-02-12 华为技术有限公司 一种显示方法以及虚拟现实设备

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3876082A1 (en) * 2020-03-04 2021-09-08 Apple Inc. Environment application model
US11354867B2 (en) 2020-03-04 2022-06-07 Apple Inc. Environment application model
US11776225B2 (en) 2020-03-04 2023-10-03 Apple Inc. Environment application model
CN114077312A (zh) * 2021-11-15 2022-02-22 浙江力石科技股份有限公司 一种景区虚拟现实展示方法

Also Published As

Publication number Publication date
CN109324687A (zh) 2019-02-12
EP3779647A4 (en) 2021-09-15
CN109324687B (zh) 2021-10-01
US20210056756A1 (en) 2021-02-25
EP3779647A1 (en) 2021-02-17
US11748950B2 (en) 2023-09-05

Similar Documents

Publication Publication Date Title
WO2020034863A1 (zh) 一种显示方法以及虚拟现实设备
CN111316334B (zh) 用于动态地改变虚拟现实环境的设备和方法
JP6967043B2 (ja) 3次元コンテンツ内の場所に基づく仮想要素モダリティ
US11494993B2 (en) System and method to integrate content in real time into a dynamic real-time 3-dimensional scene
WO2017177766A1 (zh) 虚拟现实设备的控制方法、装置及虚拟现实设备、系统
US20170206708A1 (en) Generating a virtual reality environment for displaying content
TWI718426B (zh) 呈現一擴增實境界面之裝置、系統及方法
US10880595B2 (en) Method and apparatus for adjusting virtual reality scene, and storage medium
CN106659937A (zh) 用户生成的动态虚拟世界
US20210409615A1 (en) Skeletal tracking for real-time virtual effects
JP7379603B2 (ja) 推奨を配信する方法、デバイス、及びシステム
CN108846886A (zh) 一种ar表情的生成方法、客户端、终端和存储介质
CN114245099B (zh) 视频生成方法、装置、电子设备以及存储介质
Jalal et al. IoT architecture for multisensorial media
US20200349976A1 (en) Movies with user defined alternate endings
CN107204026A (zh) 一种用于显示动画的方法和装置
WO2023142415A1 (zh) 社交互动方法、装置、设备及存储介质、程序产品
WO2019124850A1 (ko) 사물 의인화 및 인터랙션을 위한 방법 및 시스템
US20220254082A1 (en) Method of character animation based on extraction of triggers from an av stream
US20230156300A1 (en) Methods and systems for modifying content
US11684852B2 (en) Create and remaster computer simulation skyboxes
US11511190B2 (en) Merge computer simulation sky box with game world
US20240071008A1 (en) Generating immersive augmented reality experiences from existing images and videos
US12039793B2 (en) Automatic artificial reality world creation
US20240179291A1 (en) Generating 3d video using 2d images and audio with background keyed to 2d image-derived metadata

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19850126

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019850126

Country of ref document: EP

Effective date: 20201027

NENP Non-entry into the national phase

Ref country code: DE