WO2018082626A1 - Implementation Method of Virtual Reality System and Virtual Reality Device - Google Patents

Implementation method of virtual reality system and virtual reality device

Info

Publication number
WO2018082626A1
WO2018082626A1 (international application PCT/CN2017/109174, filed as CN 2017109174 W)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
module
matching
user
virtual assistant
Prior art date
Application number
PCT/CN2017/109174
Other languages
English (en)
French (fr)
Inventor
刘哲
Original Assignee
惠州TCL移动通信有限公司 (Huizhou TCL Mobile Communication Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou TCL Mobile Communication Co., Ltd. (惠州TCL移动通信有限公司)
Publication of WO2018082626A1
Priority to US16/286,650 (published as US20190187782A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • G06F9/453Help systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • H04M3/42136Administration or customisation of services
    • H04M3/42153Administration or customisation of services by subscriber
    • H04M3/42161Administration or customisation of services by subscriber via computer interface
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2201/00Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/42Graphical user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • H04M3/42365Presence services providing information on the willingness to communicate or the ability to communicate in terms of media capability or network connectivity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/42Systems providing special services or facilities to subscribers
    • H04M3/50Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
    • H04M3/51Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M3/5183Call or contact centers with computer-telephony arrangements
    • H04M3/5191Call or contact centers with computer-telephony arrangements interacting with the Internet

Definitions

  • the present invention relates to the field of virtual reality technology, and in particular to a method for implementing a virtual reality system and a virtual reality device.
  • VR (Virtual Reality)
  • the present invention provides a method for implementing a virtual reality system and a virtual reality device to solve the technical problem that the virtual reality application does not have an intelligent assistant in the prior art.
  • a technical solution proposed by the present invention is to provide a virtual reality device, where the virtual reality device includes:
  • a generating module configured to generate a stereo interaction scenario and generate a stereo virtual assistant in the stereo interaction scenario
  • An acquisition identification module for identifying the collected user input as computer identifiable data
  • a matching module configured to match the computer identifiable data and return matching response data
  • a conversion module configured to convert the response data into at least one of a voice signal, a tactile feedback vibration signal, and a visual form signal
  • An output module configured to output at least one of the voice signal, the tactile feedback vibration signal, and the visual form signal in an image of the stereo virtual assistant
  • the matching module includes an analysis module and a result module, where the analysis module is configured to analyze and match the computer identifiable data identified by the acquisition identification module, and the result module is configured to feed back the result of the analysis and matching performed by the analysis module.
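The five-module structure enumerated above can be sketched as a simple processing pipeline. This is only an illustration of the data flow the patent describes; every class, method, and response string here is a hypothetical stand-in, not from the specification.

```python
# Minimal sketch of the generate -> acquire/identify -> match -> convert
# -> output pipeline. All names and data are illustrative assumptions.

class VirtualRealityDevice:
    def generate(self):
        # Generating module: create a stereo interaction scenario
        # containing a stereo virtual assistant.
        return {"scene": "stereo", "assistant": "3D avatar"}

    def acquire_and_identify(self, user_input):
        # Acquisition/identification module: turn raw voice, gesture,
        # or key input into computer identifiable data.
        return {"kind": user_input["kind"], "text": user_input["raw"].lower()}

    def match(self, data):
        # Matching module: analyze the data and return matching response data.
        responses = {"hello": "Hello! How can I help you?"}
        return responses.get(data["text"], "Sorry, I did not understand.")

    def convert(self, response):
        # Conversion module: turn response data into voice, haptic,
        # and visual form signals.
        return {"voice": response, "haptic": "short-vibrate", "visual": "smile"}

    def output(self, signals):
        # Output module: present the signals in the assistant's image.
        return f"[assistant says] {signals['voice']} ({signals['visual']})"


device = VirtualRealityDevice()
data = device.acquire_and_identify({"kind": "voice", "raw": "Hello"})
print(device.output(device.convert(device.match(data))))
```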
  • Another technical solution of the present invention provides a virtual reality device, where the virtual reality device includes:
  • a processor, and an earpiece, a camera, a button handle, a speaker, a display, a vibration motor, and a memory coupled to the processor;
  • the earpiece is configured to collect a voice input signal of the user;
  • the camera is configured to collect a gesture input signal of the user;
  • the button handle is configured to collect a key input signal of the user;
  • the speaker is configured to play a voice signal for the stereo virtual assistant;
  • the display is configured to display a visual form signal for the stereo virtual assistant;
  • the vibration motor is configured to output a tactile feedback vibration signal for the stereo virtual assistant;
  • the memory is configured to store form data of the stereo virtual assistant and an input signal and an associated identification signal, a matching signal, and a conversion signal collected by the processor;
  • the processor is configured to collect, for the stereo virtual assistant, the voice, gesture, and key input signals of the user, identify the input signals as identification signals recognizable by the processor, match them against the matching signals in the memory, convert the result into a conversion signal recognizable by the user, and output the converted signal via the stereo virtual assistant.
  • the invention also provides a technical solution: providing a method for implementing a virtual reality system, the method comprising:
  • generating a stereo interaction scenario and generating a stereo virtual assistant in the stereo interaction scenario; identifying the collected user input as computer identifiable data; matching the computer identifiable data and returning matching response data; converting the response data into at least one of a voice signal, a tactile feedback vibration signal, and a visual form signal; and outputting at least one of the voice signal, the tactile feedback vibration signal, and the visual form signal in the image of the stereo virtual assistant.
  • the beneficial effects of the present invention are: different from the prior art, the implementation method of the virtual reality system provided by the present invention is provided with a stereo virtual assistant, which recognizes, matches, and converts the collected user input, so that the stereo virtual assistant can output intelligent services whose visual, auditory, and tactile aspects all meet the user's needs, giving the user a humanized resonance and enhancing the enjoyment and quality of the user experience.
  • FIG. 1 is a schematic flowchart of a first implementation manner of a method for implementing a virtual reality system provided by the present invention
  • FIG. 2 is a schematic flowchart of a second implementation manner of a method for implementing a virtual reality system provided by the present invention
  • FIG. 3 is a schematic flowchart of a third implementation manner of a method for implementing a virtual reality system provided by the present invention.
  • FIG. 4 is a schematic flowchart of a fourth implementation manner of a method for implementing a virtual reality system provided by the present invention.
  • FIG. 5 is a schematic flowchart diagram of a fifth implementation manner of a method for implementing a virtual reality system according to the present invention.
  • FIG. 6 is a schematic flowchart of a sixth implementation manner of a method for implementing a virtual reality system provided by the present invention.
  • FIG. 7 is a schematic structural diagram of a first embodiment of a virtual reality device provided by the present invention.
  • FIG. 8 is a schematic structural diagram of another embodiment of a virtual reality device provided by the present invention.
  • FIG. 9 is a schematic structural diagram of a first embodiment of a virtual reality system provided by the present invention.
  • a first implementation manner of a method for implementing a virtual reality system includes the following steps:
  • S101 Generate a stereo interaction scenario and generate a stereo virtual assistant in the stereo interaction scenario.
  • the shape of the stereo virtual assistant can be a 3D simulated human figure, which can simulate animations with realistic interactions such as blinking, gazing, and nodding, and has rich expressions and emotional elements such as joy, anger, and sorrow, showing smiles, sadness, anger, and so on.
  • the shape of the stereo virtual assistant can be customized for specific products and applications, giving the stereo virtual assistant a highly distinctive identity.
  • S102 Identify the collected user input as computer identifiable data.
  • the stereo virtual assistant can collect user input in the stereo interaction scenario; the user input information includes but is not limited to the user's voice, button operations, and gesture operations, and the stereo virtual assistant can recognize the collected user input as computer identifiable data, that is, perform information conversion.
  • S103 Match the computer identifiable data and return matching response data.
  • the stereo virtual assistant can analyze the user's input information, that is, analyze the computer identifiable data, classify the input information, and process and respond to the basic information, that is, return matching response data.
  • S104 Convert the response data into at least one of a voice signal, a haptic feedback vibration signal, and a visual modal signal.
  • the computer-recognizable response data is converted into at least one of a voice signal, a tactile feedback vibration signal, and a visual form signal that the user can perceive.
  • S105 Output at least one of a voice signal, a tactile feedback vibration signal, and a visual form signal in an image of the stereo virtual assistant.
  • the form of the stereo virtual assistant is a 3D simulated person or a 3D simulated cartoon character with intuitive help and guidance functions, which reduces communication barriers for users and saves costs.
  • the implementation method of the virtual reality system provided by the invention is provided with a stereo virtual assistant, which recognizes, matches, and converts the collected user input, so that the stereo virtual assistant can output intelligent services whose visual, auditory, and tactile aspects all meet the user's needs, giving the user a humanized resonance and enhancing the user experience.
  • the step S103 is specifically: performing context matching on the computer identifiable data and returning the matched response data.
  • the stereo virtual assistant has an emotional chat function.
  • the stereo virtual assistant can understand the contextual meaning of the user's speech and perform context analysis on the computer identifiable data, thereby returning matching response data, i.e., the content or answer the user wants.
  • the stereo virtual assistant of the embodiment has a context-aware function, and can continuously understand the content continuously interacting with the user.
  • the stereo virtual assistant can be regarded as a smart virtual sprite, an assistant that can give the user the most timely emotional and companionship functions.
  • Step S103 may further be: sending the computer identifiable data to the remote server, so that the remote server performs a matching search in the webpage information or the expert system according to the computer identifiable data, and generates response data according to the search result.
  • the stereo virtual assistant can send the computer identifiable data to the remote server, and the remote server performs the matching search in the webpage information or the expert system according to the computer identifiable data. And generate response data based on the search results.
  • the stereo virtual assistant stores the response data obtained from the remote server each time, so that the next time the same user or another user asks again, it can quickly and accurately provide relevant help and guidance.
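The store-and-reuse behaviour described above amounts to caching remote results. The sketch below illustrates the idea under stated assumptions: the remote lookup is a stand-in callable, and the class and attribute names are hypothetical.

```python
# Sketch of the described caching: responses fetched from the remote
# server are stored locally so a repeated question is answered without
# contacting the server again. All names here are illustrative.

class CachedAssistant:
    def __init__(self, remote_lookup):
        self._remote_lookup = remote_lookup  # e.g. web/expert-system search
        self._cache = {}
        self.remote_calls = 0

    def ask(self, question):
        if question in self._cache:          # answered before: reuse it
            return self._cache[question]
        self.remote_calls += 1
        answer = self._remote_lookup(question)
        self._cache[question] = answer       # store for the next user
        return answer


assistant = CachedAssistant(lambda q: f"answer to: {q}")
assistant.ask("How do I reset the headset?")
assistant.ask("How do I reset the headset?")
print(assistant.remote_calls)  # the server was contacted only once
```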
  • a second implementation manner of the method for implementing the virtual reality system of the present invention includes the following steps:
  • S201 Generate a stereo interaction scenario and generate a stereo virtual assistant in the stereo interaction scenario.
  • S202 Identify the collected user input as computer identifiable data.
  • S203 Perform emotional analysis on the user according to the user input and/or the computer identifiable data.
  • the stereoscopic virtual assistant can analyze the emotion of the user according to user input and/or computer identifiable data, specifically analyzing the user's emotion according to the tone, the speech rate, the gesture action, and the text information of the computer identifiable data input by the user.
  • User emotions include happiness, pride, hope, relaxation, anger, anxiety, shame, disappointment, boredom, and so on.
  • S204 The stereo virtual assistant matches the analyzed user emotion and returns matching response data.
  • the stereo virtual assistant can simulate animations with realistic interactions such as blinking, gazing, and nodding, showing smiles, sadness, anger, and other realistic emotional expression animations, which give the user emotional resonance; for example, when the user's emotion is happy, it can feed back a fast-rate voice signal and a smiling expression animation; when the user's emotion is anxious, it can feed back a slow-rate voice signal and a sad expression animation, and so on.
  • S205 Convert the response data into at least one of a voice signal, a tactile feedback vibration signal, and a visual form signal.
  • S206 Output at least one of a voice signal, a tactile feedback vibration signal, and a visual form signal in an image of a stereo virtual assistant.
  • the three-dimensional virtual assistant of the embodiment can analyze the user's emotions and interact with the user, giving the user a sense of companionship of the friends, allowing the user to relieve the emotional troubles in time, and enhancing the willingness and fun of the user's communication.
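The emotion-matching idea above (analyze emotion from tone, speech rate, gestures, or text, then pick a matching feedback style) can be illustrated with a toy classifier. The word lists, the speech-rate threshold, and the response styles are all assumptions for illustration, not the patent's actual analysis method.

```python
# Illustrative sketch: estimate the user's emotion from simple features
# (speech rate, keywords) and choose a matching response style, as in
# the embodiment above. Thresholds and word lists are assumptions.

HAPPY_WORDS = {"great", "awesome", "happy"}
ANXIOUS_WORDS = {"worried", "hurry", "anxious"}

def analyze_emotion(text, words_per_minute):
    tokens = set(text.lower().split())
    if tokens & ANXIOUS_WORDS or words_per_minute > 180:
        return "anxious"
    if tokens & HAPPY_WORDS:
        return "happy"
    return "neutral"

def response_style(emotion):
    # Match the analyzed emotion to a feedback style: fast speech and a
    # smile for a happy user, slow calm speech for an anxious one.
    return {
        "happy": {"speech_rate": "fast", "expression": "smile"},
        "anxious": {"speech_rate": "slow", "expression": "soothing"},
        "neutral": {"speech_rate": "normal", "expression": "neutral"},
    }[emotion]

print(response_style(analyze_emotion("I am so happy today", 120)))
```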
  • FIG. 3 is a schematic flowchart of a third implementation manner of a method for implementing a virtual reality system according to the present invention.
This embodiment has the same basic procedure as the second embodiment; the difference lies in two steps: step S203 is replaced by step S303, and step S204 is replaced by step S304.
  • S303 Acquire user preferences and/or personal data by learning computer identifiable data.
  • the stereo virtual assistant can acquire user preferences and/or personal data by learning the computer identifiable data, including but not limited to the user's age, gender, height, weight, occupation, hobbies, and beliefs, so as to intelligently recommend relevant service content; it can also make smart recommendations based on geographic location information: the virtual assistant will provide suggestions and information alerts matching the location based on the user's country, region, and work and living location information, such as local traffic conditions.
  • the stereo virtual assistant learns to obtain user preferences and/or personal data, thereby actively recommending content, matching the content and returning matching response data.
  • the stereo virtual assistant of this embodiment can learn user preferences and/or personal data, can more accurately understand and predict the user's needs, and can provide users with better service, so that suitable content can be intelligently recommended, enriching the user's spare time, expanding the user's knowledge, and enhancing the user experience.
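One simple way to realize the preference learning sketched above is to tally the topics a user interacts with and recommend content for the most frequent one. This is a minimal illustration under that assumption; the topic labels and class names are hypothetical.

```python
# Minimal sketch of preference learning: count interaction topics and
# actively recommend content for the strongest preference.

from collections import Counter

class PreferenceLearner:
    def __init__(self):
        self._topic_counts = Counter()

    def observe(self, topic):
        # Learn from each interaction (e.g. a topic derived from the
        # identified user input).
        self._topic_counts[topic] += 1

    def recommend(self):
        # Actively recommend content matching the most frequent topic.
        if not self._topic_counts:
            return None
        topic, _ = self._topic_counts.most_common(1)[0]
        return f"recommended content about {topic}"


learner = PreferenceLearner()
for t in ["sports", "music", "sports"]:
    learner.observe(t)
print(learner.recommend())
```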
  • FIG. 4 is a schematic flowchart of a fourth implementation manner of a method for implementing a virtual reality system according to the present invention.
  • This embodiment is the same as the basic procedure of the first embodiment, except that the following three steps are added: Step S406, Step S407 and step S408.
  • S406 Generate recommended content according to a current process of an application run by the virtual reality system.
  • the stereo virtual assistant can generate recommended content according to the different applications run by the virtual reality system, and can also generate recommended content according to the current process of the application run by the virtual reality system, so that real-time recommended content is generated.
  • the stereo virtual assistant can give help and guidance in advance according to the difficulty or the doubt point.
  • S408 Change the image output of the stereo virtual assistant based on the response data and/or present the recommended content in the stereo interaction scenario.
  • the recommended content may be presented by the stereo virtual assistant or directly in the stereo interaction scenario.
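The process-dependent recommendation above (give help in advance at known difficulty points) can be sketched as a lookup from the application's current stage to a hint. The application names, stages, and hints below are invented for illustration only.

```python
# Sketch: recommended content depends on the current process (stage) of
# the running application, so help can be offered at difficulty points.
# All application names and hints are illustrative assumptions.

HINTS = {
    ("puzzle_game", "level_3"): "Hint: the key is hidden behind the bookshelf.",
    ("puzzle_game", "level_4"): "Hint: combine the rope with the hook.",
}

def recommend(app, stage):
    # Return guidance for a known difficulty point, or nothing.
    return HINTS.get((app, stage), "")


print(recommend("puzzle_game", "level_3"))
```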
  • FIG. 5 is a schematic flowchart of a fifth implementation manner of a method for implementing a virtual reality system according to the present invention.
  • This embodiment is the same as the basic procedure of the first embodiment, except that the following two steps are added: step S506 and Step S507.
  • S506 Acquire a current state of the controlled system interconnected with the virtual reality system.
  • the virtual reality system can also be associated with other devices outside the system, for example, smart phones, smart cars, and smart homes; these other devices can also be referred to as controlled systems, and the stereo virtual assistant can obtain the current state of a controlled system interconnected with the virtual reality system.
  • the stereo virtual assistant can periodically and at regular intervals return matched response data about the current state of the controlled system to the user, so that the user can understand the current state of the controlled system.
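The state-reporting step above can be illustrated by polling each interconnected controlled system for its current state and assembling a report. The device names, states, and the callable interface are assumptions for the sketch; a real system would query over a network.

```python
# Sketch: read the current state of each interconnected controlled
# system and return a matching status report. Names are illustrative.

def report_states(controlled_systems):
    # controlled_systems maps a device name to a callable that returns
    # its current state (stand-in for a real network query).
    return {name: get_state() for name, get_state in controlled_systems.items()}


systems = {
    "smart_home": lambda: "lights off",
    "smart_car": lambda: "parked, battery 80%",
}
print(report_states(systems))
```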
  • FIG. 6 is a schematic flowchart of a sixth implementation manner of a method for implementing a virtual reality system according to the present invention.
the basic implementation process is the same as that of the fifth embodiment, except that step S507 is replaced by steps S607, S608, and S609.
  • S607 Perform corresponding operations on the controlled system based on the current state and the user input and/or the processing rule preset by the user.
  • the stereo virtual assistant first presets rules for processing the controlled system, and then the stereo virtual assistant performs corresponding operations on the controlled system based on the current state of the controlled system and the user input and/or processing rules preset by the user.
  • S609 Change the image output of the stereo virtual assistant based on the response data and/or present the operation result in the stereo interaction scenario.
  • Taking a mobile terminal as the controlled system for an application example, assume the current state of the mobile terminal is an incoming-call state, and the preset processing rule is to hang up or answer the call. For example, when the user is playing a virtual reality game and the mobile terminal receives an incoming call or a notification message, the stereo virtual assistant will intelligently identify the importance of the incoming call or notification message and then classify it: if it is a very urgent call, the user will be notified by a floating call notification or the call will be answered directly, alerting the user by vibrating or pausing the game; otherwise it will automatically hang up and reply to the call with a text message, such as "I am using a VR device and will contact you later".
  • the corresponding operations of the stereo virtual assistant include a hanging operation or an answering operation.
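The classify-then-act logic of this application example can be sketched as follows. How urgency is actually identified is not specified in the text, so the urgent-contacts list here is an explicit assumption; only the auto-reply message is taken from the example above.

```python
# Sketch of the incoming-call handling: classify the call's urgency,
# then either alert the user or hang up with an automatic text reply.
# The urgent-contacts list is an assumption for illustration.

URGENT_CONTACTS = {"boss", "family"}
AUTO_REPLY = "I am using a VR device and will contact you later."

def handle_incoming_call(caller):
    if caller in URGENT_CONTACTS:
        # Very urgent: notify the user (e.g. floating notification,
        # vibration, or pausing the game) so the call can be taken.
        return ("notify_user", None)
    # Otherwise hang up automatically and reply with a text message.
    return ("hang_up", AUTO_REPLY)


print(handle_incoming_call("boss"))
print(handle_incoming_call("unknown"))
```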
  • FIG. 7 is a schematic structural diagram of a first embodiment of a virtual reality device according to the present invention.
  • the virtual reality device 100 includes a generation module 110 , an acquisition identification module 120 , a matching module 130 , a conversion module 140 , and an output module 150 .
  • the generating module 110 is configured to generate a stereo interaction scenario and generate a stereo virtual assistant in the stereo interaction scenario;
  • the collection identification module 120 is configured to identify the collected user input as computer identifiable data; and the matching module 130 is configured to match the computer identifiable data and return matching response data;
  • the conversion module 140 is configured to convert the response data into at least one of a voice signal, a haptic feedback vibration signal, and a visual modal signal;
  • the output module 150 is configured to output at least one of a voice signal, a tactile feedback vibration signal, and a visual form signal in the image of the stereo virtual assistant.
  • the generating module 110 is configured to generate a stereo interaction scenario and generate a stereo virtual assistant in the stereo interaction scenario.
  • the virtual reality scenario is a 360-degree panoramic real 3D interactive environment
  • the stereo virtual assistant can be designed as a 3D dynamic sprite, character, or cartoon figure, which can interact with users in a variety of virtual scenes through stereoscopic forms and motion animations.
  • the collection and recognition module 120 is configured to collect the user's input via the stereo virtual assistant generated by the generating module 110. The user's input information includes but is not limited to the user's voice, button operations, and gesture operations; the collection and recognition module also identifies the collected user input as computer identifiable data.
  • the collection and identification module 120 can also be divided into an acquisition module 121 and an identification module 122.
  • the collection module 121 is configured to collect input information of the user
  • the identification module 122 is configured to identify the input information collected by the collection module 121 into computer identifiable data.
  • the collection module 121 is further divided into a voice collection module 1211, a gesture collection module 1212, and a button collection module 1213.
  • the voice collection module 1211 is configured to collect voice input signals of the user
  • the gesture collection module 1212 is configured to collect gesture input of the user.
  • the button collection module 1213 is configured to collect the user's key input signals.
  • the recognition module 122 is further divided into a voice recognition module 1221, a gesture recognition module 1222, and a button recognition module 1223, where the voice recognition module 1221 is configured to identify the input information collected by the voice collection module 1211 as computer identifiable data, and the gesture recognition module 1222 and the button recognition module 1223 do likewise for the gesture and key inputs.
  • the matching module 130 can also be divided into an analysis module 131 and a result module 132.
  • the analysis module 131 is configured to analyze and match the computer identifiable data identified by the identification module 122, and the result module 132 is configured to feed back the analysis and matching results of the analysis module 131, that is, to return the matching response data.
  • the matching module 130 may further include a self-learning module 133 for learning and memorizing the user's usage habits, and may provide targeted reference suggestions when the analysis module 131 performs analysis and matching.
  • the conversion module 140 is configured to convert the matched response data returned by the matching module 130 into at least one of a voice signal, a haptic feedback vibration signal, and a visual modal signal.
  • the output module 150 is configured to output the signal converted by the conversion module 140 in the image of the stereo virtual assistant.
  • the output module 150 can also be divided into a voice output module 151, a haptic output module 152, and a visual output module 153.
  • the voice output module 151 is configured to output a signal converted by the conversion module 140 into a voice signal of a stereo virtual assistant, such as a voice broadcast.
  • the haptic output module 152 is configured to output the signal converted by the conversion module 140 as a tactile feedback vibration signal in the image of the stereo virtual assistant, such as a tremor; the visual output module 153 outputs the signal converted by the conversion module 140 as a visual form signal of the stereo virtual assistant, such as animations, expressions, and colors.
  • the modules of the virtual reality device 100 can perform the corresponding steps in the foregoing method embodiments and are therefore not described in detail here; for details, refer to the description of the corresponding steps.
  • FIG. 8 is a schematic structural diagram of another embodiment of a virtual reality device according to the present invention.
  • the virtual reality device 200 includes a processor 210, an earpiece 220 coupled to the processor 210, a camera 230, a button handle 240, a speaker 250, a display 260, a vibration motor 270, and a memory 280.
  • the earpiece 220 is configured to collect a voice input signal of the user; the camera 230 is configured to collect a gesture input signal of the user; and the button handle 240 is configured to collect a key input signal of the user.
  • the speaker 250 is used to play a voice signal for the stereo virtual assistant; the display 260 is used to display a visual form signal for the stereo virtual assistant; and the vibration motor 270 is used to output a tactile feedback vibration signal for the stereo virtual assistant.
  • the memory 280 is configured to store the morphological data of the stereoscopic virtual assistant and the input signals collected by the processor 210 and the identification signals, matching signals, and conversion signals associated with the processor 210, and the like.
  • the processor 210 is configured to collect the user's voice, gesture, and key input signals for the stereo virtual assistant, identify the input signals as identification signals recognizable by the processor, match them against the matching signals in the memory 280, convert the result into a conversion signal recognizable by the user, and finally output it via the stereo virtual assistant.
  • the processor 210 is configured to execute the steps of any one of the first to sixth embodiments of the implementation method of the virtual reality system shown in FIG. 1 to FIG. 6.
  • FIG. 9 is a schematic structural diagram of a first embodiment of a virtual reality system provided by the present invention.
  • the virtual reality system 10 includes a remote server 20 and the virtual reality device 100 described above.
  • the structure of the virtual reality device 100 is described above, and details are not described herein again.
  • the remote server 20 specifically includes a processing module 21, a search module 22, and an expert module 23.
  • the three modules of the processing module 21, the search module 22, and the expert module 23 are connected to each other and cooperate with each other.
  • the processing module 21 is coupled to the matching module 130 of the virtual reality device 100 for processing the information transmitted by the matching module 130 and feeding back the processing result.
  • the processing module 21 transmits the information to the search module 22 and, using knowledge computing technology, filters, reorganizes, and recalculates the knowledge searched by the search module 22; using question-and-answer recommendation technology, highly localized information can be recommended more accurately according to the user's region and personal preference information.
  • the search module 22 is configured to search the information provided by the processing module 21 and feed back the search result.
  • the search module 22 uses web-search technology and knowledge-search technology to perform matching searches in existing web-page information and in the information stored by the expert module 23.
  • the expert module 23 is used for storing structured knowledge, including but not limited to expert-suggestion data with a high degree of human participation, for reference by the processing module 21 and the search module 22; meanwhile, the expert module 23 also has a predictive function, part of which can prepare answers to users' questions before the users even know they need help.
  • because the implementation method of the virtual reality system provided by the present invention is provided with a stereoscopic virtual assistant that collects the user's input and then performs recognition, matching, and conversion, the stereoscopic virtual assistant can output intelligent services whose visual, auditory, and tactile aspects all meet the user's needs, giving the user a humanized resonance, increasing the fun of the user experience, and improving the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method of implementing a virtual reality system and a virtual reality device. The method includes the steps of: generating a stereoscopic interactive scene and generating a stereoscopic virtual assistant in the stereoscopic interactive scene (S101); recognizing collected user input as computer-recognizable data (S102); matching the computer-recognizable data and returning matched response data (S103); converting the response data into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal (S104); and outputting, in the image of the stereoscopic virtual assistant, at least one of the voice signal, the haptic-feedback vibration signal, and the visual morphology signal (S105). Because the method provides a stereoscopic virtual assistant that collects the user's input and then performs recognition, matching, and conversion, the stereoscopic virtual assistant can output intelligent services whose visual, auditory, and tactile aspects all meet the user's needs, giving the user a humanized resonance, increasing the fun of the user experience, and improving the user experience.

Description

Implementation Method of a Virtual Reality System and Virtual Reality Device
[Technical Field]
The present invention relates to the technical field of virtual reality, and in particular to an implementation method of a virtual reality system and a virtual reality device.
[Background Art]
With the popularity of virtual reality devices, more and more users are spending increasing amounts of time in virtual reality (VR) games and applications, but none of the current virtual reality applications provides an intelligent assistant that offers timely help when the user needs it.
[Summary of the Invention]
The present invention provides an implementation method of a virtual reality system and a virtual reality device, to solve the technical problem in the prior art that virtual reality applications lack an intelligent assistant.
To solve the above technical problem, one technical solution proposed by the present invention is to provide a virtual reality device, the device including:
a generation module, configured to generate a stereoscopic interactive scene and generate a stereoscopic virtual assistant in the stereoscopic interactive scene;
a collection and recognition module, configured to recognize collected user input as computer-recognizable data;
a matching module, configured to match the computer-recognizable data and return matched response data;
a conversion module, configured to convert the response data into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal;
an output module, configured to output, in the image of the stereoscopic virtual assistant, at least one of the voice signal, the haptic-feedback vibration signal, and the visual morphology signal;
wherein the matching module includes an analysis module and a result module, the analysis module being configured to analyze and match the computer-recognizable data recognized by the collection and recognition module, and the result module being configured to feed back the analysis and matching result of the analysis module.
Another technical solution proposed by the present invention is to provide a virtual reality device, the device including:
a processor, and an earpiece, a camera, a key handle, a speaker, a display, a vibration motor, and a memory coupled to the processor;
wherein the earpiece is configured to collect the user's voice input signal; the camera is configured to collect the user's gesture input signal; the key handle is configured to collect the user's key input signal;
the speaker is configured to play a voice signal for the stereoscopic virtual assistant; the display is configured to display a visual morphology signal for the stereoscopic virtual assistant; the vibration motor is configured to output a haptic-feedback vibration signal for the stereoscopic virtual assistant;
the memory is configured to store the morphology data of the stereoscopic virtual assistant, the input signals collected by the processor, and the associated identification signals, matching signals, and conversion signals;
the processor is configured to collect the user's voice, gesture, and key input signals for the stereoscopic virtual assistant, recognize the input signals as identification signals recognizable by the processor, match them against the matching signals in the memory, and then convert them into conversion signals recognizable by the user, the converted user-recognizable conversion signals being output by the stereoscopic virtual assistant.
Still another technical solution proposed by the present invention is to provide an implementation method of a virtual reality system, the method including:
generating a stereoscopic interactive scene and generating a stereoscopic virtual assistant in the stereoscopic interactive scene;
recognizing collected user input as computer-recognizable data;
matching the computer-recognizable data and returning matched response data;
converting the response data into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal;
outputting, in the image of the stereoscopic virtual assistant, at least one of the voice signal, the haptic-feedback vibration signal, and the visual morphology signal.
The beneficial effects of the present invention are as follows. In contrast to the prior art, the implementation method of the virtual reality system provided by the present invention is provided with a stereoscopic virtual assistant that collects the user's input and then performs recognition, matching, and conversion, so that the stereoscopic virtual assistant can output intelligent services whose visual, auditory, and tactile aspects all meet the user's needs, giving the user a humanized resonance, increasing the fun of the user experience, and improving the user experience.
[Brief Description of the Drawings]
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may obtain other drawings from them without creative effort, wherein:
FIG. 1 is a schematic flowchart of the first embodiment of the implementation method of the virtual reality system provided by the present invention;
FIG. 2 is a schematic flowchart of the second embodiment of the implementation method of the virtual reality system provided by the present invention;
FIG. 3 is a schematic flowchart of the third embodiment of the implementation method of the virtual reality system provided by the present invention;
FIG. 4 is a schematic flowchart of the fourth embodiment of the implementation method of the virtual reality system provided by the present invention;
FIG. 5 is a schematic flowchart of the fifth embodiment of the implementation method of the virtual reality system provided by the present invention;
FIG. 6 is a schematic flowchart of the sixth embodiment of the implementation method of the virtual reality system provided by the present invention;
FIG. 7 is a schematic structural diagram of the first embodiment of the virtual reality device provided by the present invention;
FIG. 8 is a schematic structural diagram of another embodiment of the virtual reality device provided by the present invention;
FIG. 9 is a schematic structural diagram of the first embodiment of the virtual reality system provided by the present invention.
[Detailed Description of the Embodiments]
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Referring to FIG. 1, the first embodiment of the implementation method of the virtual reality system of the present invention includes the following steps:
S101: Generate a stereoscopic interactive scene and generate a stereoscopic virtual assistant in the stereoscopic interactive scene.
When the user experiences a virtual reality device, the user enters the stereoscopic interactive scene; the present invention further generates a stereoscopic virtual assistant within that scene. The form of the stereoscopic virtual assistant may be a 3D simulated human figure that can perform animations with a sense of real interaction, such as blinking, gazing, and nodding, possesses rich expressions and emotional elements such as joy, anger, and sorrow, and presents expression animations with real emotion such as smiling, sadness, and anger, giving the user a humanized resonance and a more realistic feeling; it may also be a 3D simulated cartoon figure, such as Garfield or Pikachu. In other embodiments, the form of the stereoscopic virtual assistant may be customized for the specific product and application scenario, making the assistant highly recognizable.
S102: Recognize the collected user input as computer-recognizable data.
In the stereoscopic interactive scene, the stereoscopic virtual assistant can collect the user's input, which includes but is not limited to the user's voice, key operations, and gesture operations; at the same time, the assistant recognizes the collected user input as computer-recognizable data, i.e., performs information conversion.
S103: Match the computer-recognizable data and return matched response data.
The stereoscopic virtual assistant can analyze the user's input information, i.e., analyze the computer-recognizable data, classify the input information, and quickly process and answer basic information, i.e., return matched response data.
S104: Convert the response data into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal.
Specifically, the computer-recognizable response data is converted into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal that the user can receive.
S105: Output, in the image of the stereoscopic virtual assistant, at least one of the voice signal, the haptic-feedback vibration signal, and the visual morphology signal.
The form of the stereoscopic virtual assistant is a 3D simulated human figure or a 3D simulated cartoon figure, which provides intuitive help and guidance, reducing the user's communication barriers and saving cost.
Because the implementation method of the virtual reality system provided by the present invention is provided with a stereoscopic virtual assistant that collects the user's input and then performs recognition, matching, and conversion, the stereoscopic virtual assistant can output intelligent services whose visual, auditory, and tactile aspects all meet the user's needs, giving the user a humanized resonance, increasing the fun of the user experience, and improving the user experience.
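The five-step flow S101–S105 described above can be sketched as a small pipeline. All function names, the dictionary knowledge base, and the keyword matching below are illustrative assumptions for exposition only, not the patented implementation.

```python
# Minimal sketch of the S101-S105 flow; the knowledge base and the exact
# matching logic are hypothetical stand-ins.

def recognize(user_input):
    """S102: recognize raw user input as computer-recognizable data."""
    return user_input.strip().lower()

def match(data, knowledge):
    """S103: match the recognized data and return matched response data."""
    return knowledge.get(data, "Let me look that up for you.")

def convert(response):
    """S104: convert response data into voice/haptic/visual signals."""
    return {"voice": response, "haptic": "gentle_buzz", "visual": "smile"}

def assistant_respond(user_input, knowledge):
    """S105: the signals the stereoscopic virtual assistant would output."""
    return convert(match(recognize(user_input), knowledge))
```

A query found in the local knowledge base is answered directly; anything else falls through to a default reply, which in later embodiments is handed to a remote server.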
Step S103 may specifically be: performing context matching on the computer-recognizable data and returning matched response data.
In practical applications, the stereoscopic virtual assistant has an emotional chat function. In that scenario, the assistant can understand the context of what the user says and perform context analysis on the computer-recognizable data, thereby returning matched response data, i.e., the content or answer the user really wants. The stereoscopic virtual assistant of this embodiment has a context-awareness function and can coherently understand the content of continuous interactions with the user; at this point it can be regarded as an intelligent virtual sprite assistant that gives the user the most timely response with emotion and companionship.
Step S103 may alternatively be: sending the computer-recognizable data to a remote server, so that the remote server performs a matching search in web-page information or an expert system according to the computer-recognizable data and generates the response data according to the search result.
When the information stored in the stereoscopic virtual assistant itself is insufficient to answer the user's request or question, the assistant may send the computer-recognizable data to a remote server, which performs a matching search in web-page information or an expert system and generates response data according to the search result. The assistant stores each piece of response data obtained from the remote server, so that the next time this user or another user asks again, relevant help and guidance can be given quickly and accurately.
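The remote-server fallback with local caching of returned answers can be sketched as follows. The class and parameter names are hypothetical; `remote_search` stands in for the server's matching search in web pages or the expert system.

```python
# Hypothetical sketch: answers obtained remotely are cached so that repeat
# questions from this or another user can be answered locally.

class AssistantMatcher:
    def __init__(self, local_knowledge, remote_search):
        self.local = dict(local_knowledge)
        self.remote_search = remote_search

    def answer(self, query):
        if query in self.local:             # local store is sufficient
            return self.local[query]
        result = self.remote_search(query)  # match in web pages / expert system
        self.local[query] = result          # store for the next questioner
        return result
```

After the first remote lookup, identical questions never reach the server again, matching the caching behavior described above.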
Referring to FIG. 2, the second embodiment of the implementation method of the virtual reality system of the present invention includes the following steps:
S201: Generate a stereoscopic interactive scene and generate a stereoscopic virtual assistant in the stereoscopic interactive scene.
This may be as described in S101 above and is not repeated here.
S202: Recognize the collected user input as computer-recognizable data.
This may be as described in S102 above and is not repeated here.
S203: Perform emotion analysis on the user according to the user input and/or the computer-recognizable data.
The stereoscopic virtual assistant can perform emotion analysis on the user according to the user input and/or the computer-recognizable data, specifically analyzing the user's emotion from the tone, speech rate, and gesture movements of the user input, the text information of the computer-recognizable data, and so on. The user's emotions include pleasure, pride, hope, relaxation, anger, anxiety, shame, disappointment, boredom, and so on.
S204: Match the analyzed user emotion and return matched response data.
The stereoscopic virtual assistant matches the analyzed user emotion and returns matched response data. The assistant can perform animations with a sense of real interaction, such as blinking, gazing, and nodding, and present expression animations with real emotion, such as smiling, sadness, and anger, thereby giving the user emotional resonance. For example, when the user's emotion is pleasure, a fast-paced voice signal and a smiling expression animation may be fed back; when the user's emotion is anxiety, a slow-paced voice signal and a sad expression animation may be fed back, and so on.
S205: Convert the response data into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal.
This may be as described in S104 above and is not repeated here.
S206: Output, in the image of the stereoscopic virtual assistant, at least one of the voice signal, the haptic-feedback vibration signal, and the visual morphology signal.
This may be as described in S105 above and is not repeated here.
The stereoscopic virtual assistant of this embodiment can analyze the user's emotions and interact with the user, giving the user the sense of a friend's companionship, helping the user relieve emotional troubles in time, and enhancing the user's willingness to communicate and the fun of communicating.
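Steps S203–S204 can be sketched as an emotion classifier plus a response-style table. The thresholds, emotion labels, and style table below are illustrative assumptions only; a real system would use trained models over tone, speech rate, gesture, and text.

```python
# Illustrative sketch of S203 (emotion analysis) and S204 (matched response).

def analyze_emotion(speech_rate, pitch):
    """S203: classify emotion from speech rate (syllables/s) and pitch (Hz)."""
    if speech_rate > 4.0 and pitch > 200:
        return "pleased"
    if speech_rate < 2.0:
        return "anxious"
    return "neutral"

RESPONSE_STYLE = {  # S204: matched response data per emotion
    "pleased": {"voice_rate": "fast", "expression": "smile"},
    "anxious": {"voice_rate": "slow", "expression": "sad"},
    "neutral": {"voice_rate": "normal", "expression": "attentive"},
}

def match_emotion(speech_rate, pitch):
    return RESPONSE_STYLE[analyze_emotion(speech_rate, pitch)]
```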
Referring to FIG. 3, FIG. 3 is a schematic flowchart of the third embodiment of the implementation method of the virtual reality system of the present invention. The basic step flow of this embodiment is the same as that of the second embodiment, except for the following two steps: step S203 is replaced by step S303, and step S204 is replaced by step S304.
S303: Acquire user preferences and/or personal data by learning from the computer-recognizable data.
In this embodiment, the stereoscopic virtual assistant can acquire user preferences and/or personal data by learning from the computer-recognizable data. The personal data includes but is not limited to the user's age, gender, height, weight, occupation, hobbies, beliefs, and so on, so that relevant service content can be recommended intelligently. It may also include intelligent recommendation based on geographic location information: the virtual sprite assistant, based on the user's country, region, and work and life location information, provides location-matched suggestions and information reminders, such as local traffic conditions.
S304: Match the recommended content and return matched response data.
The stereoscopic virtual assistant learns to acquire user preferences and/or personal data, proactively recommends content accordingly, matches the content, and returns matched response data.
The stereoscopic virtual assistant of this embodiment can learn to acquire user preferences and/or personal data, can more accurately understand and predict the user's needs, and can provide the user with better services, thereby intelligently recommending content suitable for the user, enriching the user's spare time, expanding the user's extracurricular knowledge, and increasing the fun of the user experience.
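Steps S303–S304 can be sketched as preference learning reduced to topic-frequency counting. The catalog format, tags, and class names are assumptions for illustration; the patent does not specify a learning algorithm.

```python
# Illustrative sketch: learn a preference profile (S303), recommend from it (S304).
from collections import Counter

class PreferenceLearner:
    def __init__(self):
        self.topic_counts = Counter()

    def observe(self, topic):
        """S303: learn from one piece of computer-recognizable data."""
        self.topic_counts[topic] += 1

    def recommend(self, catalog):
        """S304: return catalog items tagged with the user's top topic."""
        if not self.topic_counts:
            return []
        top_topic = self.topic_counts.most_common(1)[0][0]
        return [title for title, tag in catalog if tag == top_topic]
```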
Referring to FIG. 4, FIG. 4 is a schematic flowchart of the fourth embodiment of the implementation method of the virtual reality system of the present invention. The basic step flow of this embodiment is the same as that of the first embodiment, except that the following three steps are added: step S406, step S407, and step S408.
S406: Generate recommended content according to the current process of the application run by the virtual reality system.
When the user experiences the virtual reality device, the user may also enter other applications, such as games, fitness, learning, or entertainment. The stereoscopic virtual assistant can generate recommended content depending on which application the virtual reality system is running, and can also generate real-time recommended content depending on the current process of that application. For example, when the user is playing a VR game, the assistant can give help and guidance in advance at difficult levels or points of confusion.
S407: Match the recommended content and return matched response data.
S408: Change the image output of the stereoscopic virtual assistant based on the response data and/or present the recommended content in the stereoscopic interactive scene.
The content recommended by the stereoscopic virtual assistant may be presented by the assistant itself or directly within the stereoscopic interactive scene.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of the fifth embodiment of the implementation method of the virtual reality system of the present invention. The basic step flow of this embodiment is the same as that of the first embodiment, except that the following two steps are added: step S506 and step S507.
S506: Acquire the current state of a controlled system interconnected with the virtual reality system.
The virtual reality system may also be associated with other devices outside the system, for example smartphones, smart in-vehicle systems, smart homes, and so on. These other devices may also be called controlled systems, and the stereoscopic virtual assistant can acquire the current state of a controlled system interconnected with the virtual reality system.
S507: Match the current state and return matched response data.
The stereoscopic virtual assistant may, at fixed times or periodically, match the current state of the controlled system, return matched response data, and present it to the user, so that the user can understand the current state of the controlled system.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of the sixth embodiment of the implementation method of the virtual reality system of the present invention. The basic step flow of this embodiment is the same as that of the fifth embodiment, except that steps S607, S608, and S609 replace step S507.
S607: Perform a corresponding operation on the controlled system based on the current state and the user input and/or processing rules preset by the user.
The stereoscopic virtual assistant first presets rules for handling the controlled system; then, based on the current state of the controlled system and the user input and/or the user-preset processing rules, it performs the corresponding operation on the controlled system.
S608: Match the operation result of the controlled system and return matched response data.
S609: Change the image output of the stereoscopic virtual assistant based on the response data and/or present the operation result in the stereoscopic interactive scene.
In this embodiment, taking a mobile terminal as an example of the controlled system, assume that the current state of the mobile terminal is its incoming-call state and that the preset processing rule is to hang up or answer the mobile terminal. For example, when the user is playing a virtual reality game and the mobile terminal receives a call or a notification message, the stereoscopic virtual assistant intelligently identifies the importance of the call or notification and processes it accordingly: if the call is very urgent, the user is notified through a floating call window to handle it, or the call is answered directly and the user is alerted by vibration or by pausing the game; otherwise the call is hung up automatically and answered with a text message, such as "I am using a VR device and will contact you later." Here, the corresponding operation of the stereoscopic virtual assistant includes a hang-up operation or an answer operation.
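The incoming-call example above can be sketched as a rule dispatch. The urgency scale, threshold, and rule format are illustrative assumptions; the patent only specifies the hang-up and answer behaviors.

```python
# Sketch of the sixth-embodiment call handling; urgency scale is hypothetical.

def handle_incoming_call(urgency, rules):
    """Apply a user-preset rule to a call arriving during a VR session."""
    if urgency >= rules["urgent_threshold"]:
        # urgent: floating call window, vibrate, pause the game
        return {"action": "answer", "notify": "floating_window", "pause_game": True}
    # non-urgent: hang up automatically and reply by SMS
    return {"action": "hang_up",
            "sms": "I am using a VR device and will contact you later."}
```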
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of the first embodiment of the virtual reality device provided by the present invention.
As shown in FIG. 7, the virtual reality device 100 includes a generation module 110, a collection and recognition module 120, a matching module 130, a conversion module 140, and an output module 150.
The generation module 110 is configured to generate a stereoscopic interactive scene and generate a stereoscopic virtual assistant in the stereoscopic interactive scene; the collection and recognition module 120 is configured to recognize collected user input as computer-recognizable data; the matching module 130 is configured to match the computer-recognizable data and return matched response data; the conversion module 140 is configured to convert the response data into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal; and the output module 150 is configured to output, in the image of the stereoscopic virtual assistant, at least one of the voice signal, the haptic-feedback vibration signal, and the visual morphology signal.
In one practical application of the generation module 110, the virtual reality scene is a 360-degree panoramic, realistic 3D interactive environment, and the stereoscopic virtual assistant may be designed as a 3D animated sprite, human figure, or cartoon character that can interact with the user through stereoscopic forms and motion animations in various virtual scenes.
The collection and recognition module 120 is configured to collect the user's input for the stereoscopic virtual assistant generated by the generation module 110; the user's input information includes but is not limited to the user's voice, key operations, and gesture operations. At the same time, the stereoscopic virtual assistant also recognizes the collected user input as computer-recognizable data.
The collection and recognition module 120 may also be divided into a collection module 121 and a recognition module 122, where the collection module 121 is configured to collect the user's input information and the recognition module 122 is configured to recognize the input information collected by the collection module 121 as computer-recognizable data.
The collection module 121 is further divided into a voice collection module 1211, a gesture collection module 1212, and a key collection module 1213, where the voice collection module 1211 is configured to collect the user's voice input signal, the gesture collection module 1212 is configured to collect the user's gesture input signal, and the key collection module 1213 is configured to collect the user's key input signal. Corresponding to this division of the collection module 121, the recognition module 122 is also subdivided into a voice recognition module 1221, a gesture recognition module 1222, and a key recognition module 1223, where the voice recognition module 1221 is configured to recognize the input information collected by the voice collection module 1211 as computer-recognizable data, the gesture recognition module 1222 is configured to recognize the input information collected by the gesture collection module 1212 as computer-recognizable data, and the key recognition module 1223 is configured to recognize the input information collected by the key collection module 1213 as computer-recognizable data.
The matching module 130 may also be divided into an analysis module 131 and a result module 132, where the analysis module 131 is configured to analyze and match the computer-recognizable data recognized by the recognition module 122, and the result module 132 is configured to feed back the analysis and matching result of the analysis module 131, i.e., return matched response data. In other embodiments, the matching module 130 may further include a self-learning module 133, which is configured to learn and remember the user's usage habits and to provide targeted reference suggestions when the analysis module 131 performs analysis and matching.
The conversion module 140 is configured to convert the matched response data returned by the matching module 130 into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal.
The output module 150 is configured to output the signal converted by the conversion module 140 in the image of the stereoscopic virtual assistant. The output module 150 may also be divided into a voice output module 151, a haptic output module 152, and a visual output module 153: the voice output module 151 is configured to output the converted signal as a voice signal in the image of the stereoscopic virtual assistant, for example in the form of a voice announcement; the haptic output module 152 is configured to output the converted signal as a haptic-feedback vibration signal in the image of the stereoscopic virtual assistant, for example in the form of a tremble; and the visual output module 153 is configured to output the converted signal as a visual morphology signal in the image of the stereoscopic virtual assistant, for example in the form of animations, expressions, and colors.
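The fan-out from the output module 150 to its three sub-modules can be sketched as a simple router. The device names and signal keys below are illustrative, chosen to mirror the module numbering in the text.

```python
# Sketch of output module 150 routing a converted signal to sub-modules
# 151 (voice), 152 (haptic), and 153 (visual); names are illustrative.

def dispatch_output(signal):
    """Route a converted signal to the voice, haptic, and visual outputs."""
    routed = []
    if "voice" in signal:
        routed.append(("voice_output_151", signal["voice"]))    # voice announcement
    if "haptic" in signal:
        routed.append(("haptic_output_152", signal["haptic"]))  # e.g. a tremble
    if "visual" in signal:
        routed.append(("visual_output_153", signal["visual"]))  # animation/expression
    return routed
```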
Each module of the above virtual reality device 100 can respectively execute the corresponding steps in the above method embodiments, so the modules are not described again here; for details, refer to the description of the corresponding steps above.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of another embodiment of the virtual reality device provided by the present invention.
As shown in FIG. 8, the virtual reality device 200 includes: a processor 210, and an earpiece 220, a camera 230, a key handle 240, a speaker 250, a display 260, a vibration motor 270, and a memory 280 coupled to the processor 210.
The earpiece 220 is configured to collect the user's voice input signal; the camera 230 is configured to collect the user's gesture input signal; the key handle 240 is configured to collect the user's key input signal.
The speaker 250 is configured to play a voice signal for the stereoscopic virtual assistant; the display 260 is configured to display a visual morphology signal for the stereoscopic virtual assistant; the vibration motor 270 is configured to output a haptic-feedback vibration signal for the stereoscopic virtual assistant.
The memory 280 is configured to store the morphology data of the stereoscopic virtual assistant, the input signals collected by the processor 210, and the identification signals, matching signals, conversion signals, and the like associated with the processor 210.
The processor 210 is configured to collect the user's voice, gesture, and key input signals for the stereoscopic virtual assistant, recognize the input signals as identification signals recognizable by the processor, match them against the matching signals in the memory 280, and then convert them into conversion signals recognizable by the user, which are finally output by the stereoscopic virtual assistant. The processor 210 is configured to execute the steps of any one of the first to sixth embodiments of the implementation method of the virtual reality system shown in FIG. 1 to FIG. 6.
Referring to FIG. 9, FIG. 9 is a schematic structural diagram of the first embodiment of the virtual reality system provided by the present invention.
As shown in FIG. 9, the virtual reality system 10 includes a remote server 20 and the virtual reality device 100 described above; the structure of the virtual reality device 100 is described above and is not repeated here. The remote server 20 specifically includes a processing module 21, a search module 22, and an expert module 23, where the processing module 21, the search module 22, and the expert module 23 are connected to one another and work in cooperation.
Specifically, the processing module 21 is coupled to the matching module 130 of the virtual reality device 100 and is configured to process the information transmitted by the matching module 130 and feed back the processing result. In particular, the processing module 21 uses knowledge-computing technology to transmit the information to the search module 22 and to filter, reorganize, and recompute the knowledge found by the search module 22; it then uses question-and-answer recommendation technology so that highly localized information can be recommended more accurately according to the user's region and personal preferences. The search module 22 is configured to search the information provided by the processing module 21 and feed back the search result; specifically, the search module 22 uses web-search technology and knowledge-search technology to perform matching searches in existing web-page information and in the information stored by the expert module 23. The expert module 23 is configured to store structured knowledge, including but not limited to expert-suggestion data with a high degree of human participation, for reference by the processing module 21 and the search module 22; meanwhile, the expert module 23 also has a predictive function, part of which can prepare answers to users' questions before the users even know they need help.
In summary, those skilled in the art can readily understand that, because the implementation method of the virtual reality system provided by the present invention is provided with a stereoscopic virtual assistant that collects the user's input and then performs recognition, matching, and conversion, the stereoscopic virtual assistant can output intelligent services whose visual, auditory, and tactile aspects all meet the user's needs, giving the user a humanized resonance, increasing the fun of the user experience, and improving the user experience.
The above are only embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included in the patent protection scope of the present invention.

Claims (14)

  1. A virtual reality device, wherein the device comprises:
    a generation module, configured to generate a stereoscopic interactive scene and generate a stereoscopic virtual assistant in the stereoscopic interactive scene;
    a collection and recognition module, configured to recognize collected user input as computer-recognizable data;
    a matching module, configured to match the computer-recognizable data and return matched response data;
    a conversion module, configured to convert the response data into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal;
    an output module, configured to output, in the image of the stereoscopic virtual assistant, at least one of the voice signal, the haptic-feedback vibration signal, and the visual morphology signal;
    wherein the matching module comprises an analysis module and a result module, the analysis module being configured to analyze and match the computer-recognizable data recognized by the collection and recognition module, and the result module being configured to feed back the analysis and matching result of the analysis module.
  2. The virtual reality device according to claim 1, wherein the collection and recognition module comprises a collection module and a recognition module;
    the collection module is configured to collect the user's input information;
    the recognition module is configured to recognize the information collected by the collection module as computer-recognizable data;
    wherein the collection module further comprises a voice collection module, a gesture collection module, and a key collection module.
  3. The virtual reality device according to claim 1, wherein the output module comprises a voice output module, a haptic output module, and a visual output module;
    the voice output module is configured to output the signal converted by the conversion module as a voice signal in the image of the stereoscopic virtual assistant;
    the haptic output module is configured to output the signal converted by the conversion module as a haptic-feedback vibration signal in the image of the stereoscopic virtual assistant;
    the visual output module is configured to output the signal converted by the conversion module as a visual morphology signal in the image of the stereoscopic virtual assistant.
  4. A virtual reality device, comprising a processor, and an earpiece, a camera, a key handle, a speaker, a display, a vibration motor, and a memory coupled to the processor;
    wherein the earpiece is configured to collect the user's voice input signal; the camera is configured to collect the user's gesture input signal; the key handle is configured to collect the user's key input signal;
    the speaker is configured to play a voice signal for the stereoscopic virtual assistant; the display is configured to display a visual morphology signal for the stereoscopic virtual assistant; the vibration motor is configured to output a haptic-feedback vibration signal for the stereoscopic virtual assistant;
    the memory is configured to store the morphology data of the stereoscopic virtual assistant, the input signals collected by the processor, and the associated identification signals, matching signals, and conversion signals;
    the processor is configured to collect the user's voice, gesture, and key input signals for the stereoscopic virtual assistant, recognize the input signals as identification signals recognizable by the processor, match them against the matching signals in the memory, and then convert them into conversion signals recognizable by the user, the converted user-recognizable conversion signals being output by the stereoscopic virtual assistant.
  5. The virtual reality device according to claim 4, wherein the user-recognizable conversion signal comprises at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal.
  6. An implementation method of a virtual reality system, wherein the method comprises the steps of:
    generating a stereoscopic interactive scene and generating a stereoscopic virtual assistant in the stereoscopic interactive scene;
    recognizing collected user input as computer-recognizable data;
    matching the computer-recognizable data and returning matched response data;
    converting the response data into at least one of a voice signal, a haptic-feedback vibration signal, and a visual morphology signal;
    outputting, in the image of the stereoscopic virtual assistant, at least one of the voice signal, the haptic-feedback vibration signal, and the visual morphology signal.
  7. The method according to claim 6, wherein the step of matching the computer-recognizable data and returning matched response data comprises:
    performing context matching on the computer-recognizable data and returning matched response data.
  8. The method according to claim 6, wherein the step of matching the computer-recognizable data and returning matched response data comprises:
    performing emotion analysis on the user according to the user input and/or the computer-recognizable data;
    matching the analyzed user emotion and returning matched response data.
  9. The method according to claim 6, wherein the step of matching the computer-recognizable data and returning matched response data comprises:
    sending the computer-recognizable data to a remote server, so that the remote server performs a matching search in web-page information or an expert system according to the computer-recognizable data and generates the response data according to the search result.
  10. The method according to claim 6, wherein the step of matching the computer-recognizable data and returning matched response data comprises:
    acquiring user preferences and/or personal data by learning from the computer-recognizable data;
    matching the recommended content and returning matched response data.
  11. The method according to claim 6, wherein the method further comprises:
    generating recommended content according to the current process of the application run by the virtual reality system;
    matching the recommended content and returning matched response data;
    changing the image output of the stereoscopic virtual assistant based on the response data and/or presenting the recommended content in the stereoscopic interactive scene.
  12. The method according to claim 6, wherein the method further comprises:
    acquiring the current state of a controlled system interconnected with the virtual reality system;
    matching the current state and returning matched response data.
  13. The method according to claim 12, wherein the step of matching the current state and returning matched response data further comprises:
    performing a corresponding operation on the controlled system based on the current state and the user input and/or processing rules preset by the user;
    matching the operation result of the controlled system and returning matched response data;
    changing the image output of the stereoscopic virtual assistant based on the response data and/or presenting the operation result in the stereoscopic interactive scene.
  14. The method according to claim 13, wherein the controlled system is a mobile terminal, the current state is an incoming-call state of the mobile terminal, and the corresponding operation comprises a hang-up operation or an answer operation.
PCT/CN2017/109174 2016-11-02 2017-11-02 Implementation method of virtual reality system and virtual reality device WO2018082626A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/286,650 US20190187782A1 (en) 2016-11-02 2019-02-27 Method of implementing virtual reality system, and virtual reality device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610949735.8 2016-11-02
CN201610949735.8A CN106598215B (zh) 2016-11-02 2016-11-02 虚拟现实系统的实现方法及虚拟现实装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/286,650 Continuation US20190187782A1 (en) 2016-11-02 2019-02-27 Method of implementing virtual reality system, and virtual reality device

Publications (1)

Publication Number Publication Date
WO2018082626A1 true WO2018082626A1 (zh) 2018-05-11

Family

ID=58589788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/109174 WO2018082626A1 (zh) 2016-11-02 2017-11-02 虚拟现实系统的实现方法及虚拟现实装置

Country Status (3)

Country Link
US (1) US20190187782A1 (zh)
CN (1) CN106598215B (zh)
WO (1) WO2018082626A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110822648A (zh) * 2019-11-25 2020-02-21 广东美的制冷设备有限公司 空调器及其控制方法、计算机可读存储介质
CN113672155A (zh) * 2021-07-02 2021-11-19 浪潮金融信息技术有限公司 一种基于vr技术的自助操作系统、方法及介质

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106598215B (zh) * 2016-11-02 2019-11-08 Tcl移动通信科技(宁波)有限公司 虚拟现实系统的实现方法及虚拟现实装置
CN107329990A (zh) * 2017-06-06 2017-11-07 北京光年无限科技有限公司 一种用于虚拟机器人的情绪输出方法以及对话交互系统
CN107454074A (zh) * 2017-07-31 2017-12-08 广州千煦信息科技有限公司 一种手游管理系统
CN107577661B (zh) * 2017-08-07 2020-12-11 北京光年无限科技有限公司 一种针对虚拟机器人的交互输出方法以及系统
CN107767869B (zh) * 2017-09-26 2021-03-12 百度在线网络技术(北京)有限公司 用于提供语音服务的方法和装置
CN107734166A (zh) * 2017-10-11 2018-02-23 上海展扬通信技术有限公司 一种基于智能终端的控制方法及控制系统
CN111819565A (zh) * 2018-02-27 2020-10-23 松下知识产权经营株式会社 数据转换系统、数据转换方法和程序
US10802894B2 (en) * 2018-03-30 2020-10-13 Inflight VR Software GmbH Method, apparatus, and computer-readable medium for managing notifications delivered to a virtual reality device
CN110503449A (zh) * 2018-05-18 2019-11-26 开利公司 用于购物场所的交互系统及其实现方法
CN108717270A (zh) * 2018-05-30 2018-10-30 珠海格力电器股份有限公司 智能设备的控制方法、装置、存储介质和处理器
EP3620319B1 (en) 2018-09-06 2022-08-10 Audi Ag Method for operating a virtual assistant for a motor vehicle and corresponding backend system
CN109346076A (zh) * 2018-10-25 2019-02-15 三星电子(中国)研发中心 语音交互、语音处理方法、装置和系统
US11574553B2 (en) * 2019-09-18 2023-02-07 International Business Machines Corporation Feeling experience correlation
CN110751734B (zh) * 2019-09-23 2022-06-14 华中科技大学 一种适用于工作现场的混合现实助手系统
CN110767220B (zh) * 2019-10-16 2024-05-28 腾讯科技(深圳)有限公司 一种智能语音助手的交互方法、装置、设备及存储介质
CN110822649B (zh) * 2019-11-25 2021-12-17 广东美的制冷设备有限公司 空调器的控制方法、空调器及存储介质
CN110822641A (zh) * 2019-11-25 2020-02-21 广东美的制冷设备有限公司 空调器及其控制方法、装置和可读存储介质
CN110822647B (zh) * 2019-11-25 2021-12-17 广东美的制冷设备有限公司 空调器的控制方法、空调器及存储介质
CN110822661B (zh) * 2019-11-25 2021-12-17 广东美的制冷设备有限公司 空调器的控制方法、空调器及存储介质
CN110822642B (zh) * 2019-11-25 2021-09-14 广东美的制冷设备有限公司 空调器及其控制方法和计算机存储介质
CN110822646B (zh) * 2019-11-25 2021-12-17 广东美的制冷设备有限公司 空调器的控制方法、空调器及存储介质
CN110822644B (zh) * 2019-11-25 2021-12-03 广东美的制冷设备有限公司 空调器及其控制方法和计算机存储介质
CN110822643B (zh) * 2019-11-25 2021-12-17 广东美的制冷设备有限公司 空调器及其控制方法和计算机存储介质
CN110764429B (zh) * 2019-11-25 2023-10-27 广东美的制冷设备有限公司 家电设备的交互方法、终端设备及存储介质
CN112272259B (zh) * 2020-10-23 2021-06-01 北京蓦然认知科技有限公司 一种自动化助手的训练方法、装置
CN113643047B (zh) * 2021-08-17 2024-05-10 中国平安人寿保险股份有限公司 虚拟现实控制策略的推荐方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102446428A (zh) * 2010-09-27 2012-05-09 北京紫光优蓝机器人技术有限公司 基于机器人的交互式学习系统及其交互方法
CN105126355A (zh) * 2015-08-06 2015-12-09 上海元趣信息技术有限公司 儿童陪伴机器人与儿童陪伴系统
CN105345818A (zh) * 2015-11-04 2016-02-24 深圳好未来智能科技有限公司 带有情绪及表情模块的3d视频互动机器人
CN105843382A (zh) * 2016-03-18 2016-08-10 北京光年无限科技有限公司 一种人机交互方法及装置
US20160311115A1 (en) * 2015-04-27 2016-10-27 David M. Hill Enhanced configuration and control of robots
CN106598215A (zh) * 2016-11-02 2017-04-26 惠州Tcl移动通信有限公司 虚拟现实系统的实现方法及虚拟现实装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8964298B2 (en) * 2010-02-28 2015-02-24 Microsoft Corporation Video display modification based on sensor input for a see-through near-to-eye display
US9338493B2 (en) * 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10664741B2 (en) * 2016-01-14 2020-05-26 Samsung Electronics Co., Ltd. Selecting a behavior of a virtual agent
US10026229B1 (en) * 2016-02-09 2018-07-17 A9.Com, Inc. Auxiliary device as augmented reality platform

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102446428A (zh) * 2010-09-27 2012-05-09 北京紫光优蓝机器人技术有限公司 基于机器人的交互式学习系统及其交互方法
US20160311115A1 (en) * 2015-04-27 2016-10-27 David M. Hill Enhanced configuration and control of robots
CN105126355A (zh) * 2015-08-06 2015-12-09 上海元趣信息技术有限公司 儿童陪伴机器人与儿童陪伴系统
CN105345818A (zh) * 2015-11-04 2016-02-24 深圳好未来智能科技有限公司 带有情绪及表情模块的3d视频互动机器人
CN105843382A (zh) * 2016-03-18 2016-08-10 北京光年无限科技有限公司 一种人机交互方法及装置
CN106598215A (zh) * 2016-11-02 2017-04-26 惠州Tcl移动通信有限公司 虚拟现实系统的实现方法及虚拟现实装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110822648A (zh) * 2019-11-25 2020-02-21 广东美的制冷设备有限公司 空调器及其控制方法、计算机可读存储介质
CN110822648B (zh) * 2019-11-25 2021-12-17 广东美的制冷设备有限公司 空调器及其控制方法、计算机可读存储介质
CN113672155A (zh) * 2021-07-02 2021-11-19 浪潮金融信息技术有限公司 一种基于vr技术的自助操作系统、方法及介质
CN113672155B (zh) * 2021-07-02 2023-06-30 浪潮金融信息技术有限公司 一种基于vr技术的自助操作系统、方法及介质

Also Published As

Publication number Publication date
CN106598215A (zh) 2017-04-26
CN106598215B (zh) 2019-11-08
US20190187782A1 (en) 2019-06-20

Similar Documents

Publication Publication Date Title
WO2018082626A1 (zh) 虚拟现实系统的实现方法及虚拟现实装置
CN106874265B (zh) 一种与用户情绪匹配的内容输出方法、电子设备及服务器
WO2019156332A1 (ko) 증강현실용 인공지능 캐릭터의 제작 장치 및 이를 이용한 서비스 시스템
CN108470533A (zh) 基于虚拟人的增强型智能交互广告系统及装置
JP2018014094A (ja) 仮想ロボットのインタラクション方法、システム及びロボット
WO2019125082A1 (en) Device and method for recommending contact information
JP2020521995A (ja) 代替インタフェースでのプレゼンテーションのための電子会話の解析
WO2015020354A1 (en) Apparatus, server, and method for providing conversation topic
CN107632706A (zh) 多模态虚拟人的应用数据处理方法和系统
EP3652925A1 (en) Device and method for recommending contact information
Yu et al. Inferring user profile attributes from multidimensional mobile phone sensory data
Zimmermann Context-awareness in user modelling: Requirements analysis for a case-based reasoning application
WO2022196921A1 (ko) 인공지능 아바타에 기초한 인터랙션 서비스 방법 및 장치
CN111414506A (zh) 基于人工智能情绪处理方法、装置、电子设备及存储介质
WO2019112154A1 (ko) 텍스트-리딩 기반의 리워드형 광고 서비스 제공 방법 및 이를 수행하기 위한 사용자 단말
CN112860169A (zh) 交互方法及装置、计算机可读介质和电子设备
Kim et al. Beginning of a new standard: Internet of Media Things
WO2016182393A1 (ko) 사용자의 감성을 분석하는 방법 및 디바이스
WO2019031621A1 (ko) 통화 중 감정을 인식하여 인식된 감정을 활용하는 방법 및 시스템
WO2016206645A1 (zh) 为机器装置加载控制数据的方法及装置
CN113656557A (zh) 消息回复方法、装置、存储介质及电子设备
KR102293743B1 (ko) 인공지능 챗봇 기반 케어 시스템
KR20000017756A (ko) 수화 번역 장치
CN109087644B (zh) 电子设备及其语音助手的交互方法、具有存储功能的装置
WO2019190243A1 (ko) 사용자와의 대화를 위한 정보를 생성하는 시스템 및 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17866983

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17866983

Country of ref document: EP

Kind code of ref document: A1