WO2018032970A1 - Authentication method based on a virtual reality scenario, virtual reality device and storage medium - Google Patents

Authentication method based on a virtual reality scenario, virtual reality device and storage medium

Info

Publication number
WO2018032970A1
WO2018032970A1 (PCT/CN2017/095640)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
authentication
virtual object
virtual reality
information
Prior art date
Application number
PCT/CN2017/095640
Other languages
English (en)
French (fr)
Inventor
人巴加瓦达维尔·J
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201610695148.0A (patent CN106131057B)
Priority claimed from CN201610907039.0A (patent CN106527887B)
Priority claimed from CN201610954866.5A (patent CN107992213B)
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to EP17840945.4A (patent EP3502939B1)
Publication of WO2018032970A1
Priority to US16/205,708 (patent US10868810B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/36User authentication by graphic or iconic representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan

Definitions

  • the present invention relates to the field of virtual reality, and in particular to an authentication method based on a virtual reality scenario, a virtual reality device, and a storage medium.
  • Virtual reality uses computer simulation to generate a three-dimensional virtual world, providing the user with simulated visual, auditory, tactile and other sensory input so that the user feels immersed in the scene.
  • the head-mounted (helmet) display is a common virtual reality device. While shielding the real world, it provides a high-resolution, wide-field-of-view virtual scene with stereo headphones, creating a strong sense of immersion.
  • service providers are able to offer a variety of services and products based on the needs of users in virtual reality scenarios, for example, users can purchase products and pay in virtual reality scenarios.
  • in the related art, paying within a virtual reality scenario requires establishing a payment authentication system inside the virtual reality scenario, which lowers the efficiency of payment in the virtual reality scenario.
  • An embodiment of the present invention provides an authentication method based on a virtual reality scenario, a virtual reality device, and a storage medium, so as to at least solve the technical problem in the related art that paying in a virtual reality scenario requires establishing a payment authentication system inside the scenario, resulting in low payment efficiency in the virtual reality scenario.
  • According to one aspect, a method for authenticating in a virtual reality scenario is provided, including: receiving an authentication request in a virtual reality scenario; collecting fingerprint information to be authenticated by a fingerprint collection device in a real scene; sending the fingerprint information to be authenticated to an authentication device in the real scene; and receiving, in the virtual reality scenario, the authentication result information sent by the authentication device, where the authentication result information is used to indicate that the fingerprint information to be authenticated passes or fails the authentication.
  • According to another aspect, a virtual reality device is provided, including one or more processors and a memory for storing a software program and a module. By running the software program and the module stored in the memory, the processor performs the following operations: receiving an authentication request in a virtual reality scenario; collecting fingerprint information to be authenticated by a fingerprint collection device in a real scene; sending the fingerprint information to be authenticated to an authentication device in the real scene; and receiving, in the virtual reality scenario, the authentication result information sent by the authentication device, where the authentication result information is used to indicate that the fingerprint information to be authenticated passes or fails the authentication.
  • According to another aspect, a storage medium is provided, having stored therein at least one piece of program code, the at least one piece of program code being loaded and executed by a processor to implement the authentication method based on a virtual reality scenario described in the foregoing aspect.
  • In the embodiment of the present invention, an authentication request is received in the virtual reality scenario; fingerprint information to be authenticated is collected by a fingerprint collection device in the real scene; the fingerprint information to be authenticated is sent to an authentication device in the real scene; and the authentication result information sent by the authentication device is received in the virtual reality scenario, where the authentication result information is used to indicate that the fingerprint information to be authenticated passes or fails the authentication.
  • Because the fingerprint information collected by the fingerprint collection device in the real scene is sent to the authentication device in the real scene for authentication when the authentication request is received in the virtual reality scenario, payment authentication can be completed without establishing a payment authentication system inside the virtual reality scenario. This achieves the technical effect of improving payment efficiency in the virtual reality scenario, and solves the technical problem in the related art that paying in a virtual reality scenario requires establishing a payment authentication system inside the scenario, resulting in low payment efficiency.
  • FIG. 1 is a schematic diagram of a hardware environment of a virtual reality scenario based authentication method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of an optional virtual reality scenario based authentication method according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a virtual reality scenario based authentication method in accordance with a preferred embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an authentication interface in a virtual reality scenario in accordance with a preferred embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a fingerprint collection device according to a preferred embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an indication identifier pointing to an authentication area in accordance with a preferred embodiment of the present invention.
  • FIG. 7 is a schematic diagram of displaying authentication result information in a virtual reality scenario according to a preferred embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a data interaction process between a virtual reality scene and a real scene according to a preferred embodiment of the present invention
  • FIG. 9 is a schematic diagram of an optional virtual reality scene based authentication apparatus according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of another optional virtual reality scene based authentication apparatus according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of another optional virtual reality scene based authentication apparatus according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of another optional virtual reality scene based authentication apparatus according to an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of another optional virtual reality scene based authentication apparatus according to an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of another optional virtual reality scenario based authentication device according to an embodiment of the present invention.
  • FIG. 15 is a structural block diagram of a terminal according to an embodiment of the present invention.
  • FIG. 16 is a schematic structural diagram of a VR system according to an embodiment of the present invention.
  • FIG. 17 is a flowchart of a method for selecting a virtual object according to an embodiment of the present invention.
  • FIGS. 18A to 18C are schematic views of controlled points provided by an embodiment of the present invention.
  • FIG. 18D is a schematic diagram of the virtual object selection method provided in FIG. 17 in a specific implementation.
  • FIG. 19 is a flowchart of a method for selecting a virtual object according to another embodiment of the present invention.
  • FIG. 20A is a flowchart of a method for selecting a virtual object according to another embodiment of the present invention.
  • FIG. 20B is a schematic diagram of the virtual object selection method provided in FIG. 20A in a specific implementation.
  • FIG. 21A is a flowchart of a method for selecting a virtual object according to another embodiment of the present invention.
  • FIG. 21B is a schematic diagram of the virtual object selection method provided in FIG. 21A in a specific implementation.
  • FIG. 22A is a flowchart of a method for selecting a virtual object according to another embodiment of the present invention.
  • FIG. 22B is a schematic diagram of the virtual object selection method provided in FIG. 22A in a specific implementation.
  • FIG. 23A is a flowchart of a method for selecting a virtual object according to another embodiment of the present invention.
  • FIG. 23B is a flowchart of a method for selecting a virtual object according to another embodiment of the present invention.
  • FIG. 24 is a block diagram of a virtual object selection apparatus according to an embodiment of the present invention.
  • FIG. 25 is a block diagram of a virtual object selection apparatus according to another embodiment of the present invention.
  • FIG. 26 is a block diagram of a virtual object selection apparatus according to another embodiment of the present invention.
  • FIG. 27 is a block diagram of a VR system according to another embodiment of the present invention.
  • FIG. 29 is a structural diagram of a handle-based VR device according to another embodiment of the present invention.
  • FIG. 30 is a block diagram of a virtual reality based identifier generating apparatus according to another embodiment of the present invention.
  • FIG. 31 is a block diagram of a selection result obtaining module according to another embodiment of the present invention.
  • FIG. 33 is a block diagram of an identity verification system according to another embodiment of the present invention.
  • Virtual reality is a technology that comprehensively utilizes computer graphics systems and various reality and control interface devices to provide an immersive experience in an interactive three-dimensional environment generated on a computer.
  • the computer-generated, interactive three-dimensional environment is called a virtual environment.
  • Virtual reality technology is a computer simulation technology with which a virtual world can be created and experienced. It uses a computer to generate a simulated environment and, through multi-source information fusion, interactive three-dimensional dynamic views, and system simulation of entity behavior, immerses the user in that environment.
  • a method embodiment of an authentication method based on a virtual reality scenario is provided.
  • the foregoing virtual reality scenario-based authentication method may be applied to a hardware environment formed by the server 102 and the terminal 104 as shown in FIG. 1.
  • the server 102 is connected to the terminal 104 through a network, including but not limited to a wide area network, a metropolitan area network, or a local area network.
  • the terminal 104 includes, but is not limited to, a personal computer (PC), a mobile phone, a tablet computer, and the like.
  • the virtual reality scenario-based authentication method of the embodiment of the present invention may be executed by the server 102, may be performed by the terminal 104, or may be performed by the server 102 and the terminal 104 in common.
  • the virtual reality scenario-based authentication method performed by the terminal 104 in the embodiment of the present invention may also be performed by a client installed thereon.
  • FIG. 2 is a flow diagram of an optional virtual reality scene based authentication method according to an embodiment of the present invention. As shown in FIG. 2, the method may include the following steps:
  • Step S202: receive an authentication request in a virtual reality scenario.
  • Step S204: collect fingerprint information to be authenticated by using a fingerprint collection device in a real scene.
  • Step S206: send the fingerprint information to be authenticated to an authentication device in the real scene.
  • Step S208: receive, in the virtual reality scenario, the authentication result information sent by the authentication device, where the authentication result information is used to indicate that the fingerprint information to be authenticated passes or fails the authentication.
  • the above steps S202 to S208 may be performed by a virtual reality device, such as a head-mounted (helmet) display or light-valve (shutter) glasses.
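The four steps above can be sketched in short Python pseudocode-style form. All class and function names here (FingerprintCollector, AuthDevice, handle_authentication_request) are illustrative stand-ins, not part of the patent:

```python
class FingerprintCollector:
    """Stands in for the fingerprint collection device in the real scene."""
    def collect(self):
        # S204: collect the fingerprint information to be authenticated
        return {"fingerprint": "<scanned-image-bytes>"}

class AuthDevice:
    """Stands in for the authentication device in the real scene."""
    def __init__(self, enrolled):
        self.enrolled = enrolled          # fingerprint -> user mapping
    def authenticate(self, sample):
        user = self.enrolled.get(sample["fingerprint"])
        return {"passed": user is not None, "user": user}

def handle_authentication_request(collector, auth_device):
    # S202: an authentication request has been received in the VR scenario.
    sample = collector.collect()               # S204
    result = auth_device.authenticate(sample)  # S206: send to auth device
    # S208: receive the result back in the VR scenario
    return "authentication passed" if result["passed"] else "authentication failed"
```

The point of the sketch is that the authentication logic lives entirely outside the virtual reality scenario; the VR side only forwards data and displays the result.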
  • In the embodiment of the present invention, the fingerprint information to be authenticated collected by the fingerprint collection device in the real scene is sent to the authentication device in the real scene for authentication, so that payment authentication can be achieved without establishing a payment authentication system inside the virtual reality scenario. This achieves the technical effect of improving payment efficiency in the virtual reality scenario, and solves the technical problem in the related art that paying in a virtual reality scenario requires establishing a payment authentication system inside the scenario, resulting in low payment efficiency.
  • the virtual reality scene may be a scene displayed by the virtual reality device capable of presenting the virtual reality scene.
  • the virtual reality device may be a head-mounted (helmet) display, light-valve (shutter) glasses, or the like.
  • the authentication request may be a request triggered by performing a target authentication event in a virtual reality scenario, where the target authentication event is not specifically limited.
  • the target authentication event may be a payment authentication event that needs to be performed when paying in the virtual reality scenario, or a rights authentication event that needs to be performed when an event requiring rights authentication is executed in the virtual reality scenario.
  • the virtual reality device may detect whether there is a target authentication event in the virtual reality scenario.
  • the virtual reality device can shorten the response time to the authentication request by detecting the authentication request in real time, thereby improving the execution efficiency of the target authentication event in the virtual reality scenario.
  • the virtual reality device may display the prompt information in the virtual reality scenario, where the prompt information may be used to prompt the user to input the authentication information indicated by the authentication request to complete the target authentication event.
  • In this embodiment, by displaying the prompt information in the virtual reality scenario, the user can be prompted to perform the indicated authentication operation more intuitively, thereby effectively improving the user experience.
  • the user may perform an authentication operation according to the prompt information displayed in the virtual reality scenario.
  • the authentication operation performed by the user may be inputting information to be authenticated, such as fingerprint information, voice information, face recognition information, and the like.
  • the embodiment of the present invention takes fingerprint information as an example for description.
  • the user can input the fingerprint information to be authenticated through the fingerprint collection device in the real scene.
  • the fingerprint collection device may be a fingerprint scanner or other device capable of scanning and collecting fingerprints, and may be provided with a fingerprint collection area.
  • the user can place a finger in the fingerprint collection area to complete fingerprint collection.
  • the fingerprint information to be authenticated may be information of a fingerprint input by the user on the fingerprint collection device, where the fingerprint information to be authenticated may be a fingerprint image or fingerprint feature information.
  • the fingerprint collection device can be communicatively coupled to the virtual reality device, and the communication connection is preferably a wireless connection, such as Bluetooth, Wireless Fidelity (WiFi), or the like.
  • the fingerprint collection device can send the collected fingerprint information to be authenticated to the virtual reality device, so that the virtual reality device can respond to the authentication request in the virtual reality scenario according to the received fingerprint information and complete the authentication.
  • the virtual reality device may send the information to be authenticated to the authentication device in the real scene for authentication.
  • the authentication device in the real-world scenario is not specifically limited in the embodiment of the present invention.
  • the authentication device may be Alipay, a bank payment verification platform, or the like.
  • the authentication device in the real scene may be communicatively connected to the virtual reality device, preferably over a wireless connection such as Bluetooth or WiFi. The virtual reality device may send the fingerprint information to be authenticated to the authentication device over this connection, for the authentication device to perform authentication.
  • the fingerprint information of users may be pre-stored in the authentication device. It should be noted that the authentication device may store fingerprint information of multiple users, with each piece of fingerprint information uniquely corresponding to one user. After receiving the fingerprint information to be authenticated, the authentication device may first determine whether identical fingerprint information is stored. If not, the authentication device may directly determine that the fingerprint information to be authenticated fails the authentication. If so, the authentication device continues, according to the correspondence between fingerprint information and users, to verify the information of the user corresponding to the fingerprint information to be authenticated: if that user's information is legal, the authentication device may determine that the fingerprint information to be authenticated passes the authentication; otherwise, it determines that the fingerprint information fails the authentication.
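The two-stage check described above can be sketched as follows. The record layout and the "legal" flag are illustrative assumptions; the patent only specifies that a fingerprint match is required first, followed by verification of the corresponding user's information:

```python
def authenticate_fingerprint(sample, records):
    """Two-stage check: fingerprint match, then user-info verification.

    records: list of {"fingerprint": ..., "user": {..., "legal": bool}}
    """
    # Stage 1: is there stored fingerprint information identical to the sample?
    match = next((r for r in records if r["fingerprint"] == sample), None)
    if match is None:
        return False                    # fails authentication directly
    # Stage 2: verify the corresponding user's information is legal.
    return bool(match["user"].get("legal"))
```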
  • the authentication result information may be fed back to the virtual reality device by using a communication connection between the authentication device and the virtual reality device.
  • the authentication result information may be used to indicate whether the to-be-authenticated fingerprint information passes the authentication, and may include passing the authentication and failing the authentication.
  • the embodiment may further include: step S210, displaying the authentication result information in the virtual reality scenario.
  • the display manner of the authentication result information in the virtual reality scenario is not specifically limited. The embodiment can display the authentication result information in the virtual reality scenario, so that the user can clearly and intuitively obtain the authentication result information of the to-be-authenticated fingerprint information, which is more in line with the user's requirement, and effectively improves the user experience.
  • the embodiment may further include the following steps:
  • Step S2032: determine whether the indication identifier points to an authentication area in the virtual reality scenario, where the indication identifier is generated by the fingerprint collection device in the virtual reality scenario;
  • Step S2034: when it is determined that the indication identifier points to the authentication area, display the prompt information in the virtual reality scene, where the prompt information is used to prompt input of the fingerprint information to be authenticated.
  • an authentication area may be displayed in the virtual reality scenario, and content that requires user authentication may be displayed in the authentication area. For example, the amount of money that the user needs to pay and the authentication content required for the payment process may be displayed in the authentication area.
  • the indication identifier may be generated by the fingerprint collection device in the virtual reality scenario. It should be noted that the embodiment of the present invention does not specifically limit the form of the indication identifier.
  • the indication identifier may be a mouse arrow, an indicator line, or the like.
  • the user can move the indication identifier in the virtual reality scenario by performing corresponding operations on the fingerprint collection device, so that the indication identifier points to the authentication area.
  • the prompt information may be displayed in the virtual reality scene, where the prompt information may be used to prompt the user to enter the fingerprint information to be authenticated on the fingerprint collection device in the real scene.
  • this embodiment may first determine whether the indication identifier generated by the fingerprint collection device in the virtual reality scenario points to the authentication area; if it is determined that the indication identifier points to the authentication area, it indicates that the user intends to authenticate the content displayed in the authentication area that requires user authentication.
  • the prompt information may be displayed in the virtual reality scenario, and the user is prompted to input the fingerprint information to be authenticated on the fingerprint collection device, and the corresponding authentication is performed by using the fingerprint information to be authenticated.
  • In this embodiment, by determining whether the indication identifier generated by the fingerprint collection device in the virtual reality scenario points to the authentication area, the content requiring user authentication can be identified intuitively in the virtual reality scenario, making the user's authentication target clear. Moreover, when the indication identifier points to the authentication area, this embodiment displays prompt information in the virtual reality scene prompting the user to enter the fingerprint information to be authenticated on the fingerprint collection device, so that the user understands the authentication procedure more clearly, which both improves the user experience and improves the efficiency with which users perform authentication.
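One way the "indication identifier points to the authentication area" test in step S2032 could be implemented is to intersect the pointer ray from the fingerprint collection device with a rectangular authentication area in the scene. This is purely an illustrative geometry sketch; the patent does not prescribe this particular math:

```python
def pointer_hits_area(origin, direction, area):
    """Does a pointer ray hit an axis-aligned rectangular area?

    area: (x_min, x_max, y_min, y_max, z) - a rectangle at depth z in the
    VR scene; origin and direction are 3-vectors (tuples of floats).
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        return False                     # ray parallel to the area's plane
    t = (area[4] - oz) / dz              # parametric distance to the plane
    if t <= 0:
        return False                     # area is behind the pointer
    x, y = ox + t * dx, oy + t * dy      # intersection point on the plane
    return area[0] <= x <= area[1] and area[2] <= y <= area[3]
```

When the test returns true, the prompt information of step S2034 would be displayed.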
  • the embodiment may further include the following steps:
  • Step S212: when the authentication result information indicates that the fingerprint information to be authenticated passes the authentication, execute the resource transfer event corresponding to the authentication area in the virtual reality scenario.
  • As noted above, content that requires user authentication may be displayed in the authentication area.
  • For example, if the content requiring user authentication displayed in the authentication area is the amount the user needs to pay and the authentication content required for the payment process, then after the authentication device performs authentication and the obtained authentication result information indicates that the fingerprint information to be authenticated passes the authentication, this embodiment may execute the resource transfer event corresponding to the authentication area in the virtual reality scenario, that is, transfer the amount the user needs to pay from the user's account.
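A hedged sketch of the resource transfer event in step S212: only when the authentication result indicates a pass is the payment amount deducted from the user's account. The account structure and the insufficient-balance error are illustrative assumptions, not details from the patent:

```python
def execute_resource_transfer(auth_passed, account, amount):
    """Deduct `amount` from `account` only if authentication passed."""
    if not auth_passed:
        return account            # no transfer without passing authentication
    if account["balance"] < amount:
        raise ValueError("insufficient balance")
    # Return an updated copy rather than mutating the caller's account.
    return dict(account, balance=account["balance"] - amount)
```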
  • In this embodiment, the authentication process in the virtual reality scenario can be completed by using the fingerprint collection device and the authentication device in the real scene, which both reduces the resource consumption of an authentication system in the virtual reality scenario and improves the efficiency of authentication in the virtual reality scenario.
  • Sending the fingerprint information to be authenticated to the authentication device in the real scene in step S206 may include the following steps S2062 and S2064, specifically:
  • Step S2062: the first timestamp is sent to the authentication device from the virtual reality scenario, where the first timestamp is the time point at which the fingerprint collection device collects the fingerprint information to be authenticated.
  • the fingerprint collection device may collect the fingerprint information input by the user, and may also record the time when the fingerprint information is collected, and the time may exist in the form of a time stamp.
  • The first timestamp in this embodiment may be the time point at which the fingerprint collection device collects the fingerprint information to be authenticated. The fingerprint collection device records the first timestamp while collecting the fingerprint information input by the user, and sends the collected fingerprint information together with the first timestamp to the virtual reality device over the communication connection between the fingerprint collection device and the virtual reality device.
  • The virtual reality device may then send the first timestamp to the authentication device, over the communication connection between the virtual reality device and the authentication device, for authentication.
  • Step S2064: the fingerprint collection device sends, via the communication terminal device, the fingerprint information to be authenticated and the second timestamp to the authentication device, where the second timestamp is the time point at which the fingerprint collection device collects the fingerprint information to be authenticated, and the fingerprint collection device performs data transmission with the communication terminal device through a connection established between them. The first timestamp and the second timestamp are used by the authentication device to authenticate the fingerprint information to be authenticated.
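The two transmission paths of steps S2062 and S2064 can be sketched as follows: the same collection timestamp leaves the fingerprint device twice, once through the VR device (as the first timestamp) and once through the communication terminal (as the second timestamp, together with the fingerprint itself). The function and field names are illustrative:

```python
import time

def fingerprint_device_collect(fingerprint: bytes):
    """Prepare the two payloads sent along the two paths."""
    ts = time.time()                    # single moment of collection
    # Path 1 (S2062): fingerprint device -> VR device -> auth device
    via_vr_device = {"first_ts": ts}
    # Path 2 (S2064): fingerprint device -> communication terminal -> auth device
    via_terminal = {"fingerprint": fingerprint, "second_ts": ts}
    return via_vr_device, via_terminal
```

Because both payloads carry the same collection time, the authentication device can later cross-check them.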
  • the type of the communication terminal device is not specifically limited in the embodiment of the present invention.
  • the communication terminal device may be a device such as a mobile phone or a computer.
  • the fingerprint collection device may be in communication with the communication terminal device.
  • the communication connection may be a wired connection or a wireless connection.
  • this embodiment preferably uses a wireless connection between the fingerprint collection device and the communication terminal device, such as Bluetooth, WiFi, and the like.
  • the fingerprint collection device can use the wireless connection with the communication terminal device to send the collected fingerprint information to be authenticated and the second timestamp to the communication terminal device.
  • the second timestamp can be recorded by the fingerprint collection device at the time of collection.
  • the communication terminal device may be in communication with the authentication device.
  • the communication connection may be a wired connection or a wireless connection, which is not specifically limited in the present invention.
  • the communication terminal device may use the communication connection between the communication terminal device and the authentication device to send the to-be-authenticated fingerprint information and the second timestamp to the authentication device for authentication by the authentication device.
  • the fingerprint information pre-stored in the authentication device may likewise be collected by the fingerprint collection device and reported to the authentication device through the communication terminal device.
  • the time point at which that fingerprint information was collected, that is, its timestamp, is reported to the authentication device together with the fingerprint information, and the correspondence relationship among the fingerprint information, the timestamp, and the user may be stored in the authentication device.
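The stored correspondence among fingerprint information, timestamp, and user can be modeled as a simple enrollment record. This is an illustrative sketch; the field names and the dictionary layout are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class EnrollmentRecord:
    # Correspondence stored in the authentication device: the user, the
    # pre-registered fingerprint, and the timestamp recorded when that
    # fingerprint information was collected.
    user_id: str
    fingerprint: bytes
    enrolled_at: float

# Hypothetical fingerprint database kept by the authentication device,
# keyed by user identifier.
fingerprint_db = {
    "alice": EnrollmentRecord("alice", b"alice-fingerprint", 1700000000.0),
}
```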
  • the authentication device may further authenticate the first timestamp and the second timestamp, that is, check whether the first timestamp and the second timestamp match, while authenticating the fingerprint information to be authenticated, thereby improving the authentication accuracy of the authentication device.
  • the step S208 of receiving the authentication result information sent by the authentication device in the virtual reality scenario may include the following steps:
  • Step S2082 After the authentication device determines that the first timestamp matches the second timestamp and that fingerprint information matching the fingerprint information to be authenticated exists in the fingerprint database, the first authentication result information sent by the authentication device is received in the virtual reality scenario,
  • where the first authentication result information is used to indicate that the fingerprint information to be authenticated passes the authentication;
  • Step S2084 If the authentication device determines that the first timestamp does not match the second timestamp, and/or that fingerprint information matching the fingerprint information to be authenticated does not exist in the fingerprint database, the second authentication result information sent by the authentication device is received in the virtual reality scenario, where the second authentication result information is used to indicate that the fingerprint information to be authenticated fails the authentication.
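The decision of steps S2082 and S2084 amounts to a conjunction of the timestamp check and the fingerprint-database lookup. The sketch below is a hedged reading: the disclosure does not define how timestamps "match", so the tolerance value and all function names here are assumptions.

```python
def timestamps_match(first_ts, second_ts, tolerance=1.0):
    # Assumption: the two recorded collection times "match" when they
    # agree within a small tolerance, since both stamp the same
    # fingerprint collection event.
    return abs(first_ts - second_ts) <= tolerance

def authenticate(first_ts, second_ts, fingerprint, fingerprint_db):
    # Step S2082: authentication passes only when the timestamps match
    # AND matching fingerprint information exists in the database.
    if timestamps_match(first_ts, second_ts) and fingerprint in fingerprint_db:
        return "first authentication result: passed"
    # Step S2084: timestamp mismatch and/or no matching fingerprint.
    return "second authentication result: failed"
```

Any single failing condition is enough to produce the second (failure) result, which matches the "and/or" wording of step S2084.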
  • the authentication process of the authentication device may include: determining whether the first timestamp and the second timestamp match; and determining whether there is fingerprint information matching the fingerprint information to be authenticated in the fingerprint database, wherein the fingerprint database may be a database used by the authentication device to store fingerprint information.
  • if the authentication device determines that the first timestamp matches the second timestamp and that matching fingerprint information exists in the fingerprint database, the authentication device may send the first authentication result information to the virtual reality device,
  • where the first authentication result information may be used to indicate that the fingerprint information to be authenticated passes the authentication; if the authentication device determines that the first timestamp does not match the second timestamp, or that fingerprint information matching the fingerprint information to be authenticated does not exist in the fingerprint database, or both, the authentication device may send the second authentication result information to the virtual reality device, where the second authentication result information may be used to indicate that the fingerprint information to be authenticated fails the authentication.
  • while determining whether fingerprint information matching the fingerprint information to be authenticated exists in the fingerprint database, the authentication device also authenticates whether the first timestamp and the second timestamp match; this multiple authentication mechanism can greatly improve the authentication accuracy of the authentication device. Moreover, after the authentication device determines the authentication result, by feeding it back to the virtual reality device in a timely manner and displaying it in the virtual reality scenario, the user can intuitively and clearly learn the authentication result, thereby improving the user experience.
  • the present invention also provides a preferred embodiment, which is described by taking payment authentication in a virtual reality scenario as an example. It should be noted that the application scenario of the payment authentication in the virtual reality scenario is only a preferred embodiment of the present invention, and the embodiment of the present invention can also be applied to scenarios such as rights authentication in a virtual reality scenario.
  • FIG. 3 is a flowchart of a virtual reality scenario based authentication method according to a preferred embodiment of the present invention. As shown in FIG. 3, the preferred embodiment may include the following steps:
  • Step S301 displaying an authentication interface in the virtual reality scene.
  • the authentication interface may be used to instruct the user to perform payment authentication.
  • the authentication interface in the virtual reality scenario can be as shown in FIG. 4 .
  • the authentication interface can display the content of the payment authentication.
  • the authentication interface displays “100 yuan to be paid, please enter the payment password”.
  • other contents may also be presented in the virtual reality scene shown in FIG. 4, for example, the computer shown in FIG. 4; FIG. 4 is only a schematic diagram of the virtual reality scene and does not show all the contents in the virtual reality scene.
  • Step S302 The indication identifier generated by the fingerprint collection device in the virtual reality scenario is directed to the authentication area in the authentication interface.
  • this step indicates that the authentication content shown in the authentication area is to be authenticated.
  • the structure of the fingerprint collection device in the real scene may be as shown in FIG. 5; a fingerprint collection area and corresponding function operation areas, such as game function buttons, may be disposed on the fingerprint collection device.
  • the user can control, through the fingerprint collection device, the indication identifier so that it points to the authentication area, as shown in FIG. 6, wherein the indicator line in FIG. 6 is used to indicate that the indication identifier generated by the fingerprint collection device in the virtual reality scene
  • points to the authentication area in the virtual reality scene.
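Checking whether the indication identifier points into the authentication area can be reduced to a hit test against the region that the authentication interface occupies. The sketch below is illustrative only: representing the authentication area as an axis-aligned rectangle in scene coordinates is an assumption, not something the disclosure specifies.

```python
def indicator_points_at(pointer_xy, area):
    # area: (x_min, y_min, x_max, y_max) bounds of the authentication
    # area as rendered in the virtual reality scene; the indication
    # identifier "points to" the area when its projected position
    # falls inside those bounds.
    x, y = pointer_xy
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

AUTH_AREA = (100, 200, 300, 260)  # hypothetical interface bounds
```

A position on the area's boundary counts as pointing at it; the prompt to input the fingerprint would only be shown when this test succeeds.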
  • Step S303 prompting the user to input the fingerprint to be authenticated in the fingerprint collection device in the virtual reality scenario.
  • the prompt information may be displayed in the virtual reality scenario, and the prompt information may be used to prompt the user to input the fingerprint information to be authenticated in the fingerprint collection device.
  • Step S304 the user inputs the fingerprint information to be authenticated in the fingerprint collection device.
  • the fingerprint collection device may also record the time when the fingerprint information to be authenticated is collected, and the time is recorded in the form of a time stamp.
  • the virtual reality device can receive the fingerprint information to be authenticated and the timestamp collected by the fingerprint collection device, and send the same to the authentication device in the real scene.
  • the authentication device in the real scene may be an Alipay, a bank payment platform, or the like.
  • the fingerprint collection device may send the collected fingerprint information to be authenticated and the time stamp to the communication terminal device having a communication connection with the fingerprint collection device.
  • the communication terminal device may be a mobile phone, a computer, or the like, where the communication connection may be Bluetooth, WiFi, or the like.
  • Step S307 The communication terminal device may send the received fingerprint information to be authenticated and the timestamp to the authentication device in the real scene.
  • this step enables the authentication device to authenticate, according to this information, the fingerprint information to be authenticated and the timestamp sent by the virtual reality device.
  • Step S308 The authentication device in the real scene authenticates the fingerprint information to be authenticated.
  • the authentication device may authenticate the fingerprint information to be authenticated and the time stamp sent by the virtual reality device according to the fingerprint information to be authenticated and the time stamp sent by the communication terminal.
  • the authentication process of the authentication device may include: determining whether the timestamp sent by the virtual reality device matches the timestamp sent by the communication terminal device; and determining whether fingerprint information in the fingerprint database is the same as the fingerprint information to be authenticated sent by the virtual reality device.
  • the fingerprint information stored in the fingerprint database may be the fingerprint information collected by the fingerprint collection device in the real scene and sent by the communication terminal device. If any one of the foregoing authentication steps is not satisfied, the authentication device determines that the fingerprint information to be authenticated fails the authentication; if all of the foregoing authentication steps are satisfied, the authentication device determines that the fingerprint information to be authenticated passes the authentication.
  • Step S309 outputting the authentication result information of the authentication device in the virtual reality scenario.
  • the authentication result information indicates that the authentication is passed or not passed.
  • displaying the authentication result information in the virtual reality scene may be as shown in FIG. 7, where the authentication result information shown in FIG. 7 indicates that the authentication is passed.
  • the preferred embodiment does not need to establish an authentication system in a virtual reality scenario; instead, the virtual reality device, such as a helmet-mounted display or light valve glasses, performs data interaction with the fingerprint collection device and the authentication device in the real scene, so as to implement payment authentication in the virtual reality scenario.
  • FIG. 8 is a schematic diagram of a data interaction process between a virtual reality scene and a real scene according to a preferred embodiment of the present invention.
  • the data interaction process among the virtual reality device, the fingerprint collection device, and the authentication device includes: the indication identifier generated by the fingerprint collection device in the virtual reality scene points to the authentication area in the virtual reality scene; prompt information is displayed in the virtual reality scene to prompt the user to input the information to be authenticated on the fingerprint collection device; and the fingerprint collection device collects the fingerprint information to be authenticated.
  • the collected fingerprint information to be authenticated, together with the time point of its collection in the form of a timestamp, is sent to the virtual reality device; the virtual reality device may then send the received fingerprint information to be authenticated and the corresponding timestamp
  • to the authentication device in the real scene for authentication.
  • the fingerprint collection device may also send the collected fingerprint information and the corresponding timestamp to a communication terminal device having a communication connection with the fingerprint collection device, such as a mobile phone or a computer; the communication terminal device may send the fingerprint information collected by the fingerprint collection device and the corresponding timestamp to the authentication device, and the authentication device stores a fingerprint database.
  • the authentication device may authenticate the fingerprint information to be authenticated according to the information stored in the fingerprint database and feed the authentication result information back to the virtual reality device; if the authentication passes, authentication pass information is returned to the virtual reality device; if the authentication fails, indication information indicating that the user fails the authentication is output in the virtual reality scenario.
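Putting the pieces together, the data interaction of FIG. 8 can be simulated end to end. This is a toy sketch under stated assumptions: all names are illustrative, the two channels are collapsed into in-process function calls, and the timestamp tolerance is an assumed value.

```python
import time

# Hypothetical fingerprint database pre-stored in the authentication device.
FINGERPRINT_DB = {b"enrolled-fingerprint"}

def authentication_device(vr_message, terminal_message):
    # The authentication device compares the timestamp from the virtual
    # reality device with the one from the communication terminal and
    # looks the fingerprint up in its database.
    fp_vr, ts_vr = vr_message
    fp_term, ts_term = terminal_message
    ok = (abs(ts_vr - ts_term) <= 1.0
          and fp_vr == fp_term
          and fp_vr in FINGERPRINT_DB)
    return "authentication passed" if ok else "authentication failed"

def payment_flow():
    # 1. The fingerprint collection device captures the fingerprint and
    #    stamps the time point of collection.
    fingerprint, ts = b"enrolled-fingerprint", time.time()
    # 2. The virtual reality device forwards (fingerprint, first timestamp).
    vr_message = (fingerprint, ts)
    # 3. The communication terminal forwards (fingerprint, second timestamp).
    terminal_message = (fingerprint, ts)
    # 4. The authentication device authenticates; the result is then
    #    displayed back in the virtual reality scene.
    return authentication_device(vr_message, terminal_message)

result = payment_flow()
```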
  • the data interaction process between the virtual reality device and the fingerprint collection device and authentication device in the real scene can achieve the purpose of implementing payment authentication in a virtual reality scenario without establishing a payment authentication system in the virtual reality scenario.
  • this solves the technical problem that, when the related technology pays in a virtual reality scenario, it is necessary to establish a payment authentication system in the virtual reality scenario, which results in low payment efficiency in the virtual reality scenario, thereby improving the efficiency of payment in the virtual reality scenario.
  • the method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including a plurality of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the various embodiments of the present invention.
  • FIG. 9 is a schematic diagram of an optional virtual reality scenario-based authentication device according to an embodiment of the present invention. As shown in FIG. 9, the device may include:
  • the first receiving unit 22 is configured to receive the authentication request in the virtual reality scenario
  • the collecting unit 24 is configured to collect the fingerprint information to be authenticated by the fingerprint collecting device in the real scene
  • the sending unit 26 is configured to send the fingerprint information to be authenticated to the authentication device in the real scene;
  • the second receiving unit 28 is configured to receive the authentication result information sent by the authentication device in the virtual reality scenario, where the authentication result information is used to indicate that the fingerprint information to be authenticated passes or fails the authentication.
  • the first receiving unit 22 in this embodiment may be used to perform step S202 in Embodiment 1 of the present application.
  • the collecting unit 24 in this embodiment may be used to perform step S204 in Embodiment 1 of the present application.
  • the sending unit 26 in this embodiment may be used to perform step S206 in the first embodiment of the present application.
  • the second receiving unit 28 in this embodiment may be used to perform step S208 in Embodiment 1 of the present application.
  • the examples and application scenarios implemented by the above modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the above Embodiment 1. It should be noted that the above modules can operate as a part of the apparatus in the hardware environment shown in FIG. 1, and can be implemented by software or by hardware.
  • the apparatus of this embodiment may further include: a determining unit 232, configured to: after the authentication request is received in the virtual reality scenario and before the fingerprint information to be authenticated is collected by the fingerprint collection device in the real scene, determine whether the indication identifier points to the authentication area in the virtual reality scenario, where the indication identifier is generated by the fingerprint collection device in the virtual reality scenario; and a first display unit 234, configured to display prompt information in the virtual reality scene when it is determined that the indication identifier points to the authentication area, wherein the prompt information is used to prompt input of the fingerprint information to be authenticated.
  • the determining unit 232 in this embodiment may be used to perform step S2032 in the first embodiment of the present application.
  • the first display unit 234 in this embodiment may be used to perform step S2034 in Embodiment 1 of the present application.
  • the examples and application scenarios implemented by the above modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the above Embodiment 1. It should be noted that the foregoing modules may operate as part of the apparatus in the hardware environment shown in FIG. 1, and may be implemented by software or by hardware.
  • the apparatus of this embodiment may further include: an executing unit 212, configured to: after the authentication result information sent by the authentication device is received in the virtual reality scenario, execute a resource transfer event corresponding to the authentication area in the virtual reality scenario when the authentication result information indicates that the fingerprint information to be authenticated passes the authentication.
  • the executing unit 212 in this embodiment may be used to perform step S212 in Embodiment 1 of the present application.
  • the examples and application scenarios implemented by the above modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the above Embodiment 1. It should be noted that the foregoing modules may operate as part of the apparatus in the hardware environment shown in FIG. 1, and may be implemented by software or by hardware.
  • the sending unit 26 may include: a first sending module 262, configured to send the first timestamp from the virtual reality scenario to the authentication device, where the first timestamp is the time point at which the fingerprint collection device collects the fingerprint information to be authenticated; and a second sending module 264, configured to send the fingerprint information to be authenticated and the second timestamp to
  • the authentication device through the fingerprint collection device and the communication terminal device, where the second timestamp is the time point at which the fingerprint collection device collects the fingerprint information to be authenticated, and the fingerprint collection device performs data transmission with the communication terminal device through a connection established with the communication terminal device; wherein the first timestamp and the second timestamp are used by the authentication device to authenticate the fingerprint information to be authenticated.
  • the first sending module 262 in this embodiment may be used to perform step S2062 in the first embodiment of the present application.
  • the second sending module 264 in this embodiment may be used to perform step S2064 in the first embodiment of the present application.
  • modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the contents disclosed in the above embodiment 1. It should be noted that the foregoing module may be implemented in a hardware environment as shown in FIG. 1 as part of the device, and may be implemented by software or by hardware.
  • the second receiving unit 28 may include: a first receiving module 282, configured to receive, in the virtual reality scenario, the first authentication result information sent by the authentication device after the authentication device determines that the first timestamp matches the second timestamp and that fingerprint information
  • matching the fingerprint information to be authenticated exists in the fingerprint database, where the first authentication result information is used to indicate that the fingerprint information to be authenticated passes the authentication;
  • and a second receiving module 284, configured to receive, in the virtual reality scenario, the second authentication result information sent by the authentication device if the authentication device determines that the first timestamp does not match the second timestamp and/or that fingerprint information matching the fingerprint information to be authenticated does not exist in the fingerprint database,
  • where the second authentication result information is used to indicate that the fingerprint information to be authenticated fails the authentication.
  • the first receiving module 282 in this embodiment may be used to perform step S2082 in the first embodiment of the present application.
  • the second receiving module 284 in this embodiment may be used to perform step S2084 in the first embodiment of the present application.
  • the examples and application scenarios implemented by the above modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the above Embodiment 1. It should be noted that the foregoing modules may operate as part of the apparatus in the hardware environment shown in FIG. 1, and may be implemented by software or by hardware.
  • the apparatus of this embodiment may further include: a second display unit 210, configured to display the authentication result information in the virtual reality scenario after the authentication result information sent by the authentication device is received in the virtual reality scenario.
  • the second display unit 210 in this embodiment may be used to perform step S210 in Embodiment 1 of the present application.
  • the examples and application scenarios implemented by the above modules are the same as those of the corresponding steps, but are not limited to the contents disclosed in the above Embodiment 1. It should be noted that the foregoing modules may operate as part of the apparatus in the hardware environment shown in FIG. 1, and may be implemented by software or by hardware.
  • the purpose of payment authentication can thus be achieved without establishing a payment authentication system in the virtual reality scenario, which solves the problem in the related technology that a payment authentication system needs to be established in the virtual reality scenario when paying in a virtual reality scenario.
  • a server or a terminal for implementing the above-described virtual reality scenario-based authentication method is further provided.
  • the server or the terminal in this embodiment may be applied to the virtual reality device of the present invention, wherein the virtual reality device may present a virtual reality scenario, and the virtual reality device may be used to perform the steps in Embodiment 1 to implement authentication in virtual reality scenarios.
  • FIG. 15 is a structural block diagram of a terminal according to an embodiment of the present invention.
  • the terminal may include: one or more (only one shown in the figure) processor 201, memory 203, and transmission device 205.
  • the terminal may further include an input and output device 207.
  • the memory 203 can be used to store software programs and modules, such as the program instructions/modules corresponding to the virtual reality scenario-based authentication method and apparatus in the embodiments of the present invention.
  • by running the software programs and modules stored in the memory 203, the processor 201
  • executes various functional applications and data processing, that is, implements the above-described virtual reality scenario-based authentication method.
  • Memory 203 can include high speed random access memory, and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 203 can further include memory remotely located relative to processor 201, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the above described transmission device 205 is used to receive or transmit data via a network, and can also be used for data transmission between the processor and the memory. Specific examples of the above network may include a wired network and a wireless network.
  • the transmission device 205 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network.
  • the transmission device 205 is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
  • the memory 203 is used to store an application.
  • the processor 201 can call the application stored in the memory 203 through the transmission device 205 to perform the following steps: receiving an authentication request in the virtual reality scenario; collecting the fingerprint information to be authenticated by the fingerprint collection device in the real scene; sending the fingerprint information to be authenticated to the authentication device in the real scene; and receiving, in the virtual reality scenario, the authentication result information sent by the authentication device, where the authentication result information is used to indicate that the fingerprint information to be authenticated passes or fails the authentication.
  • the processor 201 is further configured to: after receiving the authentication request in the virtual reality scenario, and before collecting the to-be-authenticated fingerprint information by the fingerprint collection device in the real scene, determining whether the indication identifier points to the authentication in the virtual reality scenario An area, where the indication identifier is generated by the fingerprint collection device in the virtual reality scenario; when it is determined that the indication identifier points to the authentication area, the prompt information is displayed in the virtual reality scene, wherein the prompt information is used to prompt input of the fingerprint information to be authenticated.
  • the processor 201 is further configured to: after receiving the authentication result information sent by the authentication device in the virtual reality scenario, when the authentication result information indicates that the fingerprint information to be authenticated passes the authentication, perform the corresponding to the authentication area in the virtual reality scenario. Resource transfer event.
  • the processor 201 is further configured to perform the following steps: sending the first timestamp from the virtual reality scenario to the authentication device, where the first timestamp is the time point at which the fingerprint collection device collects the fingerprint information to be authenticated; and sending, through the fingerprint collection device and
  • the communication terminal device, the fingerprint information to be authenticated and the second timestamp to the authentication device, where the second timestamp is the time point at which the fingerprint collection device collects the fingerprint information to be authenticated, and the fingerprint collection device performs data transmission with the communication terminal device through the connection established between them; wherein the first timestamp and the second timestamp are used by the authentication device to authenticate the fingerprint information to be authenticated.
  • the processor 201 is further configured to perform the following steps: when the authentication device determines that the first timestamp matches the second timestamp
  • and that fingerprint information matching the fingerprint information to be authenticated exists in the fingerprint database, receiving, in the virtual reality scenario, the first authentication result information sent by the authentication device, where the first authentication result information is used to indicate that
  • the fingerprint information to be authenticated passes the authentication; if the authentication device determines that the first timestamp does not match the second timestamp and/or that fingerprint information matching the fingerprint information to be authenticated does not exist in the fingerprint database, receiving, in the virtual reality scenario,
  • the second authentication result information sent by the authentication device, where the second authentication result information is used to indicate that the fingerprint information to be authenticated fails the authentication.
  • the processor 201 is further configured to: after receiving the authentication result information sent by the authentication device in the virtual reality scenario, displaying the authentication result information in the virtual reality scenario.
  • a scheme for authentication based on a virtual reality scenario is provided.
  • the fingerprint information to be authenticated collected by the fingerprint collection device in the real scene is sent to the authentication device in the real scene for authentication, thereby eliminating the need to establish a payment authentication system in the virtual reality scenario.
  • the purpose of payment authentication can still be realized, thereby solving the technical problem in the related technology that a payment authentication system needs to be established in the virtual reality scenario when paying in a virtual reality scenario, which results in low payment efficiency in the virtual reality scenario, and thereby achieving the technical effect of improving the efficiency of payment in the virtual reality scene.
  • the terminal may be a terminal device capable of presenting a virtual reality scene, such as a helmet display or light valve glasses.
  • Fig. 15 does not limit the structure of the above electronic device.
  • the terminal may also include more or fewer components (such as a network interface, display device, etc.) than shown in FIG. 15, or have a different configuration from that shown in FIG. 15.
  • Embodiments of the present invention also provide a storage medium.
  • the storage medium can be used to execute program code of a virtual reality scene based authentication method.
  • the storage medium in this embodiment can be applied to the virtual reality device of the present invention, such as a helmet display, light valve glasses, and the like.
  • the virtual reality device may perform the virtual reality scenario-based authentication method of the embodiment of the present invention by using the storage medium of the embodiment to implement authentication in the virtual reality scenario.
  • the storage medium is arranged to store program code for performing the following steps:
  • the fingerprint information to be authenticated is sent to the authentication device in the real scene;
  • the authentication result information sent by the authentication device is received in the virtual reality scenario, where the authentication result information is used to indicate that the fingerprint information to be authenticated passes the authentication or fails the authentication.
  • the storage medium is further configured to store program code for performing the following steps: after receiving the authentication request in the virtual reality scenario and before collecting the fingerprint information to be authenticated by the fingerprint collection device in the real scene, determining whether the indication identifier is directed to the authentication area in the virtual reality scenario, where the indication identifier is generated by the fingerprint collection device in the virtual reality scenario; when it is determined that the indication identifier is directed to the authentication area, displaying prompt information in the virtual reality scenario, wherein the prompt information is used to prompt input of the fingerprint information to be authenticated.
  • the storage medium is further configured to store program code for performing the following steps: after receiving the authentication result information sent by the authentication device in the virtual reality scenario, when the authentication result information indicates that the fingerprint information to be authenticated passes the authentication, A resource transfer event corresponding to the authentication area is executed in the virtual reality scenario.
  • the storage medium is further configured to store program code for performing the following steps: sending the first timestamp from the virtual reality scenario to the authentication device, wherein the first timestamp is the time point, recorded in the virtual reality scenario, at which the fingerprint collection device collects the fingerprint information to be authenticated; sending the second timestamp to the authentication device through the fingerprint collection device and the communication terminal device, wherein the second timestamp is the time point, recorded by the fingerprint collection device, at which the fingerprint collection device collects the fingerprint information to be authenticated, and the fingerprint collection device performs data transmission with the communication terminal device through a connection established with the communication terminal device; wherein the first timestamp and the second timestamp are used to authenticate the fingerprint information to be authenticated.
  • the storage medium is further configured to store program code for performing the following steps: when the authentication device determines that the first timestamp matches the second timestamp and that fingerprint information matching the fingerprint information to be authenticated exists in the fingerprint database, the first authentication result information sent by the authentication device is received in the virtual reality scenario, where the first authentication result information is used to indicate that the fingerprint information to be authenticated passes the authentication; if the authentication device determines that the first timestamp does not match the second timestamp, and/or the fingerprint information that matches the fingerprint information to be authenticated does not exist in the fingerprint database, the second authentication result information sent by the authentication device is received in the virtual reality scenario, where the second authentication result information is used to indicate that the fingerprint information to be authenticated fails to pass the authentication.
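The two-branch check described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the tolerance window used for timestamp "matching", the contents of `FINGERPRINT_DB`, and the function name are all assumptions, since the text specifies only the comparison outcomes.

```python
from datetime import datetime, timedelta

# Hypothetical fingerprint database: enrolled fingerprint templates.
FINGERPRINT_DB = {"alice": "template-a", "bob": "template-b"}

def authenticate(first_ts, second_ts, fingerprint, tolerance=timedelta(seconds=2)):
    """Return the first or second authentication result per the scheme above.

    'Match' between the two timestamps is assumed to mean 'within a small
    tolerance'; the text does not specify the exact comparison.
    """
    timestamps_match = abs(first_ts - second_ts) <= tolerance
    fingerprint_known = fingerprint in FINGERPRINT_DB.values()
    if timestamps_match and fingerprint_known:
        return "first_authentication_result"   # fingerprint passes authentication
    return "second_authentication_result"      # fingerprint fails authentication
```

Both conditions must hold for the first result; a timestamp mismatch and/or an unknown fingerprint yields the second result.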
  • the storage medium is further configured to store program code for performing the following steps: after receiving the authentication result information sent by the authentication device in the virtual reality scenario, displaying the authentication result information in the virtual reality scenario.
  • the method further includes determining whether the indication identifier points to the authentication area in the virtual reality scenario.
  • the indication identifier is generated by the fingerprint collection device in the virtual reality scenario. That is, when performing authentication based on the virtual reality scenario, the authentication area also needs to be selected via the indication identifier.
  • an embodiment of the present invention further provides a virtual object selection method for selecting a virtual object by operating a focus in a virtual reality scene.
  • the operation focus mentioned in the following embodiments refers to a point corresponding to the input device in the three-dimensional virtual environment, that is, an indication identifier in the authentication scheme based on the virtual reality scenario.
  • the specific representation of the virtual object in the virtual object selection scheme may be an authentication area in the authentication scheme based on the virtual reality scenario.
  • the selection scheme of the virtual object will be described in detail below through Embodiment 5 to Embodiment 7.
  • FIG. 16 is a schematic structural diagram of a virtual reality (VR) system according to an embodiment of the present invention.
  • the VR system includes a head mounted display 120, a processing unit 140, and an input device 160.
  • the head mounted display 120 is a display worn on the user's head for image display.
  • the head mounted display 120 generally includes a wearing portion and a display portion; the wearing portion includes temples and an elastic band for wearing the head mounted display 120 on the head of the user, and the display portion includes a left eye display and a right eye display.
  • the head mounted display 120 is capable of displaying different images on the left eye display and the right eye display to simulate a three dimensional virtual environment for the user.
  • the head mounted display 120 is electrically coupled to the processing unit 140 via a flexible circuit board or hardware interface.
  • Processing unit 140 is typically integrated within the interior of head mounted display 120.
  • the processing unit 140 is configured to model a three-dimensional virtual environment, generate a display screen corresponding to the three-dimensional virtual environment, generate a virtual object in the three-dimensional virtual environment, and the like.
  • the processing unit 140 receives an input signal of the input device 160 and generates a display screen of the head mounted display 120.
  • Processing unit 140 is typically implemented by electronics such as a processor, memory, image processing unit, etc. disposed on a circuit board.
  • the processing unit 140 further includes a motion sensor for capturing a user's head motion and changing the display screen in the head mounted display 120 according to the user's head motion.
  • the processing unit 140 is coupled to the input device 160 via a cable, Bluetooth connection, or WiFi connection.
  • the input device 160 is an input peripheral such as a somatosensory glove, a somatosensory handle, a remote controller, a treadmill, a mouse, a keyboard, and a human eye focusing device.
  • physical buttons and motion sensors are provided in the input device 160.
  • the physical button is used to receive an operation command triggered by the user, and the motion sensor is used to collect the spatial attitude of the input device 160.
  • the gravity acceleration sensor can detect the magnitude of acceleration in each direction (usually three axes), and the magnitude and direction of gravity can be detected at rest; the gyro sensor can detect the angular velocity in each direction and detect the rotational action of the input device 160.
  • Upon receiving an operation command, input device 160 transmits the operation command to processing unit 140; upon movement and/or rotation, input device 160 transmits movement data and/or rotation data to processing unit 140.
  • FIG. 17 is a flowchart of a method for selecting a virtual object according to an embodiment of the present invention. This embodiment is exemplified by applying the virtual object selection method to the VR system shown in FIG. 16. The method includes:
  • Step 1701 Determine a location of an operation focus in a three-dimensional virtual environment, where the operation focus is a point corresponding to the input device in the three-dimensional virtual environment, where the three-dimensional virtual environment includes a virtual object, and the virtual object includes a controlled point for accepting an operation;
  • a three-dimensional virtual environment is a virtual environment modeled by a processing unit.
  • the three-dimensional virtual environment can be a room, a building, a game scene, and the like.
  • the three-dimensional virtual environment includes an x-axis, a y-axis, and a z-axis, and any two of the x-axis, y-axis, and z-axis are perpendicular to each other.
  • the three-dimensional virtual environment includes a plurality of virtual objects, each of which has corresponding three-dimensional coordinates in the three-dimensional virtual environment.
  • Each virtual object has one or more controlled points.
  • For example, the virtual object is a box, and the center point of the box is the controlled point 32, as shown in FIG. 18A; for example, the virtual object is a toothbrush, and a point in the handle of the toothbrush is the controlled point 32, as shown in FIG. 18B; for example, the virtual object is a stick having a controlled point 32 in each end of the stick, as shown in FIG. 18C.
  • the operational focus is the point at which the input device corresponds in the three-dimensional virtual environment, and the operational focus is used to indicate the operational location of the input device in the three-dimensional virtual environment.
  • the operational focus has corresponding three-dimensional coordinates in the three-dimensional virtual environment.
  • Step 1702 determining a three-dimensional operation range of the operation focus by using the operation focus as a reference position
  • the three-dimensional operation range is a sphere-shaped range in which the operation focus is the center of the sphere.
  • For example, the three-dimensional operation range is a sphere with a radius of 20 cm centered on the operation focus.
  • When the operation focus moves, the three-dimensional operation range also moves in the three-dimensional virtual environment.
  • the control unit determines the three-dimensional operating range of the operating focus in real time with the operating focus as the reference position.
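As a minimal sketch, the point-in-sphere test implied by the spherical operation range can be written as follows; the 20 cm radius comes from the example above, and the coordinate tuples and function name are illustrative.

```python
import math

def in_spherical_range(focus, controlled_point, radius=0.20):
    """Return True if a controlled point lies inside the spherical
    three-dimensional operation range centered on the operation focus.
    Coordinates are (x, y, z) tuples in metres; radius defaults to 20 cm."""
    dx, dy, dz = (controlled_point[i] - focus[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius
```

When the operation focus moves, only `focus` changes; the same test is re-evaluated against each controlled point.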
  • Step 1703 When receiving the operation instruction, determine the virtual object whose controlled point is located in the three-dimensional operation range as the selected virtual object.
  • a controlled point is a point on a virtual object that is used to accept an operation.
  • the controlled points have corresponding three-dimensional coordinates in the three-dimensional virtual environment.
  • When the virtual object moves, the controlled point also moves.
  • the operation instruction is an instruction received by the input device.
  • the operation instruction includes: selecting an object instruction, picking up an object instruction, opening an object instruction, using an object instruction, tapping an object instruction, an attack instruction, and the like.
  • the type of operation of the operation instruction in this embodiment is not limited, and is determined according to a specific embodiment.
  • when the processing unit receives the operation instruction, the processing unit detects whether there is a controlled point of a virtual object in the three-dimensional operation range; when there is a controlled point of a virtual object in the three-dimensional operation range, the virtual object whose controlled point is located within the three-dimensional operation range is determined to be the selected virtual object.
  • the three-dimensional virtual environment 30 includes: a virtual table 31 and a virtual box 33.
  • the virtual box 33 is placed on the virtual table 31, and the control unit determines the spherical three-dimensional operation range 37 with the operation focus 35 as the center of the sphere.
  • When the control unit receives the open object command, the control unit detects whether there is a controlled point 32 of a virtual object within the three-dimensional operation range 37; when the controlled point 32 of the virtual box 33 is within the three-dimensional operation range 37, the control unit determines the virtual box 33 as the selected virtual object.
  • when there is no controlled point of a virtual object in the three-dimensional operation range, the processing unit does not respond to the operation instruction; when there is a controlled point of one virtual object in the three-dimensional operation range, the processing unit directly treats the virtual object as the selected virtual object; when there are controlled points of at least two virtual objects in the three-dimensional operation range, the processing unit automatically selects one virtual object as the selected virtual object from the at least two virtual objects.
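The three branches just described can be sketched as a small dispatch function; the callback name `choose` stands in for the attribute-based policies introduced in the later embodiments and is an illustrative assumption.

```python
def select_virtual_object(objects_in_range, choose):
    """Dispatch on how many virtual objects have a controlled point in the
    three-dimensional operation range: none -> ignore the instruction,
    one -> select it directly, several -> delegate to a selection policy."""
    if not objects_in_range:
        return None                    # no response to the operation instruction
    if len(objects_in_range) == 1:
        return objects_in_range[0]     # directly the selected virtual object
    return choose(objects_in_range)    # automatic selection among candidates
```

The later embodiments (object type, priority, distance) are all different implementations of the `choose` step.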
  • the virtual object selection method determines the three-dimensional operation range by using the operation focus as the reference position; the three-dimensional operation range only needs to be determined for a single operation focus, and a response range does not need to be set for the controlled point of each virtual object, which avoids the overhead of setting a response range based on the controlled point of every virtual object.
  • step 1703 can alternatively be implemented as step 1703a and step 1703b, as shown in FIG.
  • Step 1703a when receiving the operation instruction, determining a virtual object whose controlled point is located within a three-dimensional operation range;
  • Upon receiving the operation command, the processing unit detects whether there is a controlled point of a virtual object within the three-dimensional operation range.
  • the detection process can be implemented by an intersection operation or a collision detection operation between the three-dimensional operation range and the controlled point.
  • step 1703b when there are at least two virtual objects whose controlled points are in the three-dimensional operation range, the selected virtual object is determined according to the attribute information of the virtual objects.
  • the processing unit automatically determines a virtual object as the selected virtual object from two or more virtual objects according to the attribute information of the virtual object.
  • the attribute information of the virtual object includes at least one of: the object type of the virtual object, the priority of the virtual object, and the distance between the controlled point of the virtual object and the operation focus.
  • the virtual object selection method provided in this embodiment automatically selects the selected virtual object through the processing unit when there are at least two virtual objects, thereby reducing the user's operation steps and time cost while balancing the user's own willingness to choose with the convenience of automatically selecting virtual objects.
  • the attribute information of the virtual object includes at least one of three kinds of information.
  • when the attribute information of the virtual object includes the object type of the virtual object, refer to the embodiment shown in FIG. 20A below; when the attribute information of the virtual object includes the priority of the virtual object, refer to the embodiment shown in FIG. 21A below; when the attribute information includes the distance between the controlled point of the virtual object and the operation focus, refer to the embodiment shown in FIG. 22A below.
  • FIG. 20A is a flowchart of a method for selecting a virtual object according to an embodiment of the present invention. This embodiment is exemplified by applying the virtual object selection method to the VR system shown in FIG. 16. The method includes:
  • Step 501 Determine a location of an operation focus in a three-dimensional virtual environment, where the operation focus is a point corresponding to the input device in the three-dimensional virtual environment, where the three-dimensional virtual environment includes a virtual object, and the virtual object includes a controlled point for accepting an operation;
  • After the VR system starts running, the processing unit models a three-dimensional virtual environment, and the input device corresponds to an operation focus in the three-dimensional virtual environment.
  • the processing unit determines the location of the operational focus in the three-dimensional virtual environment based on the spatial location of the input device in the actual environment.
  • When the input device moves, the input device transmits movement data to the processing unit, and the processing unit moves the operation focus in the three-dimensional virtual environment according to the movement data.
  • When the operation focus is a directional operation focus, such as a hand-shaped operation focus or a gun-shaped operation focus, and the input device rotates, the input device sends rotation data to the processing unit, and the processing unit rotates the operation focus in the three-dimensional virtual environment according to the rotation data.
  • the movement data is used to indicate the movement distance of the input device on the x-axis, the y-axis, and/or the z-axis;
  • the rotation data is used to indicate the rotation angle of the input device on the x-axis, the y-axis, and/or the z-axis.
  • Step 502 Determine a three-dimensional operation range of the operation focus by using the operation focus as a reference position;
  • the processing unit determines a sphere centered on the operation focus as the three-dimensional operation range of the operation focus.
  • When the operation focus moves, the three-dimensional operation range of the operation focus also moves.
  • Step 503 Acquire an operation type corresponding to the operation instruction when receiving the operation instruction
  • the user triggers an operational command on the input device.
  • the triggering methods include, but are not limited to, pressing a physical button on the input device, making a predetermined gesture using the input device, shaking the input device, and the like.
  • the operation type of the operation instruction includes, but is not limited to, at least one of: selecting an object, picking up an object, opening an object, using an object, tapping an object, and attacking.
  • For example, when one physical button on the input device is pressed, the pick-up object command is triggered; when the physical button B on the input device is pressed, the open object command is triggered.
  • the input device sends an operation instruction to the processing unit, and after receiving the operation instruction, the processing unit determines the operation type of the operation instruction.
  • For example, the operation type of the operation instruction is opening an object.
  • Step 504 Determine a virtual object whose controlled point is located within a three-dimensional operation range
  • the processing unit performs an intersection calculation on the three-dimensional operation range and the controlled point of the virtual object, and when there is an intersection, determining that the controlled point of the virtual object is located in the three-dimensional operation range.
  • when there is no controlled point of a virtual object in the three-dimensional operation range, the processing unit does not respond to the operation instruction; when there is a controlled point of one virtual object in the three-dimensional operation range, the processing unit determines the virtual object as the selected virtual object.
  • Step 505 Determine, when the virtual objects are at least two, the object type corresponding to each virtual object;
  • Each virtual object corresponds to a respective object type.
  • the object types include, but are not limited to, walls, pillars, tables, chairs, cups, kettles, dishes, plants, people, rocks, and the like, and the division form of the object types is not limited in this embodiment.
  • Step 506 Determine, from the object types corresponding to the virtual objects, a target object type that matches the operation type, where the target object type is a type having the capability to respond to the operation instruction;
  • the box has the ability to respond to an open object command, but the spoon does not have the ability to respond to an open object command; for example, the cup has the ability to respond to the pick object command, but the wall does not have the ability to respond to the pick object command.
  • the pick object command is an instruction for picking up an object into a virtual hand.
  • a matching relationship between the operation type and the object type is stored in the processing unit.
  • the correspondence is schematically shown in the following table.
  • the processing unit determines a target object type that matches the operation type according to the pre-stored matching relationship, and the target object type is a type having the ability to respond to the operation instruction.
  • Step 507 determining a virtual object having a target object type as the selected virtual object
  • the three-dimensional virtual environment 50 includes a virtual kettle 51 and a virtual cup 52.
  • the control unit determines the spherical three-dimensional operation range 54 with the operation focus 53 as the center point.
  • When the control unit receives the open object command, it determines that the controlled point of the virtual kettle 51 and the controlled point of the virtual cup 52 are located in the three-dimensional operation range 54. The control unit determines that the object type of the virtual kettle 51 matches the operation type and that the object type of the virtual cup 52 does not match the operation type, so the control unit determines the virtual kettle 51 as the selected virtual object.
  • Step 508 controlling the selected virtual object to respond to the operation instruction.
  • the processing unit controls the virtual kettle 51 to respond to an open object command, such as controlling the virtual kettle 51 to exhibit an animation of opening the kettle lid.
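The flow of steps 503 to 508 can be sketched as a table lookup. The patent states that a matching relationship between operation types and object types is stored in the processing unit but does not reproduce the table, so the `MATCHING` entries below are illustrative assumptions.

```python
# Hypothetical matching relationship between operation types and the object
# types capable of responding to them (the patent's actual table is not shown).
MATCHING = {
    "open_object": {"box", "kettle", "door"},
    "pick_object": {"cup", "toothbrush", "stick"},
}

def select_by_object_type(candidates, operation_type):
    """Steps 505-507: among (name, object_type) candidates whose controlled
    points are in the operation range, keep those whose object type matches
    the operation type, and return the first match (or None if none match)."""
    target_types = MATCHING.get(operation_type, set())
    matched = [name for name, obj_type in candidates if obj_type in target_types]
    return matched[0] if matched else None
```

In the kettle-and-cup example, only the kettle's object type can respond to the open object command, so it is selected.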
  • In the virtual object selection method provided in this embodiment, a virtual object whose object type matches the operation type is automatically selected as the selected virtual object.
  • This not only respects the user's own willingness to choose, but also realizes automatic selection of virtual objects, reduces the number of selection operations the user performs among multiple virtual objects, and efficiently and intelligently helps the user select a suitable virtual object.
  • FIG. 21A a flowchart of a method for selecting a virtual object according to an embodiment of the present invention is shown. This embodiment is exemplified by applying the virtual object selection method to the VR system shown in FIG. 16. The method includes:
  • Step 601 Determine a position of the operation focus in the three-dimensional virtual environment, where the operation focus is a point corresponding to the input device in the three-dimensional virtual environment, and the three-dimensional virtual environment includes a virtual object, and the virtual object includes a controlled point for accepting the operation;
  • Step 602 Determine a three-dimensional operation range of the operation focus by using the operation focus as a reference position;
  • the processing unit determines an ellipsoid centered on the operation focus as the three-dimensional operation range of the operation focus.
  • When the operation focus moves, the three-dimensional operation range of the operation focus also moves.
  • Step 603 when receiving an operation instruction, determining a virtual object whose controlled point is located within a three-dimensional operation range;
  • the operational command is an open object command.
  • the processing unit performs an intersection calculation on the three-dimensional operation range and the controlled point of the virtual object, and when there is an intersection, determining that the controlled point of the virtual object is located in the three-dimensional operation range.
  • when there is no controlled point of a virtual object in the three-dimensional operation range, the processing unit does not respond to the operation instruction; when there is a controlled point of one virtual object in the three-dimensional operation range, the processing unit determines the virtual object as the selected virtual object.
  • Step 604 determining a priority of each virtual object when the virtual objects are at least two;
  • the priority of each virtual object is a preset priority.
  • the priority of each virtual object is positively correlated with the number of historical uses. The more historical usage, the higher the priority.
  • the three-dimensional virtual environment 60 includes a virtual round box 61 and a virtual square box 62.
  • the control unit determines the ellipsoidal three-dimensional operating range 64 with the operating focus 63 as the center point.
  • When the control unit receives the open object command, it determines that the controlled point of the virtual round box 61 and the controlled point of the virtual square box 62 are located in the three-dimensional operation range 64; the control unit determines that the virtual round box 61 has preset priority level 2 and the virtual square box 62 has preset priority level 1.
  • Step 605 determining the virtual object having the highest priority as the selected virtual object.
  • the control unit determines the virtual square box 62 as the selected virtual object.
  • Step 606 Control the selected virtual object to respond to the operation instruction.
  • the processing unit opens the virtual square box 62.
  • the virtual object selection method provided in this embodiment automatically selects a virtual object as the selected virtual object by using the priority level when there are two or more virtual objects in the three-dimensional operation range. It can not only satisfy the user's own willingness to choose, but also realize the automatic selection of virtual objects, reduce the number of selection operations of the user in multiple virtual objects, and efficiently and intelligently help the user to select suitable virtual objects.
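Steps 604 and 605 can be sketched as follows. Consistent with the example above, where the square box with preset priority level 1 wins over level 2, a smaller level number is assumed here to mean a higher priority; that numbering convention is an assumption, not stated in the text.

```python
def select_by_priority(candidates):
    """Among (name, priority_level) candidates whose controlled points are in
    the three-dimensional operation range, return the name of the object with
    the highest priority (assumed: smaller level number = higher priority)."""
    return min(candidates, key=lambda candidate: candidate[1])[0]
```

A usage-count-based priority (more historical uses, higher priority) would simply replace the preset levels with computed ones before calling this function.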
  • FIG. 22A a flowchart of a method for selecting a virtual object according to an embodiment of the present invention is shown. This embodiment is exemplified by applying the virtual object selection method to the VR system shown in FIG. 16. The method includes:
  • Step 701 Determine a position of an operation focus in a three-dimensional virtual environment, where the operation focus is a point corresponding to the input device in the three-dimensional virtual environment, where the three-dimensional virtual environment includes a virtual object, and the virtual object includes a controlled point for accepting an operation;
  • Step 702 Determine a three-dimensional operation range of the operation focus by using the operation focus as a reference position;
  • When the operation focus is a hand-shaped directional operation focus, the processing unit takes the hand operation focus as the starting point and the outward direction of the palm as the center line to determine a cone-shaped three-dimensional operation range.
  • the three-dimensional operation range of the operation focus also moves; when the operation focus rotates, the three-dimensional operation range of the operation focus also rotates.
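The cone-shaped range test can be sketched as an angle-and-length check against the palm direction; the 30-degree half angle and the 0.5 m cone length are illustrative assumptions, since the text does not specify the cone's dimensions.

```python
import math

def in_conical_range(focus, axis, controlled_point, half_angle_deg=30.0, length=0.5):
    """Return True if a controlled point lies inside the cone whose apex is
    the hand operation focus and whose center line is the outward palm
    direction `axis`. Half angle and length are illustrative assumptions."""
    v = [controlled_point[i] - focus[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in v))
    if dist == 0:
        return True                    # the apex itself is inside the cone
    if dist > length:
        return False                   # beyond the cone's reach
    axis_norm = math.sqrt(sum(c * c for c in axis))
    cos_angle = sum(v[i] * axis[i] for i in range(3)) / (dist * axis_norm)
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

Rotating the operation focus rotates `axis`, which rotates the whole operation range, matching the behavior described above.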
  • Step 703 when receiving an operation instruction, determining a virtual object whose controlled point is located within a three-dimensional operation range;
  • the operational command is an object picking instruction.
  • the processing unit performs an intersection calculation on the three-dimensional operation range and the controlled point of the virtual object, and when there is an intersection, determining that the controlled point of the virtual object is located in the three-dimensional operation range.
  • when there is no controlled point of a virtual object in the three-dimensional operation range, the processing unit does not respond to the operation instruction; when there is a controlled point of one virtual object in the three-dimensional operation range, the processing unit determines the virtual object as the selected virtual object.
  • Step 704 determining a distance between a controlled point of each virtual object and an operation focus when the virtual object is at least two;
  • the processing unit calculates the distance between the controlled point of each virtual object and the operational focus.
  • When a virtual object has multiple controlled points, the processing unit calculates the distance between each controlled point of the virtual object and the operation focus, and takes the minimum distance as the distance between the controlled point of the virtual object and the operation focus.
  • Step 705 determining the virtual object having the smallest distance as the selected virtual object.
  • After calculating the distance between the controlled point of each virtual object and the operation focus, the processing unit determines the virtual object with the smallest distance as the selected virtual object.
  • the three-dimensional virtual environment 70 includes: a virtual object 71, a virtual object 72, and a virtual object 73.
  • the control unit takes the hand-operated focus 74 as a starting point and the palm outward direction as a center line to determine a three-dimensional operating range 75 of the conical shape.
  • When the control unit receives the pick-up object command, it determines that the virtual object 71, the virtual object 72, and the virtual object 73 are located within the three-dimensional operation range 75. The control unit calculates distance 1 between the controlled point of the virtual object 71 and the hand operation focus 74, distance 2 between the controlled point of the virtual object 72 and the hand operation focus 74, and distance 3 between the controlled point of the virtual object 73 and the hand operation focus 74. Since distance 1 < distance 2 < distance 3, the control unit determines the virtual object 71 as the selected virtual object.
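Steps 704 and 705, including the minimum-over-controlled-points rule for objects with several controlled points, can be sketched as follows (function names are illustrative; `math.dist` requires Python 3.8+).

```python
import math

def object_distance(controlled_points, focus):
    """Step 704: the distance between a virtual object and the operation focus
    is the minimum distance over all of the object's controlled points."""
    return min(math.dist(point, focus) for point in controlled_points)

def select_by_distance(candidates, focus):
    """Step 705: among (name, controlled_points) candidates, return the name
    of the virtual object with the smallest distance to the operation focus."""
    return min(candidates, key=lambda c: object_distance(c[1], focus))[0]
```

In the example above, virtual object 71 has the smallest distance to the hand operation focus 74 and is therefore selected.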
  • Step 706 controlling the selected virtual object to respond to the operation instruction.
  • Illustratively, the operation command is an object pick-up instruction, and the processing unit picks up the virtual object 71 into the virtual hand corresponding to the hand operation focus 74.
  • In summary, when there are two or more virtual objects within the three-dimensional operation range, the virtual object selection method provided by this embodiment automatically selects one virtual object as the selected virtual object according to distance.
  • This both satisfies the user's own selection intent and realizes automatic selection of the virtual object, reducing the number of operations the user needs to choose among multiple virtual objects and efficiently and intelligently helping the user select a suitable virtual object.
  • The above-described embodiment of FIG. 20A, embodiment of FIG. 21A, and embodiment of FIG. 22A can be implemented in combination of any two, or in combination of all three.
  • That is, the virtual object having the matching object type and the highest priority is determined as the selected virtual object; or the virtual object having the matching object type and the closest distance is determined as the selected virtual object; or the virtual object having the highest priority and the closest distance is determined as the selected virtual object; or the virtual object having the matching object type, the highest priority, and the closest distance is determined as the selected virtual object.
  • Optionally, step 1703 may be replaced with steps 1703c to 1703e, as shown in FIG. 23A:
  • Step 1703c Determine, according to the ith attribute information of each virtual object, the virtual object selected by the ith time;
  • Each attribute information is one of the above three attribute information.
  • the initial value of i is 1 and i is an integer.
  • Optionally, the first attribute information is the object type of the virtual object; the second attribute information is the priority of the virtual object; and the third attribute information is the distance between the controlled point of the virtual object and the operation focus. However, this embodiment does not limit the specific form of each attribute information.
  • When the ith attribute information is the object type of the virtual object, the process of determining the ith selected virtual object may refer to the technical solution provided in the embodiment of FIG. 20A; when the ith attribute information is the priority of the virtual object, reference may be made to the technical solution provided in the embodiment of FIG. 21A; when the ith attribute information is the distance between the controlled point of the virtual object and the operation focus, reference may be made to the technical solution provided in the embodiment of FIG. 22A.
  • When the ith selection is performed according to the ith attribute information, a single virtual object may be selected, in which case the process proceeds to step 1703d; or a plurality of virtual objects having the same ith attribute information may be selected, in which case the process proceeds to step 1703e.
  • Step 1703d when the virtual object selected the ith time is one, the virtual object selected the ith time is determined as the selected virtual object;
  • Step 1703e when the virtual objects selected the ith time are two or more, the virtual object selected the (i+1)th time is determined according to the (i+1)th attribute information of each virtual object;
  • That is, when the virtual objects selected the ith time are two or more, the (i+1)th selection is performed among those virtual objects according to the (i+1)th attribute information of each of them.
  • The above steps are looped until the final selected virtual object is determined.
  • Illustratively, the processing unit first determines the first selected virtual object according to the first attribute information, the object type of the virtual object. If the virtual object selected the first time is one, it is determined as the final selected virtual object; if the virtual objects selected the first time are two or more, the processing unit then determines the second selected virtual object according to the second attribute information, the priority of the virtual object. If the virtual object selected the second time is one, it is determined as the final selected virtual object; if the virtual objects selected the second time are two or more, the processing unit further determines the third selected virtual object according to the third attribute information, the distance between the controlled point of the virtual object and the operation focus. If the virtual object selected the third time is one, it is determined as the final selected virtual object; if the virtual objects selected the third time are two or more, the processing unit considers that the selection fails and does not respond, or pops up an error message.
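The cascaded selection of steps 1703c to 1703e (filter by one attribute at a time until a single virtual object survives) can be sketched generically as follows; `cascade_select` and the attribute ordering are illustrative assumptions, not the patented implementation:

```python
def cascade_select(objects, attribute_order):
    # objects: mapping of object id -> dict of attribute values.
    # attribute_order: ordered list of (key, prefer) pairs, where `prefer`
    # picks the best value among candidates (e.g. max for priority,
    # min for distance to the operation focus).
    candidates = list(objects)
    for key, prefer in attribute_order:
        best = prefer(objects[oid][key] for oid in candidates)
        candidates = [oid for oid in candidates if objects[oid][key] == best]
        if len(candidates) == 1:
            return candidates[0]  # a single object survives: it is selected
    return None  # still ambiguous after all attributes: selection fails
```

Each pass narrows the candidate set by the ith attribute; returning `None` corresponds to the "does not respond or pops up an error message" branch.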
  • FIG. 23B is a flowchart of a method for selecting a virtual object according to another embodiment of the present invention. This embodiment is exemplified by applying the virtual object selection method to the VR system shown in FIG. 16.
  • The method includes:
  • Step 801 Determine a position of an operation focus in a three-dimensional virtual environment, where the operation focus is a point corresponding to the input device in the three-dimensional virtual environment, and the three-dimensional virtual environment includes a virtual object, and the virtual object includes a controlled point for accepting the operation;
  • Step 802 determining a three-dimensional operation range of the operation focus by using the operation focus as a reference position
  • the three-dimensional operation range is at least one of a spherical range, an ellipsoidal range, a conical range, a cube range, and a cylinder range with the operation focus as a reference position.
  • Optionally, the processing unit takes the operation focus as the reference point and the direction line of the operation focus as the center line to determine the three-dimensional operation range of the operation focus.
  • Step 803 when receiving an operation instruction, determining a virtual object whose controlled point is located within a three-dimensional operation range;
  • the processing unit performs an intersection calculation on the three-dimensional operation range and the controlled point of the virtual object, and when there is an intersection, determining that the controlled point of the virtual object is located in the three-dimensional operation range.
  • When there is a controlled point of only one virtual object in the three-dimensional operation range, the process proceeds to step 805;
  • When there are controlled points of two or more virtual objects in the three-dimensional operation range, the process proceeds to step 806.
  • Step 804 not responding to the operation instruction
  • Step 805 determining the virtual object as the selected virtual object
  • Step 806 Acquire an operation type corresponding to the operation instruction.
  • Step 807 determining an object type corresponding to each virtual object
  • Step 808 Determine, from the object types corresponding to each virtual object, a target object type that matches the operation type, where the target object type is a type having the capability to respond to the operation instruction;
  • Step 809 detecting whether there are more than one virtual object having the target object type
  • If there is only one virtual object having the target object type, the process proceeds to step 805; if the virtual objects having the target object type are two or more, the process proceeds to step 810.
  • Step 810 determining a priority of the virtual object having the target object type
  • Step 811 detecting whether the virtual objects having the highest priority exceed one
  • If the virtual object having the highest priority is only one, step 805 is entered; if the virtual objects having the highest priority are two or more, step 812 is entered.
  • Step 812 determining a distance between the controlled point of the virtual object having the highest priority and the operation focus
  • Step 813 determining the virtual object having the smallest distance as the selected virtual object.
  • Step 814 controlling the selected virtual object to respond to the operation instruction.
  • Optionally, when the operation instruction is a select-object instruction, the virtual object is controlled to be in a selected state; when the operation instruction is a pick-up-object instruction, the virtual object is controlled to be in a state of being picked up by the virtual hand (or other element); when the operation instruction is an open-object instruction, the virtual object is controlled to be in an open state; when the operation instruction is a use-object instruction, the virtual object is controlled to be in a used state; when the operation instruction is a tap-object instruction, the virtual object is controlled to be in a state of being tapped by the virtual hand (or other element); when the operation instruction is an attack instruction, the virtual object is controlled to be attacked.
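The full pipeline of steps 806 to 813 (match the object type to the operation type, then keep the highest priority, then break ties by distance) might be sketched as below; the `TYPE_MATCH` table and all names are hypothetical examples, not part of the embodiment:

```python
# Assumed lookup table: which object types can respond to which operation type.
TYPE_MATCH = {'pick': {'cup', 'sword'}, 'open': {'door', 'chest'}}

def select_object(candidates, op_type):
    # candidates: list of dicts with 'id', 'type', 'priority', 'distance'.
    # Steps 806-808: keep only objects whose type can respond to op_type.
    matched = [c for c in candidates if c['type'] in TYPE_MATCH.get(op_type, set())]
    if not matched:
        return None  # no object can respond to this operation
    if len(matched) == 1:
        return matched[0]['id']
    # Steps 810-811: among type matches, keep the highest-priority objects.
    top = max(c['priority'] for c in matched)
    matched = [c for c in matched if c['priority'] == top]
    if len(matched) == 1:
        return matched[0]['id']
    # Steps 812-813: break the remaining tie by the smallest distance.
    return min(matched, key=lambda c: c['distance'])['id']
```

The three filters run in the same order as the flowchart: object type, then priority, then distance.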
  • In summary, when there are two or more virtual objects within the three-dimensional operation range, the virtual object selection method provided by this embodiment automatically selects one virtual object as the selected virtual object according to three kinds of attribute information: the object type matching the operation type, the priority, and the distance.
  • This not only satisfies the user's own selection intent but also realizes automatic selection of the virtual object, reducing the number of operations the user needs to choose among multiple virtual objects and efficiently and intelligently helping the user select a suitable virtual object.
  • FIG. 24 is a structural block diagram of a virtual object selection apparatus according to an embodiment of the present invention. This embodiment is exemplified by the virtual object selection device being applied to the VR system shown in FIG. 16.
  • the virtual object selection device includes:
  • the first determining module 901 is configured to determine a location of the operation focus in the three-dimensional virtual environment.
  • the operation focus is a point corresponding to the input device in the three-dimensional virtual environment, and the three-dimensional virtual environment includes a virtual object, and the virtual object includes a controlled point for accepting the operation.
  • the second determining module 902 is configured to determine a three-dimensional operating range of the operating focus by using the operating focus as a reference position.
  • the third determining module 903 is configured to determine, when the operation instruction is received, the virtual object whose controlled point is located in the three-dimensional operation range as the selected virtual object.
  • The virtual object selection device provided by this embodiment determines the three-dimensional operation range of the operation focus by using the operation focus as the reference position; only one three-dimensional operation range needs to be determined for the single operation focus, so there is no need to set a response range for the controlled point of each virtual object, avoiding the problem of setting and resolving a separate response range for the controlled point of every virtual object.
  • FIG. 25 a block diagram showing the structure of a virtual object selecting apparatus according to another embodiment of the present invention is shown. This embodiment is exemplified by the virtual object selection device being applied to the VR system shown in FIG. 16.
  • the virtual object selection device includes:
  • the first determining module 1010 is configured to determine a location of the operation focus in the three-dimensional virtual environment.
  • the operation focus is a point corresponding to the input device in the three-dimensional virtual environment, and the three-dimensional virtual environment includes a virtual object, and the virtual object includes a controlled point for accepting the operation.
  • the second determining module 1020 is configured to determine a three-dimensional operating range of the operating focus by using the operating focus as a reference position.
  • the third determining module 1030 is configured to determine, when the operation instruction is received, the virtual object whose controlled point is located in the three-dimensional operation range as the selected virtual object.
  • the third determining module 1030 includes a first determining unit 1031 and a second determining unit 1032.
  • the first determining unit 1031 is configured to determine, when the operation instruction is received, the virtual object whose controlled point is located within the three-dimensional operation range.
  • the second determining unit 1032 is configured to determine the selected virtual object according to the attribute information of the virtual object when the virtual object is at least two.
  • the attribute information includes at least one of an object type of the virtual object, a priority of the virtual object, and a distance between the controlled point of the virtual object and the operation focus.
  • The virtual object selection device provided by this embodiment automatically selects a selected virtual object through the processing unit when the virtual objects are at least two, thereby reducing the user's operation steps and time costs while taking into account both the user's selection intent and the convenience of automatically selecting virtual objects.
  • FIG. 26 is a structural block diagram of a virtual object selection apparatus according to an embodiment of the present invention. This embodiment is exemplified by the virtual object selection device being applied to the VR system shown in FIG. 16.
  • the virtual object selection device includes:
  • the first determining module 1110 is configured to determine a location of the operation focus in the three-dimensional virtual environment.
  • the operation focus is a point corresponding to the input device in the three-dimensional virtual environment, and the three-dimensional virtual environment includes a virtual object, and the virtual object includes a controlled point for accepting the operation.
  • the second determining module 1120 is configured to determine a three-dimensional operating range of the operating focus by using the operating focus as a reference position.
  • the third determining module 1130 is configured to determine, when the operation instruction is received, the virtual object whose controlled point is located in the three-dimensional operation range as the selected virtual object.
  • the third determining module 1130 includes a first determining unit 1131 and a second determining unit 1132.
  • the first determining unit 1131 is configured to determine, when the operation instruction is received, the virtual object whose controlled point is located within the three-dimensional operation range.
  • the second determining unit 1132 is configured to determine the selected virtual object according to the attribute information of the virtual object when the virtual object is at least two.
  • the attribute information includes at least one of an object type of the virtual object, a priority of the virtual object, and a distance between the controlled point of the virtual object and the operation focus.
  • Optionally, when the attribute information includes the object type of the virtual object, the second determining unit 1132 includes: an instruction acquiring subunit, a first determining subunit, and a second determining subunit.
  • Optionally, when the attribute information includes the priority of the virtual object, the second determining unit 1132 further includes: a third determining subunit, a fourth determining subunit, and a fifth determining subunit.
  • Optionally, when the attribute information includes the distance between the controlled point of the virtual object and the operation focus, the second determining unit 1132 further includes: a sixth determining subunit and a seventh determining subunit.
  • Optionally, when at least two kinds of attribute information are included, the second determining unit 1132 further includes: an eighth determining subunit and a ninth determining subunit.
  • the instruction acquisition subunit is configured to acquire an operation type corresponding to the operation instruction.
  • the first determining subunit is configured to determine an object type corresponding to each virtual object.
  • a second determining subunit configured to determine, from the object types corresponding to each virtual object, a target object type that matches the operation type.
  • the target object type is of a type that has the ability to respond to operational instructions.
  • the third determining subunit is configured to determine the virtual object having the target object type as the selected virtual object.
  • the fourth determining subunit is configured to determine the priority of each virtual object.
  • the fifth determining subunit is configured to determine the virtual object having the highest priority as the selected virtual object.
  • a sixth determining subunit for determining a distance between a controlled point of each virtual object and the operation focus.
  • a seventh determining subunit configured to determine the virtual object having the smallest distance as the selected virtual object.
  • An eighth determining subunit configured to determine, according to the ith attribute information of each of the virtual objects, the virtual object selected by the ith time;
  • a ninth determining subunit configured to determine the virtual object selected the ith time as the selected virtual object when the virtual object selected the ith time is one; and, when the virtual objects selected the ith time are two or more, to determine the virtual object selected the (i+1)th time according to the (i+1)th attribute information of each virtual object;
  • the initial value of i is 1 and i is an integer.
  • the command response module 1140 is configured to control the selected virtual object to respond to the operation instruction.
  • In summary, when there are two or more virtual objects within the three-dimensional operation range, the virtual object selection device provided by this embodiment automatically selects one virtual object as the selected virtual object according to three kinds of attribute information: the object type matching the operation type, the priority, and the distance.
  • This not only satisfies the user's own selection intent but also realizes automatic selection of the virtual object, reducing the number of operations the user needs to choose among multiple virtual objects and efficiently and intelligently helping the user select a suitable virtual object.
  • FIG. 27 is a schematic structural diagram of a VR system according to an embodiment of the present invention.
  • the VR system includes a head mounted display 120, a processing unit 140, and an input device 160.
  • The head mounted display 120 is a display worn on the user's head for image display.
  • the head mounted display 120 is electrically coupled to the processing unit 140 via a flexible circuit board or hardware interface.
  • Processing unit 140 is typically integrated within the interior of head mounted display 120.
  • Processing unit 140 includes a processor 142 and a memory 144.
  • Memory 144 is a volatile or nonvolatile, removable or non-removable medium, such as RAM or ROM, implemented by any method or technology for storing information such as computer readable instructions, data structures, program modules, or other data.
  • the memory 144 stores one or more program instructions including instructions for implementing the virtual object selection method provided by the various method embodiments described above.
  • the processor 142 is configured to execute instructions in the memory 144 to implement the virtual object selection method provided by the foregoing various method embodiments.
  • The processing unit 140 is connected to the input device 160 by cable, Bluetooth, or Wi-Fi (Wireless Fidelity).
  • the input device 160 is an input peripheral such as a somatosensory glove, a somatosensory handle, a remote controller, a treadmill, a mouse, a keyboard, and a human eye focusing device.
  • the embodiment of the present invention further provides a computer readable storage medium, which may be a computer readable storage medium included in the memory in the above embodiment; or may exist separately and not assembled into the terminal.
  • The computer readable storage medium stores one or more programs that are used by one or more processors to perform the virtual object selection method.
  • the fingerprint information to be authenticated collected by the fingerprint collection device in the real scene is sent to the authentication device in the real scene for authentication.
  • the purpose of realizing payment authentication without establishing a payment authentication system in a virtual reality scenario is achieved, thereby realizing the technical effect of improving the efficiency of payment in a virtual reality scenario. That is, based on the authentication scheme of the virtual reality scenario, when the payment is made in the virtual reality scenario, the user is authenticated based on the fingerprint information of the user.
  • the embodiment of the present invention further provides a scheme for generating an identifier for a user through a virtual dot matrix, and implementing identity verification for the user by using the generated identifier, where the generated identifier can be used for authenticating the user in the authentication scheme.
  • For example, this process may replace the fingerprint authentication, or the legality of the user's information may be verified again according to the generated identifier after the fingerprint authentication.
  • the virtual reality-based identification generation scheme and the authentication scheme are described in detail below through Embodiment 8 to Embodiment 12.
  • a method for generating an identifier based on virtual reality includes:
  • The three-dimensional coordinates (X, Y, Z) of the user's location and the direction vector (α, β, γ) representing the direction of the field of view are obtained.
  • each virtual point in the virtual lattice has unique coordinates and numbers.
  • A virtual dot matrix is generated on a plane at a certain distance in front of the user.
  • The virtual lattice is a single planar lattice, and the normal vector of the planar lattice is (α, β, γ).
  • In the virtual dot matrix, a certain number of virtual points are generated according to a preset rule, and the coordinates of each virtual point and its number are recorded. The rules for generating virtual points are as follows:
  • the distances of adjacent virtual points may be the same or different.
  • the distances of the virtual points are the same, that is, the Euclidean distance between two adjacent points (X 1 , Y 1 , Z 1 ) and (X 2 , Y 2 , Z 2 ) is the same.
  • The Euclidean distance formula is as follows: d = sqrt((X1 - X2)^2 + (Y1 - Y2)^2 + (Z1 - Z2)^2).
  • The numbering rule may simply traverse all the virtual points in a preset order, where the order may be left to right or top to bottom, and the numbers may be digits or letters.
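A minimal sketch of the plane-lattice generation and numbering rules above, assuming for simplicity that the user's field-of-view direction is the +Z axis so that the lattice plane is parallel to the XY plane; all names and parameters are illustrative:

```python
def generate_lattice(user_pos, depth, rows, cols, spacing):
    # Place a rows x cols grid of virtual points on a plane `depth` units
    # in front of the user (field of view assumed along +Z), centered on
    # the user's line of sight, numbered left-to-right, top-to-bottom.
    ux, uy, uz = user_pos
    points = {}
    number = 0
    for r in range(rows):
        for c in range(cols):
            x = ux + (c - (cols - 1) / 2) * spacing
            y = uy + ((rows - 1) / 2 - r) * spacing
            points[number] = (x, y, uz + depth)
            number += 1
    return points  # number -> (x, y, z); both are recorded, as the text requires
```

Adjacent points are equidistant (equal Euclidean distance `spacing`), matching the equal-distance rule; an unequal-distance layout would simply vary the offsets.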
  • the position of the virtual point is clearly displayed to the user in the virtual reality environment.
  • the display method can be bright, high contrast color, or other methods.
  • a response area may be set for each virtual point.
  • the response area is a three-dimensional spherical response area with a preset radius of R.
  • The virtual point is selected if the Euclidean distance between the coordinates of the user-controlled input end and the coordinates of the virtual point is less than R.
  • the response area of the virtual point can also be any other shape.
  • the above actions are repeated, and the number and order of the virtual points selected by the user are recorded until the user's selection ends.
  • the selection result includes the number of the selected virtual point and the order of the selected virtual point.
  • the selection result may include only the number of the virtual point, and the unique corresponding identifier may be generated according to the number.
  • the specific manner of ending may be that the user does not select any virtual point for a period of time, or the user ends with a specific button.
  • the generating the identifier according to the selection result includes: generating a numeric string or a character string according to the number of the selected virtual point and the order of the selected virtual point.
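The spherical response-area test and the identifier generation from the selection result can be sketched together as follows; `hit_test` and `generate_identifier` are assumed names for illustration:

```python
import math

def hit_test(input_pos, points, radius):
    # A virtual point is selected when the Euclidean distance from the
    # user-controlled input end to the point is less than the response radius R.
    for number, pos in points.items():
        if math.dist(input_pos, pos) < radius:
            return number
    return None  # the input end is not inside any response area

def generate_identifier(selection_order):
    # The identifier is the numbers of the selected points, in selection order.
    return "".join(str(n) for n in selection_order)
```

Repeating `hit_test` as the input end moves, and appending each hit to `selection_order`, reproduces the record-until-selection-ends loop described above.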
  • a handle-based VR device is used in the embodiment of the present invention.
  • the handle-based VR device includes a tracking system, a head mounted display (HMD) and an interactive handle:
  • a tracking system for identifying the positions (three-dimensional coordinates) of the head mounted display HMD and the interactive handle in space;
  • a head mounted display HMD for displaying a real-time picture viewed by the user
  • an interactive handle for three-dimensional operation in space. The input end controlled by the user in the embodiment of the present invention is the interactive handle.
  • the embodiment of the invention provides a method for generating a logo based on virtual reality.
  • The user only needs to control the movement of the input end to select virtual points, and the identifier is automatically generated according to the user's selection result. The operation is simple, complicated virtual three-dimensional input methods are avoided, identifier generation is more efficient, the user experience is improved, and a large amount of system resources is not consumed.
  • the method for generating an identifier in the embodiment of the present invention can be used to generate a user name, a user ID number, a password, and the like, and has a broad application prospect.
  • a method for generating an identifier based on virtual reality comprising:
  • The three-dimensional coordinates (X, Y, Z) of the user's location and the direction vector (α, β, γ) representing the direction of the field of view are obtained.
  • each virtual point in the virtual lattice has a unique coordinate and a number.
  • The virtual dot matrix is a spatial lattice: the virtual points in the virtual dot matrix are deployed on N (N > 1) planes, and the user can see all the virtual points without rotating.
  • the N planes may all be parallel to each other; or the N planes may constitute a closed polyhedron.
  • Specifically, a first plane is generated in front of the user based on the three-dimensional coordinates and the field of view of the user, and then a parallel second plane is generated at a preset distance interval.
  • The normal vectors of all the planes are the same, namely (α, β, γ).
  • A third plane can also be generated on the basis of the first plane and the second plane; the number of generated planes is not limited in the embodiment of the present invention.
  • a certain number of virtual points are generated according to a preset rule, and the coordinates of each virtual point and the number thereof are recorded.
  • the rules for generating the virtual points are the same as those in Embodiment 8, and details are not described herein again.
  • For example, virtual points may be deployed on four parallel planes; the virtual dot matrix is then a 4×4 spatial lattice, and the distances between adjacent virtual points are equal.
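Extending the planar sketch to the spatial lattice of this embodiment, points on N parallel planes (again assuming for illustration that the field-of-view direction is +Z, with all names hypothetical) might be generated as:

```python
def generate_spatial_lattice(user_pos, first_depth, plane_gap,
                             planes, rows, cols, spacing):
    # Deploy virtual points on `planes` parallel planes in front of the user;
    # all planes share the same normal vector (here the +Z axis), and each
    # point keeps a unique number across all planes.
    ux, uy, uz = user_pos
    points = {}
    number = 0
    for p in range(planes):
        z = uz + first_depth + p * plane_gap  # preset distance interval
        for r in range(rows):
            for c in range(cols):
                x = ux + (c - (cols - 1) / 2) * spacing
                y = uy + ((rows - 1) / 2 - r) * spacing
                points[number] = (x, y, z)
                number += 1
    return points
```

With four planes of four points each, this yields a spatial lattice like the 4×4 example above.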
  • the position of the virtual point is clearly displayed to the user, and a response area is set for each virtual point.
  • The response area is a three-dimensional spherical response area with a preset radius of R. If the Euclidean distance between the coordinates of the user-controlled input end and the coordinates of the virtual point is less than R, the virtual point is selected.
  • the response area of the virtual point can also be any other shape.
  • the above actions are repeated, and the number and order of the virtual points selected by the user are recorded until the user's selection ends.
  • the specific manner of ending may be that the user does not select any virtual point for a period of time, or the user ends with a specific button.
  • the generating the identifier according to the selection result includes generating a numeric string or a character string according to the number of the selected virtual point and the order of the selected virtual point.
  • a VR device without a handle including:
  • a vision/tracking system for obtaining the position of the hand in space (three-dimensional coordinates).
  • the input end controlled by the user is a hand.
  • the head mounted display HMD is used to display a real-time picture viewed by the user.
  • the embodiment of the invention provides another method for generating a logo based on virtual reality.
  • The user can select virtual points by using the hand as the input end, and the identifier is automatically generated according to the user's selection result; the operation is simpler.
  • a virtual reality based identifier generating device as shown in FIG. 30, includes:
  • the user orientation obtaining module 3001 is configured to acquire three-dimensional coordinates of the location where the user is located and a direction in which the user's visual field is oriented;
  • a virtual lattice generation module 3002 configured to generate a virtual lattice according to the three-dimensional coordinates and the direction, each virtual point in the virtual lattice has a unique coordinate and a number;
  • a virtual dot matrix display module 3003, configured to display the virtual dot matrix
  • the result obtaining module 3004 is configured to obtain a selection result of the virtual point in the virtual dot matrix by the user;
  • An identifier generating module 3005, configured to generate an identifier according to the selection result
  • the input module 3006 is configured to provide an input to the user for selecting a virtual point in the virtual matrix.
  • the input module includes an interactive handle or a no-handle virtual reality device.
  • the selection result obtaining module 3004 is as shown in FIG. 31, and includes:
  • a real-time location recording sub-module 30041 for recording a real-time location of a user-controlled input
  • the monitoring sub-module 30042 is configured to monitor whether the input end enters a response area of any virtual point, and if the coordinate of the input end of the user control falls into the virtual point response area, the virtual point is selected.
  • Based on the same inventive concept, the embodiment of the present invention provides a virtual reality-based identifier generating device.
  • the embodiment of the present invention can be used to implement the virtual reality-based identifier generating method provided in Embodiment 8 or 9.
  • An authentication method as shown in FIG. 32, the method includes:
  • S402. Determine whether the identity identifier to be verified input by the user is consistent with the preset identity identifier.
  • the method for generating the identity identifier includes:
  • each virtual point in the virtual lattice has a unique coordinate and a number
  • An identifier is generated based on the selection result.
  • the method for generating an identifier in the embodiment of the present invention may be used to generate a username and/or a password.
  • Optionally, the user selects virtual points in the virtual dot matrix through a controlled input end, where the input end is an interactive handle.
  • the user selects a virtual point in the virtual matrix with a hand as an input through a virtual reality device without a handle.
  • the user needs to perform user identity authentication when performing payment authorization and account login.
  • the identity verification method provided by the embodiment of the present invention, the user only needs to control the input end and move according to the preset path in the space, so that the user identity can be quickly completed.
  • the embodiment of the invention simplifies the link of the user input identifier in the identity verification process, thereby improving the efficiency of the identity verification;
  • the complexity is closely related to the security of the authentication.
  • the present invention can design the virtual dot matrix according to actual needs, thereby taking into consideration the comfort of the user experience and the security of the identity verification.
  • An identity verification system, as shown in FIG. 33, includes a verification server 3301, an application server 3302, and a virtual-reality-based identifier generating device 3303; the verification server 3301 and the identifier generating device 3303 both communicate with the application server 3302;
  • the verification server 3301 is configured to store an identifier that is preset by the user;
  • the application server 3302 is configured to initiate an identity verification request to the verification server 3301, send the identifier generated by the identifier generating device 3303 to the verification server 3301, and obtain the identity verification result from the verification server 3301.
  • the identity verification system is used for identifier setting and for verification;
  • during setting and verification, the virtual lattice generated by the identifier generating device 3303, and the rules for generating an identifier from the virtual points selected by the user in the virtual lattice, are kept consistent:
  • the application server 3302 initiates an identification setting request to the verification server 3301;
  • the user controls the identifier generating device 3303 to generate an identifier and transmit the identifier to the verification server;
  • the number of times the user inputs the identifier is not limited; for example, the user may be required to input the identifier twice, and the identifier is considered correctly input only if the two inputs are identical;
  • the verification server 3301 then notifies the application server 3302 that the setting is successful, and the application server 3302 notifies the identifier generating device 3303 that the setting is successful.
  • the identifier generating device 3303 accesses the application server 3302;
  • the application server 3302 sends a user identity verification request to the verification server 3301;
  • the user controls the identifier generating device 3303 to generate an identifier and transmit the identifier to the verification server 3301;
  • the verification server 3301 compares the received identifier with the previously set identifier; if they match, the authentication request of the application server 3302 passes verification.
  • the identifier generating device 3303 includes:
  • the user orientation obtaining module 33031 is configured to acquire three-dimensional coordinates of the location where the user is located and a direction in which the user's visual field is oriented;
  • a virtual lattice generation module 33032 configured to generate a virtual lattice according to the three-dimensional coordinates and the direction, each virtual point in the virtual lattice has a unique coordinate and a number;
  • a virtual lattice display module 33033, configured to display the virtual lattice;
  • a selection result obtaining module 33034, configured to acquire the user's selection result, where the selection result includes the numbers of the selected virtual points and the order in which they were selected;
  • the identifier generation module 33035 is configured to generate an identifier according to the selection result.
  • the selection result obtaining module 33034 includes:
  • a real-time location recording sub-module 330341, configured to record the real-time location of the input end controlled by the user;
  • a monitoring sub-module 330342, configured to monitor whether the input end enters the response area of any virtual point; if the coordinate of the user-controlled input end falls into the response area of a virtual point, that virtual point is selected.
  • Based on the same inventive concept, this embodiment of the present invention provides a virtual-reality-based identity verification system.
  • This embodiment of the present invention can be used to implement the identity verification method provided in Embodiment 11.
  • The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in the above-described computer-readable storage medium.
  • Based on such an understanding, the part of the technical solution of the present invention that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause one or more computer devices (which may be a personal computer, server or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the disclosed client, system, and server may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical function division;
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
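The lattice-selection and identifier-generation flow described in the bullets above (a virtual lattice of numbered points, a tracked input end, and a per-point response area) can be sketched as follows. This is a minimal illustration, not the claimed implementation: the spherical response area, its radius, and the "-"-joining rule for turning the selection result into an identifier are assumptions introduced here.

```python
import math

def select_points(lattice, path, radius=0.05):
    """Return the numbers of the virtual points whose response area the
    input end enters, in the order they are entered.

    lattice -- {number: (x, y, z)}: each virtual point's unique coordinate
    path    -- iterable of (x, y, z): real-time positions of the input end
    radius  -- radius of the (assumed spherical) response area
    """
    selected = []
    for pos in path:
        for number, center in lattice.items():
            # A point is selected when the input end's coordinate falls
            # into its response area; each point is selected at most once.
            if number not in selected and math.dist(pos, center) <= radius:
                selected.append(number)
    return selected

def generate_identifier(selected):
    """Generate an identifier from the selected numbers and their order."""
    return "-".join(str(n) for n in selected)
```

For example, moving the input end through the response areas of points 1 and then 2 yields the identifier "1-2", while visiting the same points in the opposite order yields "2-1"; the selection order is therefore part of the secret.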

Abstract

An authentication method based on a virtual reality scene, a virtual reality device, and a storage medium. The method includes: receiving an authentication request in a virtual reality scene; collecting to-be-authenticated fingerprint information through a fingerprint collection device in the real scene; sending the to-be-authenticated fingerprint information to an authentication device in the real scene; and receiving, in the virtual reality scene, authentication result information sent by the authentication device, where the authentication result information indicates that the to-be-authenticated fingerprint information passes or fails authentication. This solves the technical problem in the related art that payment in a virtual reality scene requires building a payment authentication system within the virtual reality scene, which leads to low payment efficiency in the virtual reality scene.

Description

基于虚拟现实场景的认证方法、虚拟现实设备及存储介质
本申请要求于2016年08月19日提交中国专利局、申请号为201610695148.0、发明名称为“基于虚拟现实场景的认证和装置”、于2016年10月18日提交中国专利局、申请号为201610907039.0、发明名称为“虚拟物体选取方法、装置及VR系统”、以及于2016年10月27日提交中国专利局、申请号为201610954866.5、发明名称为“一种基于虚拟现实的标识生成方法以及身份验证方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及虚拟现实领域,具体而言,涉及一种基于虚拟现实场景的认证方法、虚拟现实设备及存储介质。
背景技术
虚拟现实(Virtual Reality,简称为VR)是利用电脑模拟产生一个三度空间的虚拟世界,以提供给用户关于视觉、听觉、触觉等感官的模拟,让用户如同身临其境一般,可以及时、没有限制的观察三度空间内的事物。目前,头盔显示器作为虚拟现实技术的一种常用设备,在屏蔽现实世界的同时,可以提供高分辨率、大视场角的虚拟场景,并带有立体声耳机,可以使人产生强烈的沉浸感。目前,服务提供商能够根据用户在虚拟现实场景中的需求提供多种服务和产品,例如,用户可以在虚拟现实场景中购买产品并支付。但是,相关技术在虚拟现实场景中支付时需要在虚拟现实场景中建立支付认知系统,这样将会导致在虚拟现实场景中支付的效率较低。
针对上述的问题,目前尚未提出有效的解决方案。
发明内容
本发明实施例提供了一种基于虚拟现实场景的认证方法、虚拟现实设备及存储介质,以至少解决相关技术在虚拟现实场景中支付时需要在虚拟现实场景中建立支付认知系统,这样将会导致在虚拟现实场景中支付的效率较低的技术问题。
根据本发明实施例的一个方面,提供了一种基于虚拟现实场景的认证方法,包括:在虚拟现实场景中接收到认证请求;通过现实场景中的指纹采集设备采集待认证指纹信息;将待认证指纹信息发送至现实场景中的认证设备;在虚拟现实场景中接收认证设备发送的认证结果信息,其中,认证结果信息用于指示待认证指纹信息通过认证或未通过认证。
根据本发明实施例的另一方面,还提供了一种虚拟现实设备,包括:一个或多个处理器、存储器,所述存储器用于存储软件程序以及模块,且所述处理器通过运行存储在所述存储器内的软件程序以及模块,完成以下操作:在虚拟现实场景中接收到认证请求;通过现实场景中的指纹采集设备采集待认证指纹信息;将所述待认证指纹信息发送至所述现实场景中的认证设备;在所述虚拟现实场景中接收所述认证设备发送的认证结果信息,其中,所述认证结果信息用于指示所述待认证指纹信息通过认证或未通过认证。
根据本发明实施例的另一方面,还提供了一种存储介质,所述存储介质中存储有至少一段程序代码,所述至少一段程序代码由处理器加载并执行以实现如上述一个方面所述的基于虚拟现实场景的认证方法。
在本发明实施例中,采用在虚拟现实场景中接收到认证请求;通过现实场景中的指纹采集设备采集待认证指纹信息;将待认证指纹信息发送至现实场景中的认证设备;在虚拟现实场景中接收认证设备发送的认证结果信息,其中,认证结果信息用于指示待认证指纹信息通过认证或未通过认证的方式,通过在虚拟现实场景中接收到认证请求时,将现实场景中的指纹采集设备采集到的待认证指纹信息发送至现实场景中的认证设备进行认证,达到了无需在虚拟现实场景中建立支付认证系统也能够实现支付认证的目的,从而实现了提高在虚拟现实场景中支付的效率的技术效果,进而解决了相关技术在虚拟现实场景中支付时需要在虚拟现实场景中建立支付认知系统,这样将会导致在虚拟现实场景中支付的效率较低的技术问题。
附图说明
此处所说明的附图用来提供对本发明的进一步理解,构成本申请的一部分,本发明的示意性实施例及其说明用于解释本发明,并不构成对本发明的不当限定。在附图中:
图1是根据本发明实施例的基于虚拟现实场景的认证方法的硬件环境的示意图;
图2是根据本发明实施例的一种可选的基于虚拟现实场景的认证方法的流程图;
图3是根据本发明优选实施例的基于虚拟现实场景的认证方法的流程图;
图4是根据本发明优选实施例的虚拟现实场景中认证界面的示意图;
图5是根据本发明优选实施例的指纹采集设备的结构示意图;
图6是根据本发明优选实施例的指示标识指向认证区域的示意图;
图7是根据本发明优选实施例的虚拟现实场景中显示认证结果信息的示意图;
图8是根据本发明优选实施例的虚拟现实场景与现实场景之间数据交互过程的示意图;
图9是根据本发明实施例的一种可选的基于虚拟现实场景的认证装置的示意图;
图10是根据本发明实施例的另一种可选的基于虚拟现实场景的认证装置的示意图;
图11是根据本发明实施例的另一种可选的基于虚拟现实场景的认证装置的示意图;
图12是根据本发明实施例的另一种可选的基于虚拟现实场景的认证装置的示意图;
图13是根据本发明实施例的另一种可选的基于虚拟现实场景的认证装置的示意图;
图14是根据本发明实施例的另一种可选的基于虚拟现实场景的认证装置的示意图;以及
图15是根据本发明实施例的一种终端的结构框图;
图16是本发明一个实施例提供的VR系统的结构示意图;
图17是本发明一个实施例提供的虚拟物体选取方法的流程图;
图18A至图18C是本发明一个实施例提供的受控点的示意图;
图18D是图17所提供的虚拟物体选取方法在具体实施时的示意图;
图19是本发明另一个实施例提供的虚拟物体选取方法的流程图;
图20A是本发明另一个实施例提供的虚拟物体选取方法的流程图;
图20B是图20A所提供的虚拟物体选取方法在具体实施时的示意图;
图21A是本发明另一个实施例提供的虚拟物体选取方法的流程图;
图21B是图21A所提供的虚拟物体选取方法在具体实施时的示意图;
图22A是本发明另一个实施例提供的虚拟物体选取方法的流程图;
图22B是图22A所提供的虚拟物体选取方法在具体实施时的示意图;
图23A是本发明另一个实施例提供的虚拟物体选取方法的流程图;
图23B是本发明另一个实施例提供的虚拟物体选取方法的流程图;
图24是本发明一个实施例提供的虚拟物体选取装置的框图;
图25是本发明另一个实施例提供的虚拟物体选取装置的框图;
图26是本发明另一个实施例提供的虚拟物体选取装置的框图;
图27是本发明另一个实施例提供的VR系统的框图;
图28是本发明一个实施例提供的基于虚拟现实的标识生成方法流程图;
图29是本发明另一个实施例提供的基于手柄的VR设备的结构图;
图30是本发明另一个实施例提供的基于虚拟现实的标识生成装置的框图;
图31是本发明另一个实施例提供的选择结果获取模块的框图;
图32是本发明另一个实施例提供的一种身份验证方法流程图;
图33是本发明另一个实施例提供的身份验证系统框图。
具体实施方式
为了使本技术领域的人员更好地理解本发明方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分的实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本发明保护的范围。
需要说明的是,本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本发明的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
首先,在对本发明实施例进行描述的过程中出现的部分名词或者术语适用于如下解释:
虚拟现实(Virtual Reality,简称为VR),是综合利用计算机图形系统和各种现实及控制等接口设备,在计算机上生成的、可交互的三维环境中提供沉浸感觉的技术。其中,计算机生成的、可交互的三维环境称为虚拟环境。虚拟现实技术是一种可以创建和体验虚拟世界的计算机仿真系统的技术,它利用计算机生成一种模拟环境,利用多源信息融合的交互式三维动态视景和实体行为的系统仿真使用户沉浸到该环境中。
实施例1
根据本发明实施例,提供了一种基于虚拟现实场景的认证方法的方法实施例。
可选地,在本实施例中,上述基于虚拟现实场景的认证方法可以应用于如图1所示的由服务器102和终端104所构成的硬件环境中。如图1所示,服务器102通过网络与终端104进行连接,上述网络包括但不限于:广域网、城域网或局域网,终端104并不限定于个人计算机(Personal Computer,PC)、手机、平板电脑等。本发明实施例的基于虚拟现实场景的认证方法可以由服务器102来执行,也可以由终端104来执行,还可以是由服务器102和终端104共同执行。其中,终端104执行本发明实施例的基于虚拟现实场景的认证方法也可以是由安装在其上的客户端来执行。
图2是根据本发明实施例的一种可选的基于虚拟现实场景的认证方法的流程图,如图2所示,该方法可以包括以下步骤:
步骤S202,在虚拟现实场景中接收到认证请求;
步骤S204,通过现实场景中的指纹采集设备采集待认证指纹信息;
步骤S206,将待认证指纹信息发送至现实场景中的认证设备;
步骤S208,在虚拟现实场景中接收认证设备发送的认证结果信息,其中,认证结果信息用于指示待认证指纹信息通过认证或未通过认证。
需要说明的是,上述步骤S202至步骤S208可以由虚拟现实设备执行,例如头盔显示器、光阀眼镜等。上述步骤S202至步骤S208,通过在虚拟现实场景中接收到认证请求时,将现实场景中的指纹采集设备采集到的待认证指纹信息发送至现实场景中的认证设备进行认证,达到了无需在虚拟现实场景中建立支付认证系统也能够实现支付认证的目的,从而实现了提高在虚拟现实场景中支付的效率的技术效果,进而解决了相关技术在虚拟现实场景中支付时需要在虚拟现实场景中建立支付认知系统,这样将会导致在虚拟现实场景中支付的效率较低的技术问题。
在步骤S202提供的技术方案中,虚拟现实场景可以为能够呈现虚拟现实场景的虚拟现实设备所显示的场景。其中,该虚拟现实设备可以为头盔显示器、光阀眼镜等。认证请求可以为在虚拟现实场景中执行目标认证事件所触发的请求,其中,本发明实施例对目标认证事件不做具体限定。例如,目标认证事件可以为在虚拟现实场景进行支付时所需要执行的支付认证事件,或者在虚拟现实场景中执行设置有权限认证的事件时所需要执行的权限认证事件。虚拟现实设备可以在虚拟现实场景中实时检测是否存在目标认证事件,若检测到存在目标认证事件且存在对该目标认证事件执行的触发操作,则在虚拟现实场景中会触发生成认证请求。该实施例中虚拟现实设备通过实时检测认证请求可以缩短对认证请求的响应时间,进而达到提高对虚拟现实场景中的目标认证事件的执行效率。
需要说明的是,虚拟现实设备在接收到认证请求之后,可以在虚拟现实场景中显示提示信息,其中,该提示信息可以用于提示用户输入认证请求所指示的认证信息,以完成对目标认证事件所指示的认证操作。该实施例通过在虚拟 现实场景中显示提示信息,可以比较直观地提示用户进行认证操作,有效地提高了用户的使用体验。
在步骤S204提供的技术方案中,用户可以根据虚拟现实场景中显示的提示信息执行认证操作。其中,用户所执行的认证操作可以为输入待认证信息,例如指纹信息、声音信息、人脸识别信息等。本发明实施例以指纹信息为例进行说明。具体地,用户可以在现实场景中通过指纹采集设备输入待认证指纹信息,此处需要说明的是,指纹采集设备可以是指纹扫描仪或者其他能够扫描并采集指纹的设备,指纹采集设备中可以设置指纹采集区域,用户可以将手指放置在该指纹采集区域以完成指纹采集。待认证指纹信息可以是用户在指纹采集设备上输入的指纹的信息,其中,待认证指纹信息可以是指纹图像或者指纹特征信息等。
还需要说明的是,指纹采集设备可以与虚拟现实设备通信连接,该通信连接优选为无线连接,例如蓝牙、无线保真(Wireless Fidelity,WiFi)等。利用指纹采集设备与虚拟现实设备之间的通信连接,指纹采集设备可以将采集到的待认证指纹信息发送给虚拟现实设备,以实现虚拟现实设备根据接收到的待认证指纹信息在虚拟现实场景中响应于认证请求完成认证的目的。
在步骤S206提供的技术方案中,虚拟现实设备在接收到指纹采集设备采集到的待认证指纹信息之后,可以将该待认证指纹信息发送给现实场景中的认证设备进行认证。此处需要说明的是,本发明实施例对现实场景中的认证设备不做具体限定,例如,该认证设备可以是支付宝、银行支付验证平台等。现实场景中的认证设备可以与虚拟现实设备通信连接,该通信连接优选为无线连接,例如蓝牙、WiFi等,利用认证设备与虚拟现实设备之间的通信连接,虚拟现实设备可以将待认证指纹信息发送给认证设备供认证设备进行认证。
还需要说明的是,认证设备中可以预先存储有用户的指纹信息,此处需要说明的是,认证设备中可以存储多个用户的指纹信息,指纹信息与用户具有唯一对应关系。认证设备在接收到待认证指纹信息之后,可以首先判断是否存在与该待认证指纹信息相同的指纹信息,若不存在则认证设备可以直接确定该待认证指纹信息未通过认证;若存在则认证设备可以根据指纹信息与用户的对应 关系再继续认证该待认证指纹信息对应的用户的信息,如果与该待认证指纹信息对应的用户的信息合法,则认证设备可以确定该待认证指纹信息通过认证,否则确定该待认证信息未通过认证。
在步骤S208提供的技术方案中,现实场景中的认证设备在对待认证指纹信息进行认证之后,可以利用认证设备与虚拟现实设备之间的通信连接将认证结果信息反馈给虚拟现实设备。其中,认证结果信息可以用于指示该待认证指纹信息是否通过认证,可以包括通过认证和未通过认证。
作为一种可选的实施例,在步骤S208虚拟现实设备在接收到认证设备发送的认证结果信息之后,该实施例还可以包括:步骤S210,在虚拟现实场景中显示该认证结果信息。需要说明的是,本发明实施例对认证结果信息在虚拟现实场景中的显示方式不做具体限定。该实施例通过在虚拟现实场景中显示认证结果信息,能够使得用户清楚直观地获取该待认证指纹信息的认证结果信息,更加符合用户需求,有效地提高了用户使用体验。
作为一种可选的实施例,在步骤S202虚拟现实场景中接收到认证请求之后,且在步骤S204通过现实场景中的指纹采集设备采集待认证指纹信息之前,该实施例还可以包括以下步骤:
步骤S2032,判断指示标识是否指向虚拟现实场景中的认证区域,其中,指示标识是指纹采集设备在虚拟现实场景中产生的;
步骤S2034,在判断出指示标识指向认证区域时,在虚拟现实场景中显示提示信息,其中,提示信息用于提示输入待认证指纹信息。
在上述步骤中,虚拟现实场景中可以显示有认证区域,在该认证区域内可以显示有需要用户认证的内容,例如,认证区域内可以显示有用户需要支付的金额以及支付过程所需的认证内容。指示标识可以为指纹采集设备在虚拟现实场景中产生的,此处需要说明的是,本发明实施例对指示标识的形式不做具体限定,例如,指示标识可以为鼠标箭头、指示标线等。在现实场景中通过对指纹采集设备执行相应操作可以实现控制指示标识在虚拟现实场景中移动,使指示标识指向认证区域。当指示标识指向认证区域时,在虚拟现实场景中可以显示有提示信息,其中,提示信息可以用于提示用户在显示场景中的指纹采集设 备上输入待认证指纹信息。
需要说明的是,指纹采集设备在虚拟现实场景中产生的指示标识指向认证区域表示对认证区域显示的需要用户认证的内容进行认证。因此,该实施例可以在虚拟现实场景中接收到认证请求之后,首先判断指纹采集设备在虚拟现实场景中产生的指示标识是否指向认证区域,如果判断出指示标识指向认证区域,则说明用户将要对认证区域显示的需要用户认证的内容进行认证。此时,在虚拟现实场景中可以显示提示信息,提示用户在指纹采集设备上输入待认证指纹信息,利用该待认证指纹信息进行相应认证。
该实施例通过判断指纹采集设备在虚拟现实场景中产生的指示标识是否指向认证区域,可以使得在虚拟现实场景中可以直观地确定需要用户认证的内容,使得用户的认证目标比较明确。而且,该实施例通过在指示标识指向认证区域时在虚拟现实场景中显示提示信息,提示用户在指纹采集设备上输入待认证指纹信息,可以使得用户更加清楚执行认证的操作流程,不仅提高了用户的使用体验,而且还能提高用户执行认证的效率。
作为一种可选的实施例,在步骤S208虚拟现实场景中接收到认证设备发送的认证结果信息之后,该实施例还可以包括以下步骤:
步骤S212,在认证结果信息指示待认证指纹信息通过认证时,在虚拟现实场景中执行与认证区域对应的资源转移事件。
在上述步骤中,认证区域内可以显示有需要用户认证的内容,当认证区域内显示的需要用户认证的内容为需要用户支付的金额以及支付过程所需的认证内容时,如果认证设备执行认证后得到的认证结果信息指示待认证指纹信息通过认证,则该实施例可以在虚拟现实场景中执行与认证区域对应的资源转移事件,也即将需要用户支付的金额从用户账户中进行转移。该实施例可以无需在虚拟现实场景中建立认证系统,而是利用现实场景中的指纹采集设备和认证设备完成虚拟现实场景中的认证过程,这样能够减小虚拟现实场景中建立认证系统所造成的资源消耗,同时也能够提高虚拟现实场景中认证的效率。
作为一种可选的实施例,步骤S206将待认证指纹信息发送至现实场景中的认证设备可以包括以下步骤S2062和步骤S2064,具体地:
步骤S2062,从虚拟现实场景将第一时间戳发送给认证设备,其中,第一时间戳为指纹采集设备采集到待认证指纹信息的时间点。
在上述步骤中,指纹采集设备可以采集用户输入的指纹信息,还可以记录采集到指纹信息的时间,该时间可以以时间戳的形式存在。该实施例中的第一时间戳可以为指纹采集设备采集到待认证指纹信息的时间点,指纹采集设备在采集到用户输入的待认证指纹信息的同时记录第一时间戳,并利用指纹采集设备与虚拟现实设备之间的通信连接将采集到的待认证指纹信息和第一时间戳一起发送给虚拟现实设备。虚拟现实设备在接收到待认证指纹信息和第一时间戳之后,可以利用虚拟现实设备与认证设备之间的通信连接将其发送给认证设备,供认证设备进行认证。
步骤S2064,通过指纹采集设备和通信终端设备将待认证指纹信息和第二时间戳发送给认证设备,其中,第二时间戳为指纹采集设备采集到待认证指纹信息的时间点,指纹采集设备通过与通信终端设备之间建立的连接与通信终端设备进行数据传输;其中,第一时间戳和第二时间戳用于认证设备对待认证指纹信息进行认证。
在上述步骤中,本发明实施例对通信终端设备的类型不做具体限定,例如,通信终端设备可以为手机、电脑等设备。指纹采集设备可以与通信终端设备通信连接,该通信连接可以是有线连接,也可以是无线连接,此处该实施例优选地设置指纹采集设备与通信终端设备之间无线连接,例如蓝牙、WiFi等。利用指纹采集设备与通信终端设备之间无线连接指纹采集设备可以将采集到的待认证指纹信息和第二时间戳发送给通信终端设备,其中,第二时间戳可以为指纹采集设备采集到待认证指纹信息的时间点。通信终端设备可以与认证设备通信连接,该通信连接可以是有线连接,也可以是无线连接,本发明对其不做具体限定。
通信终端设备在接收到待认证指纹信息和第二时间戳之后,可以利用通信终端设备与认证设备之间的通信连接将待认证指纹信息和第二时间戳发送给认证设备,供认证设备进行认证。此处需要说明的是,认证设备中预先存储的指纹信息可以为指纹采集设备采集到指纹信息后通过通信终端设备上报给认 证设备的,而且,在上报指纹信息的同时还将采集到指纹信息的时间点,也即时间戳,一起上报给认证设备,认证设备中可以存储有指纹信息、时间戳以及用户的信息的对应关系。
需要说明的是,该实施例中认证设备可以在认证待认证指纹信息的同时,还要利用第一时间戳和第二时间戳进行进一步认证,即认证第一时间戳和第二时间戳是否匹配,这样能够达到提高认证设备的认证准确度的效果。
作为一种可选的实施例,步骤S208在虚拟现实场景中接收到认证设备发送的认证结果信息可以包括以下步骤:
步骤S2082,在认证设备判断出第一时间戳与第二时间戳匹配、且指纹数据库中存在与待认证指纹信息匹配的指纹信息的情况下,在虚拟现实场景中接收到认证设备发送的第一认证结果信息,其中,第一认证结果信息用于指示待认证指纹信息通过认证;
步骤S2084,在认证设备判断出第一时间戳与第二时间戳不匹配、和/或指纹数据库中不存在与待认证指纹信息匹配的指纹信息的情况下,在虚拟现实场景中接收到认证设备发送的第二认证结果信息,其中,第二认证结果信息用于指示待认证指纹信息未通过认证。
在上述步骤中,认证设备的认证过程可以包括:判断出第一时间戳与第二时间戳是否匹配;指纹数据库中是否存在与待认证指纹信息匹配的指纹信息,其中,指纹数据库可以为认证设备中用于存储指纹信息的数据库。认证设备在执行上述认证之后,如果判断出第一时间戳与第二时间戳匹配、且指纹数据库中存在与待认证指纹信息匹配的指纹信息,则认证设备可以向虚拟现实设备发送第一认证结果信息,其中,第一认证结果信息可以用于指示待认证指纹信息通过认证;如果判断出第一时间戳与第二时间戳不匹配、或者指纹数据库中不存在与待认证指纹信息匹配的指纹信息、或者认证设备判断出第一时间戳与第二时间戳不匹配且指纹数据库中不存在与待认证指纹信息匹配的指纹信息,则认证设备可以向虚拟现实设备发送第二认证结果信息,其中,第二认证结果信息可以用于指示待认证指纹信息未通过认证。
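上述双重认证逻辑(第一时间戳与第二时间戳匹配、且指纹数据库中存在与待认证指纹信息匹配的指纹信息)可用如下示意代码表示。其中,将指纹示意性地表示为可直接比较的特征串、时间戳按容差比较,均为本示例引入的假设,并非对实现方式的限定:

```python
def authenticate(fp_to_verify, ts_first, ts_second, fingerprint_db, tolerance=1.0):
    """认证设备的判断逻辑示意:返回第一或第二认证结果信息。

    fp_to_verify   -- 待认证指纹信息(示意性地用特征串表示)
    ts_first       -- 虚拟现实设备发送的第一时间戳(单位:秒)
    ts_second      -- 通信终端设备发送的第二时间戳(单位:秒)
    fingerprint_db -- 指纹数据库中预先存储的指纹信息集合
    """
    timestamps_match = abs(ts_first - ts_second) <= tolerance
    fingerprint_known = fp_to_verify in fingerprint_db
    if timestamps_match and fingerprint_known:
        return "第一认证结果信息:通过认证"
    # 时间戳不匹配、和/或指纹数据库中不存在匹配指纹
    return "第二认证结果信息:未通过认证"
```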
该实施例中认证设备在认证指纹数据库中是否存在与待认证指纹信息匹 配的指纹信息的同时,还要认证第一时间戳与第二时间戳是否匹配,多重认证机制能够极大地提高认证设备的认证准确度。而且,认证设备在确定认证结果之后,通过及时反馈给虚拟现实设备并在虚拟现实场景中显示,能够使得用户可以直观清楚地获知认证结果,进而提高了用户的使用体验。
本发明还提供了一种优选实施例,该优选实施例以虚拟现实场景中的支付认证为例进行说明。需要说明的是,虚拟现实场景中的支付认证这一应用场景只是本发明的一种优选实施例,本发明实施例还可以应用于虚拟现实场景中的权限认证等场景中。
图3是根据本发明优选实施例的基于虚拟现实场景的认证方法的流程图,如图3所示,该优选实施例可以包括以下步骤:
步骤S301,在虚拟现实场景中显示认证界面。
其中,该认证界面可以用于指示用户执行支付认证。虚拟现实场景中的认证界面可以如图4所示,如图4所示,认证界面中可以显示有支付认证的内容,例如,认证界面中显示有“需支付100元,请输入支付密码”。在如图4所示的虚拟现实场景中还可以现实有其他内容,例如图4所示的电脑,此处图4仅为虚拟现实场景的示意图,并未示出虚拟现实场景中的所有内容。用户在虚拟现实场景中看到该认证界面中的内容之后,可以进行支付认证过程。
步骤S302,将指纹采集设备在虚拟现实场景中产生的指示标识指向认证界面中的认证区域。
该步骤用于指示对该认证区域中所指示的认证内容进行认证。需要说明的是,现实场景中的指纹采集设备的结构可以如图5所示,在指纹采集设备中可以设置有指纹采集区域以及相应功能操作区域,例如游戏功能按键。在如图4所示的虚拟现实场景中,利用指纹采集设备控制指示标识指向认证区域可以如图6所示,其中,图6中的指示线用于表示指纹采集设备在虚拟现实场景中产生的指示标识指向虚拟现实场景中的认证区域。
步骤S303,在虚拟现实场景中提示用户在指纹采集设备中输入待认证指纹。
当指示标识指向认证区域时,在虚拟现实场景中可以显示提示信息,该提示信息可以用于提示用户在指纹采集设备中输入待认证指纹信息。
步骤S304,用户在指纹采集设备中输入待认证指纹信息。需要说明的是,指纹采集设备在采集待认证指纹信息的同时,还可以记录采集到待认证指纹信息的时间,该时间以时间戳的形式记录。
步骤S305,虚拟现实设备可以接收指纹采集设备采集到的待认证指纹信息和时间戳,并将其发送给现实场景中的认证设备。
需要说明的是,现实场景中的认证设备可以是支付宝、银行支付平台等。
步骤S306,指纹采集设备可以将采集到的待认证指纹信息和时间戳发送给与指纹采集设备具有通信连接的通信终端设备。
例如,通信终端设备可以为手机、电脑等,此处通信连接可以是蓝牙、WiFi等。
步骤S307,通信终端设备可以将接收到的待认证指纹信息和时间戳发送给现实设备中的认证设备。
该步骤用于供认证设备根据该信息对虚拟现实设备发送的待认证指纹信息和时间戳进行认证。
步骤S308,现实场景中的认证设备对待认证指纹信息进行认证。
认证设备可以根据通信终端发送的待认证指纹信息和时间戳对虚拟现实设备发送的待认证指纹信息和时间戳进行认证。具体地,认证设备的认证过程可以包括:判断虚拟现实设备发送的时间戳与通信终端设备发送的时间戳是否匹配;判断指纹数据库中是否存在与虚拟现实设备发送的待认证指纹信息相同的指纹信息,其中,指纹数据库中的存储的指纹信息可以为现实场景中指纹采集设备采集的、并由通信终端设备发送的指纹信息。如果上述认证步骤中的任意一个不满足时,认证设备确定待认证指纹信息未通过认证;如果上述认证步骤均满足时,认证设备确定待认证指纹信息通过认证。
步骤S309,在虚拟现实场景中输出认证设备的认证结果信息。
其中,认证结果信息为认证通过或认证未通过。在虚拟现实场景中显示认证结果信息可以如图7所示,在图7中显示的认证结果信息为认证通过。
该优选实施例无需在虚拟现实场景中建立认证系统,而是通过虚拟现实设备,例如头盔显示器、光阀眼镜等,与现实场景中的指纹采集设备和认证设备进行数据交互,以实现在虚拟现实场景中进行支付认证。
图8是根据本发明优选实施例的虚拟现实场景与现实场景之间数据交互过程的示意图,如图8所示,虚拟现实现实设备、指纹采集设备以及认证设备之间的数据交互过程具体包括:指纹采集设备在虚拟现实场景中产生的指示标识指向虚拟现实场景中的认证区域;在虚拟现实场景中显示提示信息,提示用户在指纹采集设备上输入待认证指纹信息;指纹采集设备在采集到待认证指纹信息时,将采集到的待认证指纹信息以及采集到待认证指纹信息的时间点以时间戳的形式发送给虚拟现实设备;虚拟现实设备可以将接收到的待认证指纹信息以及对应的时间戳发送给现实场景中的认证设备进行认证。
需要说明的是,在指纹采集设备采集到指纹信息时,可以将采集到的指纹信息以及对应的时间戳发送给与指纹采集设备具有通信连接的通信终端设备,例如手机、电脑等;通信终端设备可以将指纹采集设备采集到的指纹信息以及对应的时间戳发送给认证设备,认证设备将其存储的指纹数据库中。认证设备在接收到虚拟现实设备发送的待认证指纹信息以及对应的时间戳之后,可以根据指纹数据库中存储的信息对待认证指纹信息进行认证,并向虚拟现实设备反馈认证结果信息;如果认证通过,则向虚拟现实设备返回认证通过信息;如果认证未通过,则在虚拟现实场景中输出显示用户指示认证未通过的指示信息。
该优选实施例,通过虚拟现实设备与现实场景中的指纹采集设备和认证设备之间的数据交互过程,可以达到无需在虚拟现实场景中建立支付认证系统也能够实现支付认证的目的,进而解决相关技术在虚拟现实场景中支付时需要在虚拟现实场景中建立支付认知系统,这样将会导致在虚拟现实场景中支付的效率较低的技术问题,从而实现提高在虚拟现实场景中支付的效率的技术效果。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可以采用其他顺序或者同时进行。 其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本发明各个实施例所述的方法。
实施例2
根据本发明实施例,还提供了一种用于实施上述基于虚拟现实场景的认证方法的基于虚拟现实场景的认证装置。需要说明的是,该实施例中的基于虚拟现实场景的认证装置可以应用于本发明中的虚拟现实设备中。图9是根据本发明实施例的一种可选的基于虚拟现实场景的认证装置的示意图,如图9所示,该装置可以包括:
第一接收单元22,用于在虚拟现实场景中接收到认证请求;采集单元24,用于通过现实场景中的指纹采集设备采集待认证指纹信息;发送单元26,用于将待认证指纹信息发送至现实场景中的认证设备;第二接收单元28,用于在虚拟现实场景中接收认证设备发送的认证结果信息,其中,认证结果信息用于指示待认证指纹信息通过认证或未通过认证。
需要说明的是,该实施例中的第一接收单元22可以用于执行本申请实施例1中的步骤S202,该实施例中的采集单元24可以用于执行本申请实施例1中的步骤S204,该实施例中的发送单元26可以用于执行本申请实施例1中的步骤S206,该实施例中的第二接收单元28可以用于执行本申请实施例1中的步骤S208。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。
作为一种可选的实施例,如图10所示,该实施例还可以包括:判断单元232,用于在虚拟现实场景中接收到认证请求之后,且在通过现实场景中的指纹采集设备采集待认证指纹信息之前,判断指示标识是否指向虚拟现实场景中的认证区域,其中,指示标识是指纹采集设备在虚拟现实场景中产生的;第一显示单元234,用于在判断出指示标识指向认证区域时,在虚拟现实场景中显示提示信息,其中,提示信息用于提示输入待认证指纹信息。
需要说明的是,该实施例中的判断单元232可以用于执行本申请实施例1中的步骤S2032,该实施例中的第一显示单元234可以用于执行本申请实施例1中的步骤S2034。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。
作为一种可选的实施例,如图11所示,该实施例还可以包括:执行单元212,用于在虚拟现实场景中接收到认证设备发送的认证结果信息之后,在认证结果信息指示待认证指纹信息通过认证时,在虚拟现实场景中执行与认证区域对应的资源转移事件。
需要说明的是,该实施例中的执行单元212可以用于执行本申请实施例1中的步骤S212。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。
作为一种可选的实施例,如图12所示,发送单元26可以包括:第一发送模块262,用于从虚拟现实场景将第一时间戳发送给认证设备,其中,第一时间戳为指纹采集设备采集到待认证指纹信息的时间点;第二发送模块264,用于通过指纹采集设备和通信终端设备将待认证指纹信息和第二时间戳发送给 认证设备,其中,第二时间戳为指纹采集设备采集到待认证指纹信息的时间点,指纹采集设备通过与通信终端设备之间建立的连接与通信终端设备进行数据传输;其中,第一时间戳和第二时间戳用于认证设备对待认证指纹信息进行认证。
需要说明的是,该实施例中的第一发送模块262可以用于执行本申请实施例1中的步骤S2062,该实施例中的第二发送模块264可以用于执行本申请实施例1中的步骤S2064。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。
作为一种可选的实施例,如图13所示,第二接收单元28可以包括:第一接收模块282,用于在认证设备判断出第一时间戳与第二时间戳匹配、且指纹数据库中存在与待认证指纹信息匹配的指纹信息的情况下,在虚拟现实场景中接收到认证设备发送的第一认证结果信息,其中,第一认证结果信息用于指示待认证指纹信息通过认证;第二接收模块284,用于在认证设备判断出第一时间戳与第二时间戳不匹配、和/或指纹数据库中不存在与待认证指纹信息匹配的指纹信息的情况下,在虚拟现实场景中接收到认证设备发送的第二认证结果信息,其中,第二认证结果信息用于指示待认证指纹信息未通过认证。
需要说明的是,该实施例中的第一接收模块282可以用于执行本申请实施例1中的步骤S2082,该实施例中的第二接收模块284可以用于执行本申请实施例1中的步骤S2084。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。
作为一种可选的实施例,如图14所示,该实施例还可以包括:第二显示单元210,用于在虚拟现实场景中接收到认证设备发送的认证结果信息之后, 在虚拟现实场景中显示认证结果信息。
需要说明的是,该实施例中的第二显示单元210可以用于执行本申请实施例1中的步骤S210。
此处需要说明的是,上述模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。需要说明的是,上述模块作为装置的一部分可以运行在如图1所示的硬件环境中,可以通过软件实现,也可以通过硬件实现。
通过上述模块,可以达到无需在虚拟现实场景中建立支付认证系统也能够实现支付认证的目的,进而解决相关技术在虚拟现实场景中支付时需要在虚拟现实场景中建立支付认知系统,这样将会导致在虚拟现实场景中支付的效率较低的技术问题,从而实现提高在虚拟现实场景中支付的效率的技术效果。
实施例3
根据本发明实施例,还提供了一种用于实施上述基于虚拟现实场景的认证方法的服务器或终端。需要说明的是,该实施例中的服务器或终端可以应用于本发明的虚拟现实设备中,其中,虚拟现实设备可以呈现虚拟现实场景,虚拟现实设备可以用于执行实施例1中的步骤,以实现在虚拟现实场景中进行认证。
图15是根据本发明实施例的一种终端的结构框图,如图15所示,该终端可以包括:一个或多个(图中仅示出一个)处理器201、存储器203、以及传输装置205,如图15所示,该终端还可以包括输入输出设备207。
其中,存储器203可用于存储软件程序以及模块,如本发明实施例中的基于虚拟现实场景的认证方法和装置对应的程序指令/模块,处理器201通过运行存储在存储器203内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的基于虚拟现实场景的认证方法。存储器203可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器203可进一步包括相对于处理器201远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
上述的传输装置205用于经由一个网络接收或者发送数据,还可以用于处理器与存储器之间的数据传输。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置205包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置205为射频(Radio Frequency,RF)模块,其用于通过无线方式与互联网进行通讯。
其中,具体地,存储器203用于存储应用程序。
处理器201可以通过传输装置205调用存储器203存储的应用程序,以执行下述步骤:在虚拟现实场景中接收到认证请求;通过现实场景中的指纹采集设备采集待认证指纹信息;将待认证指纹信息发送至现实场景中的认证设备;在虚拟现实场景中接收认证设备发送的认证结果信息,其中,认证结果信息用于指示待认证指纹信息通过认证或未通过认证。
处理器201还用于执行下述步骤:在虚拟现实场景中接收到认证请求之后,且在通过现实场景中的指纹采集设备采集待认证指纹信息之前,判断指示标识是否指向虚拟现实场景中的认证区域,其中,指示标识是指纹采集设备在虚拟现实场景中产生的;在判断出指示标识指向认证区域时,在虚拟现实场景中显示提示信息,其中,提示信息用于提示输入待认证指纹信息。
处理器201还用于执行下述步骤:在虚拟现实场景中接收到认证设备发送的认证结果信息之后,在认证结果信息指示待认证指纹信息通过认证时,在虚拟现实场景中执行与认证区域对应的资源转移事件。
处理器201还用于执行下述步骤:从虚拟现实场景将第一时间戳发送给认证设备,其中,第一时间戳为指纹采集设备采集到待认证指纹信息的时间点;通过指纹采集设备和通信终端设备将待认证指纹信息和第二时间戳发送给认证设备,其中,第二时间戳为指纹采集设备采集到待认证指纹信息的时间点,指纹采集设备通过与通信终端设备之间建立的连接与通信终端设备进行数据传输;其中,第一时间戳和第二时间戳用于认证设备对待认证指纹信息进行认证。
处理器201还用于执行下述步骤:在认证设备判断出第一时间戳与第二时 间戳匹配、且指纹数据库中存在与待认证指纹信息匹配的指纹信息的情况下,在虚拟现实场景中接收到认证设备发送的第一认证结果信息,其中,第一认证结果信息用于指示待认证指纹信息通过认证;在认证设备判断出第一时间戳与第二时间戳不匹配、和/或指纹数据库中不存在与待认证指纹信息匹配的指纹信息的情况下,在虚拟现实场景中接收到认证设备发送的第二认证结果信息,其中,第二认证结果信息用于指示待认证指纹信息未通过认证。
处理器201还用于执行下述步骤:在虚拟现实场景中接收到认证设备发送的认证结果信息之后,在虚拟现实场景中显示认证结果信息。
采用本发明实施例,提供了一种基于虚拟现实场景的认证的方案。通过在虚拟现实场景中接收到认证请求时,将现实场景中的指纹采集设备采集到的待认证指纹信息发送至现实场景中的认证设备进行认证,达到了无需在虚拟现实场景中建立支付认证系统也能够实现支付认证的目的,进而解决了相关技术在虚拟现实场景中支付时需要在虚拟现实场景中建立支付认知系统,这样将会导致在虚拟现实场景中支付的效率较低的技术问题,从而实现了提高在虚拟现实场景中支付的效率的技术效果。
可选地,本实施例中的具体示例可以参考上述实施例1和实施例2中所描述的示例,本实施例在此不再赘述。
本领域普通技术人员可以理解,图15所示的结构仅为示意,终端可以是能够呈现虚拟现实场景的头盔显示器、光阀眼镜等终端设备。图15其并不对上述电子装置的结构造成限定。例如,终端还可包括比图15中所示更多或者更少的组件(如网络接口、显示装置等),或者具有与图15所示不同的配置。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
实施例4
本发明的实施例还提供了一种存储介质。可选地,在本实施例中,上述存 储介质可以用于执行基于虚拟现实场景的认证方法的程序代码。需要说明的是,该实施例中的存储介质可以应用于本发明的虚拟现实设备中,例如头盔显示器、光阀眼镜等。虚拟现实设备利用该实施例的存储介质可以执行本发明实施例的基于虚拟现实场景的认证方法,以实现在虚拟现实场景中进行认证。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的程序代码:
S1,在虚拟现实场景中接收到认证请求;
S2,通过现实场景中的指纹采集设备采集待认证指纹信息;
S3,将待认证指纹信息发送至现实场景中的认证设备;
S4,在虚拟现实场景中接收认证设备发送的认证结果信息,其中,认证结果信息用于指示待认证指纹信息通过认证或未通过认证。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在虚拟现实场景中接收到认证请求之后,且在通过现实场景中的指纹采集设备采集待认证指纹信息之前,判断指示标识是否指向虚拟现实场景中的认证区域,其中,指示标识是指纹采集设备在虚拟现实场景中产生的;在判断出指示标识指向认证区域时,在虚拟现实场景中显示提示信息,其中,提示信息用于提示输入待认证指纹信息。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在虚拟现实场景中接收到认证设备发送的认证结果信息之后,在认证结果信息指示待认证指纹信息通过认证时,在虚拟现实场景中执行与认证区域对应的资源转移事件。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:从虚拟现实场景将第一时间戳发送给认证设备,其中,第一时间戳为指纹采集设备采集到待认证指纹信息的时间点;通过指纹采集设备和通信终端设备将待认证指纹信息和第二时间戳发送给认证设备,其中,第二时间戳为指纹采集设备采集到待认证指纹信息的时间点,指纹采集设备通过与通信终端设备之间建立的连接与通信终端设备进行数据传输;其中,第一时间戳和第二时间戳用于认证设 备对待认证指纹信息进行认证。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在认证设备判断出第一时间戳与第二时间戳匹配、且指纹数据库中存在与待认证指纹信息匹配的指纹信息的情况下,在虚拟现实场景中接收到认证设备发送的第一认证结果信息,其中,第一认证结果信息用于指示待认证指纹信息通过认证;在认证设备判断出第一时间戳与第二时间戳不匹配、和/或指纹数据库中不存在与待认证指纹信息匹配的指纹信息的情况下,在虚拟现实场景中接收到认证设备发送的第二认证结果信息,其中,第二认证结果信息用于指示待认证指纹信息未通过认证。
可选地,存储介质还被设置为存储用于执行以下步骤的程序代码:在虚拟现实场景中接收到认证设备发送的认证结果信息之后,在虚拟现实场景中显示认证结果信息。
可选地,本实施例中的具体示例可以参考上述实施例1和实施例2中所描述的示例,本实施例在此不再赘述。
在另一个实施例中,如前文所述,在虚拟现实场景中接收到认证请求之后,且在通过现实场景中的指纹采集设备采集待认证指纹信息之前,还包括判断指示标识是否指向虚拟现实场景中的认证区域的步骤。其中,所述指示标识是指纹采集设备在虚拟现实场景中产生的。即在基于虚拟现实场景进行认证时,还需指示标识选中认证区域。为此,本发明实施例还提供了一种虚拟物体选取方法,来实现在虚拟现实场景中通过操作焦点来选取虚拟物体。其中,下述实施例中提及的操作焦点指代输入设备在三维虚拟环境中所对应的点,即相当于基于虚拟现实场景的认证方案中的指示标识。其中,在虚拟物体的选取方案中虚拟物体的一种具体表现形式可为基于虚拟现实场景的认证方案中的认证区域。下面通过实施例5至实施例7对虚拟物体的选取方案进行详细说明。
实施例5
请参考图16,其示出了本发明一个实施例提供的虚拟现实(Virtual Reality,VR)系统的结构示意图。该VR系统包括:头戴式显示器120、处理单元140和输入设备160。
头戴式显示器120是用于佩戴在用户头部进行图像显示的显示器。头戴式显示器120通常包括佩戴部和显示部,佩戴部包括用于将头戴式显示器120佩戴在用户头部的眼镜腿及弹性带,显示部包括左眼显示屏和右眼显示屏。头戴式显示器120能够在左眼显示屏和右眼显示屏显示不同的图像,从而为用户模拟出三维虚拟环境。
头戴式显示器120通过柔性电路板或硬件接口与处理单元140电性相连。
处理单元140通常集成在头戴式显示器120的内部。处理单元140用于建模三维虚拟环境、生成三维虚拟环境所对应的显示画面、生成三维虚拟环境中的虚拟物体等。处理单元140接收输入设备160的输入信号,并生成头戴式显示器120的显示画面。处理单元140通常由设置在电路板上的处理器、存储器、图像处理单元等电子器件实现。可选地,处理单元140还包括运动传感器,用于捕捉用户的头部动作,并根据用户的头部动作改变头戴式显示器120中的显示画面。
处理单元140通过线缆、蓝牙连接或WiFi连接与输入设备160相连。
输入设备160是体感手套、体感手柄、遥控器、跑步机、鼠标、键盘、人眼聚焦设备等输入外设。可选地,输入设备160中设置有物理按键和运动传感器。物理按键用于接收用户触发的操作指令,运动传感器用于采集输入设备160的空间姿态。作为运动传感器的一种,重力加速度传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向;陀螺仪传感器可检测各个方向上角速度的大小,检测出输入设备160的旋转动作。输入设备160在接收到操作指令时,向处理单元140发送操作指令;输入设备160在发生移动和/或旋转时,向处理单元140发送移动数据和/或旋转数据。
请参考图17,其示出了本发明一个实施例提供的虚拟物体选取方法的方法流程图。本实施例以该虚拟物体选取方法应用于图16所示的VR系统中来举例说明。该方法包括:
步骤1701,在三维虚拟环境中确定操作焦点的位置,操作焦点是输入设备在三维虚拟环境中所对应的点,三维虚拟环境中包括虚拟物体,虚拟物体包括有用于接受操作的受控点;
三维虚拟环境是由处理单元建模得到的虚拟环境。该三维虚拟环境可以是一个房间、一栋建筑、一个游戏场景等。可选地,三维虚拟环境包括:x轴、y 轴和z轴所形成的虚拟坐标系。x轴、y轴和z轴中任意两个轴之间垂直。
三维虚拟环境中包括若干个虚拟物体,每个虚拟物体在三维虚拟环境中具有对应的三维坐标。每个虚拟物体具有一个或一个以上的受控点。比如,虚拟物体是一个盒子,该盒子的中心点是受控点32,如图18A所示;又比如,虚拟物体是牙刷,该牙刷的刷柄内的一个点是受控点32,如图18B所示;再比如,虚拟物体是一个棍子,该棍子的两端内分别有一个受控点32,如图18C所示。
操作焦点是输入设备在三维虚拟环境中所对应的点,操作焦点用于指示输入设备在三维虚拟环境中的操作位置。可选地,操作焦点在三维虚拟环境中具有对应的三维坐标。当输入设备发生移动时,操作焦点也会发生移动。
步骤1702,以操作焦点为基准位置,确定操作焦点的三维操作范围;
可选地,三维操作范围是以操作焦点为球心的圆球状范围。比如,三维操作范围是以操作焦点为球心,半径为20厘米的圆球状范围。
随着操作焦点在三维虚拟环境中的移动,三维操作范围在三维虚拟环境中也会发生移动。
控制单元以操作焦点为基准位置,实时确定操作焦点的三维操作范围。
步骤1703,在接收到操作指令时,将受控点位于三维操作范围内的虚拟物体确定为被选取的虚拟物体。
受控点是虚拟物体上用于接受操作的点。可选地,受控点在三维虚拟环境中具有对应的三维坐标。当虚拟物体发生移动时,受控点也会发生移动。
操作指令是输入设备接收到的指令。操作指令包括:选取物体指令、拾取物体指令、打开物体指令、使用物体指令、拍打物体指令、攻击指令等指令中的任意一种。本实施例对操作指令的操作类型不加以限定,视具体的实施例所决定。
可选地,在处理单元接收到操作指令时,处理单元检测三维操作范围内是否存在虚拟物体的受控点;当三维操作范围内存在一个虚拟物体的受控点,则将受控点位于三维操作范围内的虚拟物体确定为被选取的虚拟物体。
示意性的,参考图18D,三维虚拟环境30中包括:虚拟桌子31和虚拟盒子33,虚拟盒子33放置在虚拟桌子31上,控制单元以操作焦点35为球心,确定球形的三维操作范围37。当操作焦点35发生移动时,三维操作范围37也发生移动。当控制单元接收到打开物体指令时,控制单元检测三维操作范围37 内是否存在虚拟物体的受控点32,当虚拟盒子33的受控点32位于三维操作范围37内时,控制单元将该虚拟盒子33确定为被选取的虚拟物体。
可选地,当三维操作范围内不存在虚拟物体的受控点时,处理单元不响应操作指令;当三维操作范围内存在一个虚拟物体的受控点时,处理单元直接将该虚拟物体作为被选取的虚拟物体;当三维操作范围内存在至少两个虚拟物体的受控点时,处理单元从至少两个虚拟物体的受控点中,自动选择出一个虚拟物体作为被选取的虚拟物体。
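以圆球状三维操作范围为例,上述"范围内无受控点则不响应、有一个则直接选取、有多个则自动选择"的流程可示意如下。其中半径取值与自动选择策略(此处以受控点距操作焦点最近为例)均为示意性假设:

```python
import math

def pick_object(focus, objects, radius=0.2):
    """objects -- {虚拟物体名称: 受控点坐标 (x, y, z)};
    返回被选取的虚拟物体名称;范围内无受控点时返回 None。"""
    # 求交集:受控点落入以操作焦点为球心的三维操作范围内的虚拟物体
    hits = {n: p for n, p in objects.items() if math.dist(focus, p) <= radius}
    if not hits:
        return None                       # 不响应操作指令
    if len(hits) == 1:
        return next(iter(hits))           # 直接作为被选取的虚拟物体
    # 多个受控点在范围内:自动选择(示意:取距离最小者)
    return min(hits, key=lambda n: math.dist(focus, hits[n]))
```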
综上所述,本实施例提供的虚拟物体选取方法,通过以操作焦点为基准位置确定操作焦点的三维操作范围,仅需要为一个操作焦点确定三维操作范围即可,不需要为每个虚拟物体的受控点设置响应范围,解决了基于每个虚拟物体的受控点设置响应范围,当三维虚拟环境中的虚拟物体较多时需要耗费处理单元的大量计算资源的问题;达到了不论虚拟物体有多少个,只需要为一个操作焦点确定一个三维操作范围,从而节约了处理单元的大量计算资源的效果。
在图18D中,以三维操作范围中存在一个虚拟物体的受控点为例,但是在更多的实施场景中,受控点位于三维操作范围的虚拟物体为两个或两个以上。此时,步骤1703可替代实现成为步骤1703a和步骤1703b,如图19所示:
步骤1703a,在接收到操作指令时,确定受控点位于三维操作范围内的虚拟物体;
在接收到操作指令时,处理单元检测三维操作范围内是否存在虚拟物体的受控点。该检测过程可以由三维操作范围与受控点之间的求交集运算或碰撞检测运算实现。
步骤1703b,在三维操作范围内存在的虚拟物体为至少两个时,按照虚拟物体的属性信息确定出被选取的虚拟物体。
处理单元根据虚拟物体的属性信息,从两个或两个以上虚拟物体中自动确定出一个虚拟物体作为被选取的虚拟物体。
其中,虚拟物体的属性信息包括:虚拟物体的物体类型、虚拟物体的优先级、虚拟物体的受控点与操作焦点之间的距离中的至少一种。
综上所述,本实施例提供的虚拟物体选取方法,通过在虚拟物体为至少两个时,由处理单元自动选择出一个被选取的虚拟物体,减少用户的操作步骤和时间成本,并且能够兼顾用户的自主选择意愿和自动选择虚拟物体的便捷性。
由于虚拟物体的属性信息包括三种信息中的至少一种。当虚拟物体的属性信息包括虚拟物体的物体类型时,参考如下图20A所示实施例;当虚拟物体的属性信息包括虚拟物体的优先级时,参考如下图21A所示实施例;当虚拟物体的属性信息包括虚拟物体的受控点与操作焦点之间的距离时,参考如下图22A所示实施例。
请参考图20A,其示出了本发明一个实施例提供的虚拟物体选取方法的方法流程图。本实施例以该虚拟物体选取方法应用于图16所示的VR系统来举例说明。该方法包括:
步骤501,在三维虚拟环境中确定操作焦点的位置,操作焦点是输入设备在三维虚拟环境中所对应的点,三维虚拟环境中包括虚拟物体,虚拟物体包括有用于接受操作的受控点;
在VR系统运行后,处理单元建模得到三维虚拟环境,输入设备在三维虚拟环境中对应有操作焦点。处理单元根据输入设备在实际环境中的空间位置,确定操作焦点在三维虚拟环境中的位置。
当输入设备发生移动时,输入设备向处理单元发送移动数据,处理单元根据移动数据将操作焦点在三维虚拟环境中进行移动。可选地,如果操作焦点是具有方向性的操作焦点,比如手型操作焦点或者枪型操作焦点,当输入设备发生旋转时,输入设备向处理单元发送旋转数据,处理单元根据旋转数据将操作焦点在三维虚拟环境中进行旋转。
其中,移动数据用于指示输入设备在x轴、y轴和/或z轴上的移动距离;旋转数据用于指示输入设备在x轴、y轴和/或z轴上的旋转角度。
步骤502,以操作焦点为基准位置,确定操作焦点的三维操作范围;
示意性的,处理单元以操作焦点为球心确定球形的三维操作范围,作为操作焦点的三维操作范围。
可选地,当操作焦点移动时,该操作焦点的三维操作范围也会发生移动。
步骤503,在接收到操作指令时,获取操作指令对应的操作类型;
用户在输入设备上触发操作指令。触发方式包括但不限于:按压输入设备上的物理按键、使用输入设备做出预定手势、摇晃输入设备等。
其中,操作指令的操作类型包括但不限于:选取物体、拾取物体、打开物 体、使用物体、拍打物体、攻击中的至少一种。
比如,按压输入设备上的物理按键A时,触发拾取物体指令;按压输入设备上的物理按键B时,触发打开物体指令。
输入设备将操作指令发送给处理单元,处理单元在接收到操作指令后,确定操作指令的操作类型。示意性的,操作指令的操作类型是打开物体。
步骤504,确定受控点位于三维操作范围内的虚拟物体;
可选地,处理单元对三维操作范围和虚拟物体的受控点进行求交集计算,当存在交集时,确定虚拟物体的受控点位于三维操作范围内。
可选地,当三维操作范围内不存在虚拟物体的受控点时,处理单元不响应操作指令;当三维操作范围内存在一个虚拟物体的受控点时,处理单元将该虚拟物体确定为被选取的虚拟物体。
当三维操作范围内的虚拟物体为至少两个时,进入步骤505。
步骤505,在虚拟物体为至少两个时,确定每个虚拟物体对应的物体类型;
每个虚拟物体对应各自的物体类型,物体类型包括但不限于:墙壁、柱子、桌子、椅子、杯子、水壶、盘子、植物、人物、岩石等各种类型,本实施例对该物体类型的划分形式不加以限定。
步骤506,从每个虚体物体对应的物体类型中,确定出与操作类型匹配的目标物体类型,目标物体类型是具有响应操作指令的能力的类型;
对于每一种操作指令来讲,并不一定是所有的虚拟物体都能够响应该操作指令。比如,盒子具有响应打开物体指令的能力,但勺子不具有响应打开物体指令的能力;又比如,杯子具有响应拾取物体指令的能力,但墙壁不具有响应拾取物体指令的能力。拾取物体指令是用于将物体拾取到虚拟手中的指令。
可选地,处理单元中存储有操作类型和物体类型之间的匹配关系。下表一示意性的示出了该对应关系。
表一
操作类型 匹配的物体类型
打开物体指令 盒子、水壶、箱子、柜子、井盖
拾取物体指令 杯子、盘子、勺子、武器、笔、书籍
攻击指令 动物、人物
处理单元根据预存的匹配关系,确定出与操作类型匹配的目标物体类型,目标物体类型是具有响应操作指令的能力的类型。
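表一所示的"操作类型—匹配的物体类型"关系可以用一个简单的查找表示意如下。表中的类型名称沿用上文,具体匹配关系由实施例预先配置,此处仅为示意:

```python
# 预存的操作类型与物体类型之间的匹配关系(对应上文表一)
MATCH_TABLE = {
    "打开物体指令": {"盒子", "水壶", "箱子", "柜子", "井盖"},
    "拾取物体指令": {"杯子", "盘子", "勺子", "武器", "笔", "书籍"},
    "攻击指令": {"动物", "人物"},
}

def filter_by_type(op_type, candidates):
    """candidates -- {虚拟物体名称: 物体类型};
    返回物体类型与操作类型匹配(即具有响应该操作指令的能力)的虚拟物体名称列表。"""
    target_types = MATCH_TABLE.get(op_type, set())
    return [name for name, t in candidates.items() if t in target_types]
```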
步骤507,将具有目标物体类型的虚拟物体,确定为被选取的虚拟物体;
示意性的参考图20B,三维虚拟环境50中包括:虚拟水壶51、虚拟杯子52。控制单元以操作焦点53为中心点,确定出球状的三维操作范围54。当控制单元接收到打开物体指令时,确定出虚拟水壶51的受控点和虚拟杯子52的受控点位于三维操作范围54内,控制单元确定虚拟水壶51的物体类型与操作类型匹配,虚拟杯子52的物体类型与操作类型不匹配,控制单元将虚拟水壶51确定为被选取的虚拟物体。
步骤508,控制被选取的虚拟物体对操作指令进行响应。
处理单元控制虚拟水壶51对打开物体指令进行响应,比如控制虚拟水壶51展现出打开水壶盖的动画。
综上所述,本实施例提供的虚拟物体选取方法,在三维操作范围内的虚拟物体为两个或两个以上时,通过与操作类型匹配的物体类型自动选择出一个虚拟物体作为被选取的虚拟物体,实现了既能满足用户自身的选择意愿,又能实现对虚拟物体的自动选取,减少用户在多个虚拟物体时的选取操作次数,高效智能的帮助用户选取合适的虚拟物体。
请参考图21A,其示出了本发明一个实施例提供的虚拟物体选取方法的方法流程图。本实施例以该虚拟物体选取方法应用于图16所示的VR系统来举例说明。该方法包括:
步骤601,在三维虚拟环境中确定操作焦点的位置,操作焦点是输入设备在三维虚拟环境中所对应的点,三维虚拟环境中包括虚拟物体,虚拟物体包括有用于接受操作的受控点;
步骤602,以操作焦点为基准位置,确定操作焦点的三维操作范围;
示意性的,处理单元以操作焦点为球心确定椭球形的三维操作范围,作为操作焦点的三维操作范围。
可选地,当操作焦点移动时,该操作焦点的三维操作范围也会发生移动。
步骤603,在接收到操作指令时,确定受控点位于三维操作范围内的虚拟物体;
示意性的,操作指令是打开物体指令。
可选地,处理单元对三维操作范围和虚拟物体的受控点进行求交集计算,当存在交集时,确定虚拟物体的受控点位于三维操作范围内。
可选地,当三维操作范围内不存在虚拟物体的受控点时,处理单元不响应操作指令;当三维操作范围内存在一个虚拟物体的受控点时,处理单元将该虚拟物体确定为被选取的虚拟物体。
当三维操作范围内的虚拟物体为至少两个时,进入步骤604。
步骤604,在虚拟物体为至少两个时,确定每个虚拟物体的优先级;
可选地,每个虚拟物体的优先级是预设的优先级。或者,每个虚拟物体的优先级与历史使用次数呈正相关关系,历史使用次数越多,优先级越高。
示意性的参考图21B,三维虚拟环境60中包括:虚拟圆盒子61、虚拟方盒子62。控制单元以操作焦点63为中心点,确定出椭球状的三维操作范围64。当控制单元接收到打开物体指令时,确定出虚拟物体61和虚拟物体62位于三维操作范围64内,控制单元确定虚拟圆盒子61具有预设的优先级2,虚拟方盒子62具有预设的优先级1。
步骤605,将具有最高优先级的虚拟物体,确定为被选取的虚拟物体。
由于优先级1大于优先级2,所以控制单元将虚拟方盒子62确定为被选取的虚拟物体。
步骤606,控制被选取的虚拟物体对操作指令进行响应。
处理单元打开虚拟方盒子62。
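按优先级自动选取的步骤可示意如下。示例沿用上文约定:优先级数值越小优先级越高(优先级1高于优先级2);优先级可以是预设的,也可以与历史使用次数正相关,此处不作限定:

```python
def pick_by_priority(candidates):
    """candidates -- {虚拟物体名称: 优先级数值(数值越小优先级越高)};
    返回具有最高优先级的虚拟物体名称。"""
    return min(candidates, key=candidates.get)
```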
综上所述,本实施例提供的虚拟物体选取方法,在三维操作范围内的虚拟物体为两个或两个以上时,通过优先级高低来自动选择出一个虚拟物体作为被选取的虚拟物体,实现了既能满足用户自身的选择意愿,又能实现对虚拟物体的自动选取,减少用户在多个虚拟物体时的选取操作次数,高效智能的帮助用户选取合适的虚拟物体。
请参考图22A,其示出了本发明一个实施例提供的虚拟物体选取方法的方法流程图。本实施例以该虚拟物体选取方法应用于图16所示的VR系统来举例说明。该方法包括:
步骤701,在三维虚拟环境中确定操作焦点的位置,操作焦点是输入设备在三维虚拟环境中所对应的点,三维虚拟环境中包括虚拟物体,虚拟物体包括有用于接受操作的受控点;
步骤702,以操作焦点为基准位置,确定操作焦点的三维操作范围;
示意性的,操作焦点为具有方向性的手型操作焦点,处理单元以手型操作焦点为起始点,手心向外方向为中心线,确定出圆锥形状的三维操作范围。
可选地,当操作焦点移动时,该操作焦点的三维操作范围也会发生移动;当操作焦点发生旋转时,该操作焦点的三维操作范围也会发生旋转。
步骤703,在接收到操作指令时,确定受控点位于三维操作范围内的虚拟物体;
示意性的,操作指令是拾取物体指令。
可选地,处理单元对三维操作范围和虚拟物体的受控点进行求交集计算,当存在交集时,确定虚拟物体的受控点位于三维操作范围内。
可选地,当三维操作范围内不存在虚拟物体的受控点时,处理单元不响应操作指令;当三维操作范围内存在一个虚拟物体的受控点时,处理单元将该虚拟物体确定为被选取的虚拟物体。
当三维操作范围内的虚拟物体为至少两个时,进入步骤704。
步骤704,在虚拟物体为至少两个时,确定每个虚拟物体的受控点与操作焦点之间的距离;
在三维操作范围内的虚拟物体为两个或两个以上时,处理单元计算每个虚拟物体的受控点与操作焦点之间的距离。
设操作焦点A在三维虚拟环境中的坐标为(x1,y1,z1);虚拟物体的受控点B在三维虚拟环境中的坐标为(x2,y2,z2),则操作焦点A与受控点B之间的距离d为:
d = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²)
可选地,若一个虚拟物体包括多个受控点,则处理单元计算该虚拟物体的每个受控点与操作焦点之间的距离,取最小的一个距离作为该虚拟物体的受控点与操作焦点之间的距离。
步骤705,将具有最小距离的虚拟物体,确定为被选取的虚拟物体。
在计算出每个虚拟物体的受控点与操作焦点之间的距离后,处理单元将具有最小距离的虚拟物体,确定为被选取的虚拟物体。
示意性的参考图22B,三维虚拟环境70中包括:虚拟物体71、虚拟物体72、虚拟物体73。控制单元以手型操作焦点74为起始点,手心向外方向为中心线,确定出圆锥形状的三维操作范围75。当控制单元接收到拾取物体指令时,确定出虚拟物体71、虚拟物体72、虚拟物体73位于三维操作范围75内,控制单元计算虚拟物体71的受控点与手型操作焦点74之间的距离1,计算虚拟 物体72的受控点与手型操作焦点74之间的距离2,计算虚拟物体73的受控点与手型操作焦点74之间的距离3。由于距离1<距离2<距离3,所以控制单元将虚拟物体71确定为被选取的虚拟物体。
步骤706,控制被选取的虚拟物体对操作指令进行响应。
示意性的，操作指令是拾取物体指令，则处理单元将虚拟物体71拾取到手型操作焦点74对应的虚拟手中。
综上所述，本实施例提供的虚拟物体选取方法，在三维操作范围内的虚拟物体为两个或两个以上时，通过距离远近来自动选择出一个虚拟物体作为被选取的虚拟物体，既能满足用户自身的选择意愿，又能实现对虚拟物体的自动选取，减少用户在面对多个虚拟物体时的选取操作次数，高效智能地帮助用户选取合适的虚拟物体。
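步骤704至步骤705中的距离计算与最小距离选取可示意为如下 Python 草图（物体名与受控点坐标均为示例数据；一个物体有多个受控点时取其中的最小距离）：

```python
import math

def distance(a, b):
    # 操作焦点 A 与受控点 B 之间的欧几里德距离
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_nearest(focus, objects):
    # objects: {物体名: 受控点坐标列表}, 取受控点距离最小的虚拟物体
    return min(objects,
               key=lambda name: min(distance(focus, p) for p in objects[name]))

objs = {
    "虚拟物体71": [(1, 0, 0)],
    "虚拟物体72": [(2, 0, 0)],
    "虚拟物体73": [(0, 3, 0), (0, 0, 5)],
}
print(pick_nearest((0, 0, 0), objs))  # 虚拟物体71
```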
上述的图20A实施例、图21A实施例和图22A实施例能够两两结合实施,或者三者结合实施。示意性的,在三维操作范围内的虚拟物体为多个时,将具有匹配的物体类型且优先级最高的虚拟物体确定为被选取的虚拟物体;或者,将具有匹配的物体类型且距离最近的虚拟物体确定为被选取的虚拟物体;或者,将优先级最高且距离最近的虚拟物体确定为被选取的虚拟物体,或者,具有匹配的物体类型且优先级最高且距离最近的虚拟物体确定为被选取的虚拟物体。
下面采用图23A对上述图20A实施例、图21A实施例和图22A实施例进行组合时的实施例进行示意。当虚拟物体的属性信息包括:虚拟物体的物体类型、虚拟物体的优先级、虚拟物体的受控点与操作焦点之间的距离这三种信息中的至少两种时,步骤1703可被替代实现成为步骤1703c至步骤1703e,如图23A所示:
步骤1703c,按照每个虚拟物体的第i种属性信息,确定第i次选取出的虚拟物体;
每种属性信息是上述三种属性信息中的一种。i的初始值为1且i为整数。示意性的,第1种属性信息是虚拟物体的物体类型;第2种属性信息是虚拟物体的优先级;第3种属性信息是虚拟物体的受控点与操作焦点之间的距离,但本实施例并不限定每种属性信息的具体形式。
当第i种属性信息是虚拟物体的物体类型时,确定第i次选取出的虚拟物体的过程,可以参考如图20A实施例所提供的技术方案;当第i种属性信息是虚拟物体的优先级时,可以参考如图21A实施例所提供的技术方案;当第i种属性信息是虚拟物体的受控点与操作焦点之间的距离时,可以参考如图22A实施例所提供的技术方案。
根据第i种属性信息进行第i次选取时,有可能选择出一个虚拟物体,此时进入步骤1703d;也有可能选择出多个虚拟物体,这多个虚拟物体具有相同的第i种属性信息,此时进入步骤1703e。
步骤1703d,当第i次选取出的虚拟物体为一个时,将第i次选取出的虚拟物体确定为被选取的虚拟物体;
步骤1703e,当第i次选取出的虚拟物体为两个或两个以上时,按照每个虚拟物体的第i+1种属性信息,确定第i+1次选取出的虚拟物体;
如果第i次选取出的虚拟物体为两个或两个以上,则对第i次选取出的虚拟物体,按照每个虚拟物体的第i+1种属性信息,确定出第i+1次选取出的虚拟物体,循环执行上述步骤,直至选择出最终的一个被选取的虚拟物体。
例如,如果受控点位于三维操作范围内的虚拟物体为多个,处理单元先按照第1种属性信息“虚拟物体的物体类型”,确定出第1次选取出的虚拟物体;如果第1次选取出的虚拟物体为1个,则确定为最终的被选取的虚拟物体;如果第1次选取出的虚拟物体为2个或2个以上,则处理单元再按照第2种属性信息“虚拟物体的优先级”,确定出第2次选取出的虚拟物体;如果第2次选取出的虚拟物体为1个,则确定为最终的被选取的虚拟物体;如果第2次选取出的虚拟物体为2个或2个以上,则处理单元再按照第3种属性信息“虚拟物体的受控点与操作焦点之间的距离”,确定出第3次选取出的虚拟物体;如果第3次选取出的虚拟物体为1个,则确定为最终的被选取的虚拟物体;如果第3次选取出的虚拟物体为2个或2个以上,则处理单元认为选择失败,不做响应或者弹出错误提示信息。
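图23A所示的逐轮筛选过程可用如下 Python 草图示意（属性字段名为示例假设；与上文示例一致，优先级数值越小越高、距离数值越小越近，故每轮统一保留属性值最小的候选物体）：

```python
def cascade_select(candidates, attribute_keys):
    """按第 i 种属性信息逐轮筛选候选虚拟物体:
    每轮保留属性值最优(最小)的物体, 只剩一个时即为被选取的虚拟物体;
    属性用尽后仍剩多个时视为选择失败, 返回 None。"""
    for key in attribute_keys:
        best = min(c[key] for c in candidates)
        candidates = [c for c in candidates if c[key] == best]
        if len(candidates) == 1:
            return candidates[0]
    return None  # 不做响应或弹出错误提示信息

objs = [
    {"name": "a", "priority": 1, "distance": 2.0},
    {"name": "b", "priority": 1, "distance": 1.0},
    {"name": "c", "priority": 2, "distance": 0.5},
]
print(cascade_select(objs, ["priority", "distance"])["name"])  # b
```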
为了对上述图23A实施例的过程进行更为详细的阐述,下面采用图23B对上述三个实施例结合实施的实施例进行阐述。
图23B示出了本发明另一实施例提供的虚拟物体选取方法的流程图。本实施例以该虚拟物体选取方法应用于图16所示的VR系统中来举例说明。该方法包括：
步骤801,在三维虚拟环境中确定操作焦点的位置,操作焦点是输入设备在三维虚拟环境中所对应的点,三维虚拟环境中包括虚拟物体,虚拟物体包括有用于接受操作的受控点;
步骤802,以操作焦点为基准位置,确定操作焦点的三维操作范围;
可选地,三维操作范围是以操作焦点为基准位置的圆球状范围、椭球状范围、圆锥状范围、立方体范围、圆柱体范围中的至少一种。可选地,如果操作焦点是具有方向性的操作焦点,比如手型操作焦点、枪型操作焦点等,处理单元以操作焦点为基准点,操作焦点的方向线为中心线,确定操作焦点的三维操作范围。
步骤803,在接收到操作指令时,确定受控点位于三维操作范围内的虚拟物体;
可选地,处理单元对三维操作范围和虚拟物体的受控点进行求交集计算,当存在交集时,确定虚拟物体的受控点位于三维操作范围内。
当三维操作范围内不存在虚拟物体的受控点时,进入步骤804;
当三维操作范围内存在一个虚拟物体的受控点时,进入步骤805;
当三维操作范围内存在两个或两个以上的虚拟物体的受控点时,进入步骤806。
步骤804,不响应操作指令;
步骤805,将该虚拟物体确定为被选取的虚拟物体;
步骤806,获取操作指令对应的操作类型;
步骤807,确定每个虚拟物体对应的物体类型;
步骤808，从每个虚拟物体对应的物体类型中，确定出与操作类型匹配的目标物体类型，目标物体类型是具有响应操作指令的能力的类型；
步骤809,检测具有目标物体类型的虚拟物体是否超过一个;
若具有目标物体类型的虚拟物体仅有一个,则进入步骤805;若具有目标物体类型的虚拟物体为两个或两个以上,则进入步骤810。
步骤810,确定具有目标物体类型的虚拟物体的优先级;
步骤811,检测具有最高优先级的虚拟物体是否超过一个;
若具有最高优先级的虚拟物体仅有一个，则进入步骤805；若具有最高优先级的虚拟物体为两个或两个以上，则进入步骤812。
步骤812,确定具有最高优先级的虚拟物体的受控点与操作焦点的距离;
步骤813,将具有最小距离的虚拟物体确定为被选取的虚拟物体。
步骤814,控制被选取的虚拟物体对操作指令进行响应。
示意性的,当操作指令是选取物体指令时,控制虚拟物体处于被选取状态;当操作指令是拾取物体指令时,控制虚拟物体处于被虚拟手(或其它元素)拾取的状态;当操作指令是打开物体指令时,控制虚拟物体处于被打开状态;当操作指令是使用物体指令时,控制虚拟物体处于被使用状态;当操作指令是拍打物体指令时,控制虚拟物体处于被虚拟手(或其它元素)拍打的状态;当操作指令是攻击指令时,控制虚拟物体处于被攻击状态。
综上所述，本实施例提供的虚拟物体选取方法，在三维操作范围内的虚拟物体为两个或两个以上时，通过与操作类型匹配的物体类型、优先级和距离三种因素自动选择出一个虚拟物体作为被选取的虚拟物体，既能满足用户自身的选择意愿，又能实现对虚拟物体的自动选取，减少用户在面对多个虚拟物体时的选取操作次数，高效智能地帮助用户选取合适的虚拟物体。
实施例6
下述为本发明装置实施例,可以用于执行本发明方法实施例。对于本发明装置实施例中未披露的细节,请参照本发明方法实施例。
请参考图24,其示出了本发明一个实施例提供的虚拟物体选取装置的结构方框图。本实施例以该虚拟物体选取装置应用于图16所示的VR系统中来举例说明。该虚拟物体选取装置,包括:
第一确定模块901,用于在三维虚拟环境中确定操作焦点的位置。
可选地,操作焦点是输入设备在三维虚拟环境中所对应的点,三维虚拟环境中包括虚拟物体,虚拟物体包括有用于接受操作的受控点。
第二确定模块902,用于以操作焦点为基准位置,确定操作焦点的三维操作范围。
第三确定模块903,用于在接收到操作指令时,将受控点位于三维操作范围内的虚拟物体确定为被选取的虚拟物体。
综上所述，本实施例提供的虚拟物体选取装置，通过以操作焦点为基准位置确定操作焦点的三维操作范围，仅需要为一个操作焦点确定三维操作范围即可，不需要为每个虚拟物体的受控点设置响应范围，解决了基于每个虚拟物体的受控点设置响应范围、当三维虚拟环境中的虚拟物体较多时需要耗费处理单元大量计算资源的问题；达到了不论虚拟物体有多少个，只需要为一个操作焦点确定一个三维操作范围，从而节约处理单元大量计算资源的效果。
请参考图25，其示出了本发明另一个实施例提供的虚拟物体选取装置的结构方框图。本实施例以该虚拟物体选取装置应用于图16所示的VR系统中来举例说明。该虚拟物体选取装置，包括：
第一确定模块1010,用于在三维虚拟环境中确定操作焦点的位置。
可选地,操作焦点是输入设备在三维虚拟环境中所对应的点,三维虚拟环境中包括虚拟物体,虚拟物体包括有用于接受操作的受控点。
第二确定模块1020,用于以操作焦点为基准位置,确定操作焦点的三维操作范围。
第三确定模块1030,用于在接收到操作指令时,将受控点位于三维操作范围内的虚拟物体确定为被选取的虚拟物体。
可选地,第三确定模块1030包括第一确定单元1031和第二确定单元1032。
第一确定单元1031,用于在接收到操作指令时,确定受控点位于三维操作范围内的虚拟物体。
第二确定单元1032,用于在虚拟物体为至少两个时,按照虚拟物体的属性信息确定出被选取的虚拟物体。
可选地,属性信息包括:虚拟物体的物体类型、虚拟物体的优先级和虚拟物体的受控点与操作焦点之间的距离中的至少一种。
综上所述,本实施例提供的虚拟物体选取装置,通过在虚拟物体为至少两个时,由处理单元自动选择出一个被选取的虚拟物体,减少用户的操作步骤和时间成本,并且能够兼顾用户的自主选择意愿和自动选择虚拟物体的便捷性。
请参考图26，其示出了本发明一个实施例提供的虚拟物体选取装置的结构方框图。本实施例以该虚拟物体选取装置应用于图16所示的VR系统中来举例说明。该虚拟物体选取装置，包括：
第一确定模块1110,用于在三维虚拟环境中确定操作焦点的位置。
可选地,操作焦点是输入设备在三维虚拟环境中所对应的点,三维虚拟环境中包括虚拟物体,虚拟物体包括有用于接受操作的受控点。
第二确定模块1120,用于以操作焦点为基准位置,确定操作焦点的三维操作范围。
第三确定模块1130,用于在接收到操作指令时,将受控点位于三维操作范围内的虚拟物体确定为被选取的虚拟物体。
可选地,第三确定模块1130包括第一确定单元1131和第二确定单元1132。
第一确定单元1131,用于在接收到操作指令时,确定受控点位于三维操作范围内的虚拟物体。
第二确定单元1132,用于在虚拟物体为至少两个时,按照虚拟物体的属性信息确定出被选取的虚拟物体。
可选地,属性信息包括:虚拟物体的物体类型、虚拟物体的优先级和虚拟物体的受控点与操作焦点之间的距离中的至少一种。
可选地,当属性信息包括虚拟物体的物体类型时,第二确定单元1132,包括:指令获取子单元、第一确定子单元、第二确定子单元;可选地,当属性信息包括虚拟物体的优先级时,第二确定单元1132还包括:第三确定子单元、第四确定子单元、第五确定子单元;可选地,当属性信息包括虚拟物体的受控点与操作焦点之间的距离时,第二确定单元1132还包括:第六确定子单元和第七确定子单元;可选地,属性信息包括至少两种时,第二确定单元1132还包括:第八确定子单元和第九确定子单元。
指令获取子单元,用于获取操作指令对应的操作类型。
第一确定子单元,用于确定每个虚拟物体对应的物体类型。
第二确定子单元，用于从每个所述虚拟物体对应的物体类型中，确定出与所述操作类型匹配的目标物体类型。
可选地,目标物体类型是具有响应操作指令的能力的类型。
第三确定子单元,用于将具有目标物体类型的虚拟物体,确定为被选取的虚拟物体。
第四确定子单元,用于确定每个虚拟物体的优先级。
第五确定子单元,用于将具有最高优先级的虚拟物体,确定为被选取的虚拟物体。
第六确定子单元,用于确定每个虚拟物体的受控点与操作焦点之间的距离。
第七确定子单元，用于将具有最小距离的虚拟物体，确定为被选取的虚拟物体。
第八确定子单元,用于按照每个所述虚拟物体的第i种属性信息,确定第i次选取出的虚拟物体;
第九确定子单元,用于当所述第i次选取出的所述虚拟物体为一个时,将所述第i次选取出的所述虚拟物体确定为所述被选取的虚拟物体;
所述第八确定子单元,还用于当所述第i次选取出的所述虚拟物体为两个或两个以上时,令i=i+1,重新执行所述按照每个所述虚拟物体的第i种属性信息,确定第i次选取出的虚拟物体;
其中,i的初始值为1且i为整数。
指令响应模块1140,用于控制被选取的虚拟物体对操作指令进行响应。
综上所述，本实施例提供的虚拟物体选取装置，在三维操作范围内的虚拟物体为两个或两个以上时，通过与操作类型匹配的物体类型、优先级和距离三种因素自动选择出一个虚拟物体作为被选取的虚拟物体，既能满足用户自身的选择意愿，又能实现对虚拟物体的自动选取，减少用户在面对多个虚拟物体时的选取操作次数，高效智能地帮助用户选取合适的虚拟物体。
实施例7
请参考图27,其示出了本发明一个实施例提供的VR系统的结构示意图。该VR系统包括:头戴式显示器120、处理单元140和输入设备160。
头戴式显示器120是用于佩戴在用户头部进行图像显示的显示器。
头戴式显示器120通过柔性电路板或硬件接口与处理单元140电性相连。
处理单元140通常集成在头戴式显示器120的内部。处理单元140包括处理器142和存储器144。存储器144是用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质，比如RAM、ROM、EPROM、EEPROM、闪存或其他固态存储技术，CD-ROM、DVD或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。存储器144存储有一个或一个以上的程序指令，该程序指令包括用于实现上述各个方法实施例所提供的虚拟物体选取方法的指令。处理器142用于执行存储器144中的指令，来实现上述各个方法实施例所提供的虚拟物体选取方法。
处理单元140通过线缆、蓝牙连接或Wi-Fi(Wireless-Fidelity,无线保真)连接与输入设备160相连。
输入设备160是体感手套、体感手柄、遥控器、跑步机、鼠标、键盘、人眼聚焦设备等输入外设。
本发明实施例还提供了一种计算机可读存储介质,该计算机可读存储介质可以是上述实施例中的存储器中所包含的计算机可读存储介质;也可以是单独存在,未装配入终端中的计算机可读存储介质。该计算机可读存储介质存储有一个或者一个以上程序,该一个或者一个以上程序被一个或者一个以上的处理器用来执行虚拟物体选取方法。
在另一个实施例中,如前文所述,通过在虚拟现实场景中接收到认证请求时,将现实场景中的指纹采集设备采集到的待认证指纹信息发送至现实场景中的认证设备进行认证,达到了无需在虚拟现实场景中建立支付认证系统也能够实现支付认证的目的,从而实现了提高在虚拟现实场景中支付的效率的技术效果。即,基于虚拟现实场景的认证方案,在虚拟现实场景中进行支付时,采取了基于用户的指纹信息对用户进行认证。除此之外,本发明实施例还提出了一种通过虚拟点阵为用户生成标识,并通过生成的标识实现对用户的身份验证的方案,其中生成的标识可用于认证方案中对用户的认证过程,比如可以代替指纹认证,或者在指纹认证后根据生成的标识再次完成对用户的信息的合法性校验。下面通过实施例8至实施例12对基于虚拟现实的标识生成方案以及身份验证方案进行详细说明。
实施例8
一种基于虚拟现实的标识生成方法,如图28所示,所述方法包括:
S101.获取用户所在位置的三维坐标以及用户视野朝向的方向。
获取用户的三维坐标(X,Y,Z)与代表视野朝向的方向向量(α,β,γ)。
S102.根据所述三维坐标以及所述方向生成虚拟点阵,所述虚拟点阵中的每个虚拟点均具有唯一的坐标及编号。
基于用户的三维坐标及视野朝向，在距离用户正前方一段距离的平面上生成虚拟点阵。本发明实施例中所述虚拟点阵为单一平面点阵，所述平面点阵的法向量为(α,β,γ)。在所述虚拟点阵中，按照预设规则生成一定数量的虚拟点，并记录每个虚拟点的坐标及其编号。其中，虚拟点的生成规则如下：
a)对虚拟点的数量在技术上不做限制，但所述虚拟点的数量不宜过多或过少。生成的点过多，用户操作繁琐；生成的点过少，安全性难以保证。
b)各个相邻虚拟点的距离可以相同,或者不相同。各个虚拟点的距离相同,即指相邻两个点(X1,Y1,Z1)、(X2,Y2,Z2)之间的欧几里德距离相同。欧几里德距离公式如下:
d = √((X2 - X1)² + (Y2 - Y1)² + (Z2 - Z1)²)
c)编号的规则只要按照预设的次序能够遍历全部虚拟点即可。其中,次序可以是从左到右或从上到下,编号可以为数字或字母。
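上述生成规则可用如下简化的 Python 草图示意（示例假设：视野方向沿 +Z 轴，点阵位于用户正前方距离 d 的平面上，按从左到右、从上到下的次序以数字编号；实际实现需按法向量(α,β,γ)确定平面姿态）：

```python
def make_lattice(user_pos, d, rows, cols, spacing):
    """在用户正前方距离 d 的平面上生成 rows*cols 的虚拟点阵,
    返回 {编号: 三维坐标}, 相邻虚拟点之间的欧几里德距离相同。"""
    x0, y0, z0 = user_pos
    points, num = {}, 1  # 编号从 1 开始, 从左到右、从上到下遍历
    for r in range(rows):
        for c in range(cols):
            points[num] = (x0 + c * spacing, y0 - r * spacing, z0 + d)
            num += 1
    return points

pts = make_lattice((0, 0, 0), 2.0, 2, 2, 1.0)
print(pts[1], pts[4])  # (0.0, 0.0, 2.0) (1.0, -1.0, 2.0)
```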
S103.显示所述虚拟点阵。
在虚拟现实环境下向用户清晰地显示出虚拟点的位置。显示的方法可以是加亮、用高对比度的颜色,或者其他方法。此外,还可为每一个虚拟点都设置一个响应区,本发明实施例中响应区为预设半径为R的三维球形响应区域。若用户控制的输入端的坐标与所述虚拟点的坐标之间的欧几里德距离小于R,则所述虚拟点被选中。当然,所述虚拟点的响应区还可以为其他任意形状。
S104.获取用户的选择结果。
记录用户控制的输入端的实时位置,监测所述输入端是否进入到任意虚拟点的响应区,若是,则记录所述虚拟点的编号;
重复上述动作,记录用户选择的虚拟点的编号及顺序直至用户的选择结束。所述选择结果包括被选择的虚拟点的编号以及被选择的虚拟点的顺序。作为另一种实施方式,所述选择结果可仅仅包括虚拟点的编号,根据所述编号即可生成唯一对应的标识。
所述结束的具体方式,可以是一段时间内用户未选定任意虚拟点,或者是用户使用某个特定的按钮来结束。
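步骤S104中对输入端实时位置的监测与编号记录可示意为如下 Python 草图（轨迹采样点、响应半径均为示例假设；使用标准库 math.dist 计算欧几里德距离）：

```python
import math

def record_selection(trajectory, points, radius):
    """按输入端实时位置轨迹, 依次记录进入球形响应区的虚拟点编号;
    输入端连续停留在同一响应区内时只记录一次。"""
    chosen = []
    for pos in trajectory:
        for num, p in points.items():
            if math.dist(pos, p) < radius and (not chosen or chosen[-1] != num):
                chosen.append(num)
    return chosen

points = {1: (0, 0, 0), 2: (1, 0, 0)}
path = [(0, 0, 0), (0.5, 0, 0), (1, 0, 0)]
print(record_selection(path, points, 0.3))  # [1, 2]
```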
S105.根据所述选择结果生成标识。
所述根据所述选择结果生成标识包括:根据被选择的虚拟点的编号以及被选择的虚拟点的顺序生成数字串或字符串。
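步骤S105根据被选择的虚拟点的编号及顺序生成数字串的过程可示意为（Python 草图，拼接方式为示例假设）：

```python
def make_identifier(selection):
    # selection: 按选择顺序排列的虚拟点编号列表, 拼接为数字串标识
    return "".join(str(num) for num in selection)

print(make_identifier([3, 1, 4, 2]))  # 3142
```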
本发明实施例中使用基于手柄的VR设备。参见图29,所述基于手柄的VR设备包括跟踪系统、头戴式显示器HMD(Head Mount Display)和交互式手柄:
(1)跟踪系统,识别出头戴式显示器HMD及交互式手柄在空间中的位置(三维坐标);
(2)头戴式显示器HMD,用于显示用户观看到的实时画面;
(3)交互式手柄,用于在空间中进行三维操作。本发明实施例用户控制的输入端即为交互式手柄。
本发明实施例提供了一种基于虚拟现实的标识生成方法,用户只需控制输入端移动即可对虚拟点进行选择,并自动根据用户选择结果生成标识,操作简单,避免了使用复杂的虚拟三维输入法,使得标识生成效率较高、提升了用户体验并且不会耗费大量的系统资源。本发明实施例中的标识生成方法可以用于生成用户名、用户ID号、密码等标识,具备广阔的应用前景。
实施例9
一种基于虚拟现实的标识生成方法,所述方法包括:
S211.获取用户所在位置的三维坐标以及用户视野朝向的方向。
获取用户的三维坐标(X,Y,Z)与代表视野朝向的方向向量(α,β,γ)。
S212.根据所述三维坐标以及所述方向生成虚拟点阵,所述虚拟点阵中的每个虚拟点均具有唯一的坐标及编号。
本发明实施例中虚拟点阵为空间点阵，所述虚拟点阵中的虚拟点部署于N(N>1)个平面上，用户能够在不转动所述N个平面的情况下，看到所有的虚拟点。所述N个平面可以均相互平行；或者，所述N个平面构成封闭多面体。
具体地，若为平行的N个平面，则基于用户的三维坐标及视野朝向，在距离用户正前方生成第一平面，然后按照预设的距离间隔，生成一个平行的第二平面。简言之，所有平面的法向量都相同，都为(α,β,γ)，当然在第一平面和第二平面的基础上也可以生成第三平面，本发明实施例对生成平面的数量不做限制。在所有平面上，按照预设规则生成一定数量的虚拟点，并记录每个虚拟点的坐标及其编号，所述虚拟点的生成规则与实施例8相同，在此不再赘述。
本发明实施例中,可以在平行的4个平面上部署虚拟点,所述虚拟点阵为4*4的空间点阵,相邻虚拟点之间的距离均相等。
S213.显示所述虚拟点阵。
在虚拟现实环境下向用户清晰地显示出虚拟点的位置，并为每一个虚拟点都设置一个响应区，本发明实施例中响应区为预设半径为R的三维球形响应区域。若用户控制的输入端的坐标与所述虚拟点的坐标之间的欧几里德距离小于R，则所述虚拟点被选中。当然，所述虚拟点的响应区还可以为其他任意形状。
S214.获取用户的选择结果。
记录用户控制的输入端的实时位置,监测所述输入端是否进入到任意虚拟点的响应区,若是,则记录所述虚拟点的编号;
重复上述动作,记录用户选择的虚拟点的编号及顺序直至用户的选择结束。所述结束的具体方式,可以是一段时间内用户未选定任意虚拟点,或者是用户使用某个特定的按钮来结束。
S215.根据所述选择结果生成标识。
所述根据所述选择结果生成标识包括根据被选择的虚拟点的编号以及被选择的虚拟点的顺序生成数字串或字符串。
本发明实施例中使用无手柄的VR设备,包括:
视觉/跟踪系统,用于获得手部在空间中的位置(三维坐标)。本发明实施例中用户控制的输入端为手。
头戴式显示器HMD,用于显示用户观看到的实时画面。
本发明实施例提供了另一种基于虚拟现实的标识生成方法,用户使用手作为输入端即可对虚拟点进行选择,并自动根据用户选择结果生成标识,操作更为简单。
实施例10
一种基于虚拟现实的标识生成装置,如图30所示,包括:
用户方位获取模块3001,用于获取用户所在位置的三维坐标以及用户视野朝向的方向;
虚拟点阵生成模块3002,用于根据所述三维坐标以及所述方向生成虚拟点阵,所述虚拟点阵中的每个虚拟点均具有唯一的坐标及编号;
虚拟点阵显示模块3003,用于显示所述虚拟点阵;
选择结果获取模块3004,用于获取用户对所述虚拟点阵中的虚拟点的选择结果;
标识生成模块3005,用于根据所述选择结果生成标识;
输入模块3006,用于向用户提供用于对所述虚拟点阵中的虚拟点进行选择的输入端。所述输入模块包括交互式手柄或无手柄虚拟现实设备。
具体地,所述选择结果获取模块3004如图31所示,包括:
实时位置记录子模块30041,用于记录用户控制的输入端的实时位置;
监测子模块30042,用于监测所述输入端是否进入到任意虚拟点的响应区,若用户控制的输入端所在的坐标落入所述虚拟点响应区,则所述虚拟点被选中。
本发明实施例基于同样的发明构思，提供了一种基于虚拟现实的标识生成装置，本发明实施例能够用于实现实施例8或9中提供的基于虚拟现实的标识生成方法。
实施例11
一种身份验证方法,如图32所示,所述方法包括:
S401.获取用户预设的身份标识。
S402.判断用户输入的待验证的身份标识与所述预设的身份标识是否一致。
S403.若一致,则验证通过。
S404.否则,验证不通过。
所述身份标识的生成方法,包括:
获取用户所在位置的三维坐标以及用户视野朝向的方向;
根据所述三维坐标以及所述方向生成虚拟点阵,所述虚拟点阵中的每个虚拟点均具有唯一的坐标及编号;
显示所述虚拟点阵;
获取用户对所述虚拟点阵中的虚拟点的选择结果;
根据所述选择结果生成标识。本发明实施例中所述标识生成方法可以用于生成用户名和/或密码。
具体地,用户通过控制输入端对所述虚拟点阵中的虚拟点进行选择,所述输入端为交互式手柄。作为另一种实施方式,用户通过无手柄的虚拟现实设备以手作为输入端对所述虚拟点阵中的虚拟点进行选择。
用户在进行支付授权、账户登录时，需要进行用户身份认证。通过本发明实施例提供的一种身份验证方法，用户只需控制输入端，在空间中按照预设路径移动，即可快速完成用户身份识别。本发明实施例简化了身份验证过程中的用户输入标识的环节，从而提升身份验证效率；由于在身份验证过程中，标识的复杂度与身份验证的安全性密切相关，本发明可以根据实际需要对虚拟点阵进行设计，从而兼顾用户体验的舒适度与身份验证的安全性。
实施例12
一种身份验证系统,如图33所示,所述系统包括验证服务器3301、应用服务器3302和基于虚拟现实的标识生成装置3303,所述验证服务器3301和所述标识生成装置3303均与所述应用服务器3302进行通讯;
所述验证服务器3301用于存储用户预设的身份标识;
所述应用服务器3302用于向所述验证服务器3301发起身份验证请求,并向所述验证服务器3301发送所述标识生成装置3303生成的标识并得到所述验证服务器3301的身份验证结果。
所述身份验证系统用于身份标识的设定和验证，在设定和验证过程中，标识生成装置生成虚拟点阵以及根据用户对虚拟点阵选择的虚拟点生成标识的规则均保持一致：
(1)设定的流程为:
首先,应用服务器3302向验证服务器3301发起标识设定请求;
然后,用户控制标识生成装置3303生成一种标识,并将所述标识发送至验证服务器。具体地,可以不限定用户输入标识的次数,比如用户需要前后输入两次,若两次输入的标识相同,才算是正确地输入标识。
最后,验证服务器3301接收所述标识后,通知所述应用服务器3302标识设定成功,应用服务器3302通知所述标识生成装置3303标识设定成功。
(2)校验的流程为:
首先,标识生成装置3303访问应用服务器3302;
然后,应用服务器3302向验证服务器3301发送用户身份验证请求;
再者,用户控制标识生成装置3303生成一种标识,并将所述标识发送至验证服务器3301。
最后,验证服务器3301将接收到的标识与之前设定的标识比较,如果匹配成功,则通过校验,同意应用服务器3302的验证请求。
所述标识生成装置3303包括:
用户方位获取模块33031,用于获取用户所在位置的三维坐标以及用户视野朝向的方向;
虚拟点阵生成模块33032,用于根据所述三维坐标以及所述方向生成虚拟点阵,所述虚拟点阵中的每个虚拟点均具有唯一的坐标及编号;
虚拟点阵显示模块33033,用于显示所述虚拟点阵;
选择结果获取模块33034,用于获取用户的选择结果,所述选择结果包括被选择的虚拟点的编号以及被选择的虚拟点的顺序;
标识生成模块33035,用于根据所述选择结果生成标识。
所述选择结果获取模块33034包括：
实时位置记录子模块330341,用于记录用户控制的输入端的实时位置;
监测子模块330342,用于监测所述输入端是否进入到任意虚拟点的响应区,若用户控制的输入端所在的坐标落入所述虚拟点响应区,则所述虚拟点被选中。
本发明实施例基于同样的发明构思，提供了一种基于虚拟现实的身份验证系统，本发明实施例能够用于实现实施例11中提供的身份验证方法。
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。
在本发明的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端、系统、服务器,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本发明的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本发明原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本发明的保护范围。

Claims (37)

  1. 一种基于虚拟现实场景的认证方法,其特征在于,包括:
    在虚拟现实场景中接收到认证请求;
    通过现实场景中的指纹采集设备采集待认证指纹信息;
    将所述待认证指纹信息发送至所述现实场景中的认证设备;
    在所述虚拟现实场景中接收所述认证设备发送的认证结果信息,其中,所述认证结果信息用于指示所述待认证指纹信息通过认证或未通过认证。
  2. 根据权利要求1所述的方法,其特征在于,在虚拟现实场景中接收到认证请求之后,且在通过现实场景中的指纹采集设备采集待认证指纹信息之前,所述方法还包括:
    判断指示标识是否指向所述虚拟现实场景中的认证区域,其中,所述指示标识是所述指纹采集设备在所述虚拟现实场景中产生的;
    在判断出所述指示标识指向所述认证区域时,在所述虚拟现实场景中显示提示信息,其中,所述提示信息用于提示输入所述待认证指纹信息。
  3. 根据权利要求2所述的方法,其特征在于,在所述虚拟现实场景中接收到所述认证设备发送的认证结果信息之后,所述方法还包括:
    在所述认证结果信息指示所述待认证指纹信息通过认证时,在所述虚拟现实场景中执行与所述认证区域对应的资源转移事件。
  4. 根据权利要求1所述的方法,其特征在于,将所述待认证指纹信息发送至所述现实场景中的认证设备,包括:
    在所述虚拟现实场景中将第一时间戳发送给所述认证设备,其中,所述第一时间戳为所述指纹采集设备采集到所述待认证指纹信息的时间点;
    通过所述指纹采集设备和通信终端设备将所述待认证指纹信息和第二时间戳发送给所述认证设备，其中，所述第二时间戳为所述指纹采集设备采集到所述待认证指纹信息的时间点，所述指纹采集设备通过与所述通信终端设备之间建立的连接与所述通信终端设备进行数据传输；其中，所述第一时间戳和所述第二时间戳用于所述认证设备对所述待认证指纹信息进行认证。
  5. 根据权利要求4所述的方法,其特征在于,在所述虚拟现实场景中接收到所述认证设备发送的认证结果信息,包括:
    在所述认证设备判断出所述第一时间戳与所述第二时间戳匹配、且指纹数据库中存在与所述待认证指纹信息匹配的指纹信息的情况下,在所述虚拟现实场景中接收到所述认证设备发送的第一认证结果信息,其中,所述第一认证结果信息用于指示所述待认证指纹信息通过认证;
    在所述认证设备判断出所述第一时间戳与所述第二时间戳不匹配、和/或所述指纹数据库中不存在与所述待认证指纹信息匹配的指纹信息的情况下,在所述虚拟现实场景中接收到所述认证设备发送的第二认证结果信息,其中,所述第二认证结果信息用于指示所述待认证指纹信息未通过认证。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,在所述虚拟现实场景中接收到所述认证设备发送的认证结果信息之后,所述方法还包括:
    在所述虚拟现实场景中显示所述认证结果信息。
  7. 一种虚拟现实设备,其特征在于,包括:一个或多个处理器、存储器,所述存储器用于存储软件程序以及模块,且所述处理器通过运行存储在所述存储器内的软件程序以及模块,完成以下操作:
    在虚拟现实场景中接收到认证请求;
    通过现实场景中的指纹采集设备采集待认证指纹信息;
    将所述待认证指纹信息发送至所述现实场景中的认证设备;
    在所述虚拟现实场景中接收所述认证设备发送的认证结果信息,其中,所述认证结果信息用于指示所述待认证指纹信息通过认证或未通过认证。
  8. 根据权利要求7所述的设备,其特征在于,所述处理器通过运行存储在所述存储器内的软件程序以及模块,完成以下操作:
    在虚拟现实场景中接收到认证请求之后,且在通过现实场景中的指纹采集设备采集待认证指纹信息之前,判断指示标识是否指向所述虚拟现实场景中的认证区域,其中,所述指示标识是所述指纹采集设备在所述虚拟现实场景中产生的;
    在判断出所述指示标识指向所述认证区域时,在所述虚拟现实场景中显示提示信息,其中,所述提示信息用于提示输入所述待认证指纹信息。
  9. 根据权利要求8所述的设备,其特征在于,所述处理器通过运行存储在所述存储器内的软件程序以及模块,完成以下操作:
    在所述虚拟现实场景中接收到所述认证设备发送的认证结果信息之后,在所述认证结果信息指示所述待认证指纹信息通过认证时,在所述虚拟现实场景中执行与所述认证区域对应的资源转移事件。
  10. 根据权利要求7所述的设备,其特征在于,所述处理器通过运行存储在所述存储器内的软件程序以及模块,完成以下操作:
    在所述虚拟现实场景中将第一时间戳发送给所述认证设备,其中,所述第一时间戳为所述指纹采集设备采集到所述待认证指纹信息的时间点;
    通过所述指纹采集设备和通信终端设备将所述待认证指纹信息和第二时间戳发送给所述认证设备,其中,所述第二时间戳为所述指纹采集设备采集到所述待认证指纹信息的时间点,所述指纹采集设备通过与所述通信终端设备之间建立的连接与所述通信终端设备进行数据传输;其中,所述第一时间戳和所述第二时间戳用于所述认证设备对所述待认证指纹信息进行认证。
  11. 根据权利要求10所述的设备,其特征在于,所述处理器通过运行存储在所述存储器内的软件程序以及模块,完成以下操作:
    在所述认证设备判断出所述第一时间戳与所述第二时间戳匹配、且指纹数据库中存在与所述待认证指纹信息匹配的指纹信息的情况下，在所述虚拟现实场景中接收到所述认证设备发送的第一认证结果信息，其中，所述第一认证结果信息用于指示所述待认证指纹信息通过认证；
    在所述认证设备判断出所述第一时间戳与所述第二时间戳不匹配、和/或所述指纹数据库中不存在与所述待认证指纹信息匹配的指纹信息的情况下,在所述虚拟现实场景中接收到所述认证设备发送的第二认证结果信息,其中,所述第二认证结果信息用于指示所述待认证指纹信息未通过认证。
  12. 根据权利要求7至11中任一项所述的设备,其特征在于,所述处理器通过运行存储在所述存储器内的软件程序以及模块,完成以下操作:
    在所述虚拟现实场景中接收到所述认证设备发送的认证结果信息之后,
    在所述虚拟现实场景中显示所述认证结果信息。
  13. 一种存储介质,其特征在于,所述存储介质中存储有至少一段程序代码,所述至少一段程序代码由处理器加载并执行以实现如权利要求1至6中任一权利要求所述的基于虚拟现实场景的认证方法。
  14. 一种虚拟物体选取方法,其特征在于,所述方法包括:
    在三维虚拟环境中确定操作焦点的位置,所述操作焦点是输入设备在所述三维虚拟环境中所对应的点,所述三维虚拟环境中包括虚拟物体,所述虚拟物体包括有用于接受操作的受控点;
    以所述操作焦点为基准位置,确定所述操作焦点的三维操作范围;
    在接收到操作指令时,将所述受控点位于所述三维操作范围内的所述虚拟物体确定为被选取的虚拟物体。
  15. 根据权利要求14所述的方法,其特征在于,所述在接收到操作指令时,将所述受控点位于所述三维操作范围内的所述虚拟物体确定为被选取的虚拟物体,包括:
    在接收到所述操作指令时，确定所述受控点位于所述三维操作范围内的所述虚拟物体；
    在所述虚拟物体为至少两个时,按照所述虚拟物体的属性信息确定出所述被选取的虚拟物体;
    其中,所述属性信息包括:所述虚拟物体的物体类型、所述虚拟物体的优先级和所述虚拟物体的所述受控点与所述操作焦点之间的距离中的至少一种。
  16. 根据权利要求15所述的方法,其特征在于,所述属性信息包括:所述虚拟物体的物体类型;
    所述在所述虚拟物体为至少两个时,按照所述虚拟物体的属性信息确定出所述被选取的虚拟物体,包括:
    获取所述操作指令对应的操作类型;
    确定每个所述虚拟物体对应的物体类型;
    从每个所述虚拟物体对应的物体类型中，确定出与所述操作类型匹配的目标物体类型，所述目标物体类型是具有响应所述操作指令的能力的类型；
    将具有所述目标物体类型的所述虚拟物体,确定为所述被选取的虚拟物体。
  17. 根据权利要求15所述的方法,其特征在于,所述属性信息包括:所述虚拟物体的优先级;
    所述在所述虚拟物体为至少两个时,按照所述虚拟物体的属性信息确定出所述被选取的虚拟物体,包括:
    确定每个所述虚拟物体的优先级;
    将具有最高优先级的所述虚拟物体,确定为所述被选取的虚拟物体。
  18. 根据权利要求15所述的方法,其特征在于,所述属性信息包括:所述虚拟物体的所述受控点与所述操作焦点之间的距离;
    所述在所述虚拟物体为至少两个时,按照所述虚拟物体的属性信息确定出所述被选取的虚拟物体,包括:
    确定每个所述虚拟物体的所述受控点与所述操作焦点之间的距离;
    将具有最小距离的所述虚拟物体,确定为所述被选取的虚拟物体。
  19. 根据权利要求15所述的方法,其特征在于,所述属性信息包括:所述虚拟物体的物体类型、所述虚拟物体的优先级和所述虚拟物体的所述受控点与所述操作焦点之间的距离中的至少两种;
    所述在所述虚拟物体为至少两个时,按照所述虚拟物体的属性信息确定出所述被选取的虚拟物体,包括:
    按照每个所述虚拟物体的第i种属性信息,确定第i次选取出的虚拟物体;
    当所述第i次选取出的所述虚拟物体为一个时,将所述第i次选取出的所述虚拟物体确定为所述被选取的虚拟物体;
    当所述第i次选取出的所述虚拟物体为两个或两个以上时,按照每个所述虚拟物体的第i+1种属性信息,确定第i+1次选取出的虚拟物体;
    其中,i的初始值为1且i为整数。
  20. 一种虚拟现实VR系统,其特征在于,所述VR系统包括:头戴式显示器、处理单元和输入设备;所述头戴式显示器与所述处理单元相连,所述处理单元与所述输入设备相连;所述处理单元用于完成以下操作:
    在三维虚拟环境中确定操作焦点的位置,所述操作焦点是输入设备在所述三维虚拟环境中所对应的点,所述三维虚拟环境中包括虚拟物体,所述虚拟物体包括有用于接受操作的受控点;
    以所述操作焦点为基准位置,确定所述操作焦点的三维操作范围;
    在接收到操作指令时,将所述受控点位于所述三维操作范围内的所述虚拟物体确定为被选取的虚拟物体。
  21. 根据权利要求20所述的VR系统,其特征在于,所述处理单元用于完成以下操作:
    在接收到所述操作指令时,确定所述受控点位于所述三维操作范围内的所述虚拟物体;
    在所述虚拟物体为至少两个时,按照所述虚拟物体的属性信息确定出所述被选取的虚拟物体;
    其中,所述属性信息包括:所述虚拟物体的物体类型、所述虚拟物体的优先级和所述虚拟物体的所述受控点与所述操作焦点之间的距离中的至少一种。
  22. 根据权利要求21所述的VR系统,其特征在于,所述属性信息包括:所述虚拟物体的物体类型;所述处理单元用于完成以下操作:
    获取所述操作指令对应的操作类型;
    确定每个所述虚拟物体对应的物体类型;
    从每个所述虚拟物体对应的物体类型中，确定出与所述操作类型匹配的目标物体类型，所述目标物体类型是具有响应所述操作指令的能力的类型；
    将具有所述目标物体类型的所述虚拟物体,确定为所述被选取的虚拟物体。
  23. 根据权利要求21所述的VR系统,其特征在于,所述属性信息包括:所述虚拟物体的优先级;所述处理单元用于完成以下操作:
    确定每个所述虚拟物体的优先级;
    将具有最高优先级的所述虚拟物体,确定为所述被选取的虚拟物体。
  24. 根据权利要求21所述的VR系统,其特征在于,所述属性信息包括:所述虚拟物体的所述受控点与所述操作焦点之间的距离;所述处理单元用于完成以下操作:
    确定每个所述虚拟物体的所述受控点与所述操作焦点之间的距离;
    将具有最小距离的所述虚拟物体,确定为所述被选取的虚拟物体。
  25. 根据权利要求21所述的VR系统,其特征在于,所述属性信息包括:所述虚拟物体的物体类型、所述虚拟物体的优先级和所述虚拟物体的所述受控点与所述操作焦点之间的距离中的至少两种;
    所述处理单元用于完成以下操作:
    按照每个所述虚拟物体的第i种属性信息,确定第i次选取出的虚拟物体;
    当所述第i次选取出的所述虚拟物体为一个时,将所述第i次选取出的所述虚拟物体确定为所述被选取的虚拟物体;
    当所述第i次选取出的所述虚拟物体为两个或两个以上时,按照每个所述虚拟物体的第i+1种属性信息,确定第i+1次选取出的虚拟物体;
    其中,i的初始值为1且i为整数。
  26. 一种存储介质,其特征在于,所述存储介质中存储有一个或者一个以上程序,所述一个或者一个以上程序由处理单元加载并执行以实现如权利要求14至19中任一权利要求所述的虚拟物体选取方法。
  27. 一种基于虚拟现实的标识生成方法,其特征在于,所述方法包括:
    获取用户所在位置的三维坐标以及用户视野朝向的方向;
    根据所述三维坐标以及所述方向生成虚拟点阵,所述虚拟点阵中的每个虚拟点均具有唯一的坐标及编号;
    显示所述虚拟点阵;
    获取用户对所述虚拟点阵中的虚拟点的选择结果;
    根据所述选择结果生成标识。
  28. 根据权利要求27所述的方法,其特征在于,以每个虚拟点为中心,设置虚拟点响应区,若用户控制的输入端所在的坐标落入所述虚拟点响应区,则所述虚拟点被选中。
  29. 根据权利要求28所述的方法,其特征在于,所述获取用户对所述虚拟点阵中的虚拟点的选择结果,包括:
    记录用户控制的输入端的实时位置,监测所述输入端是否进入到任意虚拟点的响应区,若是,则记录所述虚拟点的编号;
    重复上述动作,记录用户选择的虚拟点的编号及顺序。
  30. 根据权利要求27所述的方法,其特征在于,所述根据所述选择结果生成标识,包括:
    根据被选择的虚拟点的编号以及被选择的虚拟点的顺序生成数字串。
  31. 一种身份验证方法,其特征在于,所述方法包括:
    获取用户预设的身份标识;
    判断用户输入的待验证的身份标识与所述预设的身份标识是否一致:
    若一致,则验证通过;
    所述身份标识的生成方法包括:
    获取用户所在位置的三维坐标以及用户视野朝向的方向;
    根据所述三维坐标以及所述方向生成虚拟点阵,所述虚拟点阵中的每个虚拟点均具有唯一的坐标及编号;
    显示所述虚拟点阵;
    获取用户对所述虚拟点阵中的虚拟点的选择结果;
    根据所述选择结果生成标识。
  32. 根据权利要求31所述的方法,其特征在于,用户通过控制的输入端对所述虚拟点阵中的虚拟点进行选择,所述输入端包括交互式手柄。
  33. 根据权利要求31所述的方法,其特征在于,用户使用无手柄的虚拟现实设备,以手作为输入端对所述虚拟点阵中的虚拟点进行选择。
  34. 一种基于虚拟现实的标识生成装置,其特征在于,包括:一个或多个处理单元、存储器,所述存储器用于存储软件程序以及模块,且所述处理单元通过运行存储在所述存储器内的软件程序以及模块,完成以下操作:
    获取用户所在位置的三维坐标以及用户视野朝向的方向;
    根据所述三维坐标以及所述方向生成虚拟点阵,所述虚拟点阵中的每个虚拟点均具有唯一的坐标及编号;
    显示所述虚拟点阵;
    获取用户对所述虚拟点阵中的虚拟点的选择结果;
    根据所述选择结果生成标识。
  35. 一种身份验证系统,其特征在于,所述系统包括验证服务器、应用服务器和基于虚拟现实的标识生成装置,所述验证服务器和所述标识生成装置均与所述应用服务器进行通讯;
    所述验证服务器用于存储用户预设的身份标识;
    所述应用服务器用于向所述验证服务器发起身份验证请求,向所述验证服务器发送所述标识生成装置生成的标识并得到所述验证服务器的身份验证结果;
    所述标识生成装置包括:一个或多个处理单元、存储器,所述存储器用于存储软件程序以及模块,且所述处理单元通过运行存储在所述存储器内的软件程序以及模块,完成以下操作:
    获取用户所在位置的三维坐标以及用户视野朝向的方向;
    根据所述三维坐标以及所述方向生成虚拟点阵,所述虚拟点阵中的每个虚拟点均具有唯一的坐标及编号;
    显示所述虚拟点阵;
    获取用户对所述虚拟点阵中的虚拟点的选择结果;
    根据所述选择结果生成标识。
  36. 一种存储介质,其特征在于,所述存储介质中存储有一个或者一个以上程序,所述一个或者一个以上程序由处理单元加载并执行以实现如权利要求27至30中任一权利要求所述的基于虚拟现实的标识生成方法。
  37. 一种存储介质,其特征在于,所述存储介质中存储有一个或者一个以上程序,所述一个或者一个以上程序由处理单元加载并执行以实现如权利要求31至33中任一权利要求所述的身份验证方法。
PCT/CN2017/095640 2016-08-19 2017-08-02 基于虚拟现实场景的认证方法、虚拟现实设备及存储介质 WO2018032970A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17840945.4A EP3502939B1 (en) 2016-08-19 2017-08-02 Authentication method based on virtual reality scene, virtual reality device, and storage medium
US16/205,708 US10868810B2 (en) 2016-08-19 2018-11-30 Virtual reality (VR) scene-based authentication method, VR device, and storage medium

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201610695148.0A CN106131057B (zh) 2016-08-19 2016-08-19 基于虚拟现实场景的认证和装置
CN201610695148.0 2016-08-19
CN201610907039.0A CN106527887B (zh) 2016-10-18 2016-10-18 虚拟物体选取方法、装置及vr系统
CN201610907039.0 2016-10-18
CN201610954866.5A CN107992213B (zh) 2016-10-27 2016-10-27 一种基于虚拟现实的标识生成方法以及身份验证方法
CN201610954866.5 2016-10-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/205,708 Continuation US10868810B2 (en) 2016-08-19 2018-11-30 Virtual reality (VR) scene-based authentication method, VR device, and storage medium

Publications (1)

Publication Number Publication Date
WO2018032970A1 true WO2018032970A1 (zh) 2018-02-22

Family

ID=61197326

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/095640 WO2018032970A1 (zh) 2016-08-19 2017-08-02 基于虚拟现实场景的认证方法、虚拟现实设备及存储介质

Country Status (3)

Country Link
US (1) US10868810B2 (zh)
EP (1) EP3502939B1 (zh)
WO (1) WO2018032970A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773658A (zh) * 2020-07-03 2020-10-16 珠海金山网络游戏科技有限公司 一种基于计算机视觉库的游戏交互方法及装置

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110036356B (zh) * 2017-02-22 2020-06-26 腾讯科技(深圳)有限公司 Vr系统中的图像处理
WO2019051813A1 (zh) * 2017-09-15 2019-03-21 达闼科技(北京)有限公司 一种目标识别方法、装置和智能终端
US11392998B1 (en) * 2018-08-22 2022-07-19 United Services Automobile Association (Usaa) System and method for collecting and managing property information
WO2024008519A1 (en) * 2022-07-07 2024-01-11 Gleechi Ab Method for real time object selection in a virtual environment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368288A (zh) * 2011-09-19 2012-03-07 中兴通讯股份有限公司 一种验证密码的方法及应用该方法的移动终端
CN104182670A (zh) * 2013-05-21 2014-12-03 百度在线网络技术(北京)有限公司 通过穿戴式设备进行认证的方法和穿戴式设备
US20160034039A1 (en) * 2013-03-21 2016-02-04 Sony Corporation Information processing apparatus, operation control method and program
US20160188861A1 (en) * 2014-12-31 2016-06-30 Hand Held Products, Inc. User authentication system and method
CN105867637A (zh) * 2016-04-29 2016-08-17 乐视控股(北京)有限公司 基于虚拟现实设备的认证方法、装置及系统
CN105955470A (zh) * 2016-04-26 2016-09-21 乐视控股(北京)有限公司 一种头盔显示器的控制方法及装置
CN106131057A (zh) * 2016-08-19 2016-11-16 腾讯科技(深圳)有限公司 基于虚拟现实场景的认证和装置
CN106527887A (zh) * 2016-10-18 2017-03-22 腾讯科技(深圳)有限公司 虚拟物体选取方法、装置及vr系统

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1787532A (zh) 2004-12-07 2006-06-14 Nvlsoft有限公司 提供3d图像生成服务的系统与方法
JP4376292B2 (ja) 2008-03-24 2009-12-02 株式会社コナミデジタルエンタテインメント 指示内容決定装置、指示内容決定方法、ならびに、プログラム
US8914854B2 (en) * 2008-09-11 2014-12-16 International Business Machines Corporation User credential verification indication in a virtual universe
EP2189884A1 (fr) 2008-11-18 2010-05-26 Gemalto SA Clavier virtuel projeté et sécurisé
US20100153722A1 (en) * 2008-12-11 2010-06-17 International Business Machines Corporation Method and system to prove identity of owner of an avatar in virtual world
CA2658174A1 (en) * 2009-03-18 2010-09-18 Stephane Duguay System to provide virtual avatars having real faces with biometric identification
FR2960986A1 (fr) 2010-06-04 2011-12-09 Thomson Licensing Procede de selection d’un objet dans un environnement virtuel
CN102446192A (zh) 2010-09-30 2012-05-09 国际商业机器公司 在虚拟世界中评估关注度的方法和装置
CN103135930B (zh) 2013-02-05 2017-04-05 深圳市金立通信设备有限公司 一种触摸屏控制方法及设备
CN103279304B (zh) 2013-06-03 2016-01-27 贝壳网际(北京)安全技术有限公司 一种显示选中图标的方法、装置及移动设备
KR20150050825A (ko) * 2013-11-01 2015-05-11 삼성전자주식회사 보안 정보를 포함하는 컨텐츠의 표시 방법 및 시스템
CN103701614B (zh) 2014-01-15 2018-08-10 网易宝有限公司 一种身份验证方法及装置
KR102219464B1 (ko) * 2014-05-23 2021-02-25 삼성전자주식회사 보안 운용 방법 및 이를 지원하는 전자 장치
CN104102357B (zh) 2014-07-04 2017-12-19 Tcl集团股份有限公司 一种虚拟场景中的3d模型检测方法及装置
CN104408338B (zh) 2014-10-31 2017-07-28 上海理工大学 一种三维网格模型版权认证方法
CN105955453A (zh) 2016-04-15 2016-09-21 北京小鸟看看科技有限公司 一种3d沉浸式环境下的信息输入方法
US20170364920A1 (en) * 2016-06-16 2017-12-21 Vishal Anand Security approaches for virtual reality transactions

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368288A (zh) * 2011-09-19 2012-03-07 中兴通讯股份有限公司 一种验证密码的方法及应用该方法的移动终端
US20160034039A1 (en) * 2013-03-21 2016-02-04 Sony Corporation Information processing apparatus, operation control method and program
CN104182670A (zh) * 2013-05-21 2014-12-03 百度在线网络技术(北京)有限公司 通过穿戴式设备进行认证的方法和穿戴式设备
US20160188861A1 (en) * 2014-12-31 2016-06-30 Hand Held Products, Inc. User authentication system and method
CN105955470A (zh) * 2016-04-26 2016-09-21 乐视控股(北京)有限公司 一种头盔显示器的控制方法及装置
CN105867637A (zh) * 2016-04-29 2016-08-17 乐视控股(北京)有限公司 基于虚拟现实设备的认证方法、装置及系统
CN106131057A (zh) * 2016-08-19 2016-11-16 腾讯科技(深圳)有限公司 基于虚拟现实场景的认证和装置
CN106527887A (zh) * 2016-10-18 2017-03-22 腾讯科技(深圳)有限公司 虚拟物体选取方法、装置及vr系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3502939A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773658A (zh) * 2020-07-03 2020-10-16 珠海金山网络游戏科技有限公司 一种基于计算机视觉库的游戏交互方法及装置
CN111773658B (zh) * 2020-07-03 2024-02-23 珠海金山数字网络科技有限公司 一种基于计算机视觉库的游戏交互方法及装置

Also Published As

Publication number Publication date
EP3502939A4 (en) 2019-12-11
EP3502939B1 (en) 2023-06-14
US20190098005A1 (en) 2019-03-28
EP3502939A1 (en) 2019-06-26
US10868810B2 (en) 2020-12-15

Similar Documents

Publication Publication Date Title
WO2018032970A1 (zh) 基于虚拟现实场景的认证方法、虚拟现实设备及存储介质
CN109799900B (zh) 手腕可安装计算通信和控制设备及其执行的方法
CN102253712B (zh) 用于共享信息的识别系统
CN105264460B (zh) 全息图对象反馈
WO2017002414A1 (ja) プログラム
US20120278904A1 (en) Content distribution regulation by viewing user
CN107852573A (zh) 混合现实社交交互
US20220382051A1 (en) Virtual reality interaction method, device and system
CN109999491A (zh) 在头戴式显示器上渲染图像的方法和计算机可读存储介质
JP6234622B1 (ja) 仮想空間を介して通信するための方法、当該方法をコンピュータに実行させるためのプログラム、および当該プログラムを実行するための情報処理装置
CN105245542B (zh) 账号授权方法、服务器及客户端
CN110496392B (zh) 虚拟对象的控制方法、装置、终端及存储介质
CN105915766A (zh) 基于虚拟现实的控制方法和装置
US20210089639A1 (en) Method and system for 3d graphical authentication on electronic devices
WO2020114176A1 (zh) 对虚拟环境进行观察的方法、设备及存储介质
CN107562201A (zh) 定向交互方法、装置、电子设备及存储介质
CN110115842A (zh) 应用处理系统、应用处理方法以及应用处理程序
Chen et al. A case study of security and privacy threats from augmented reality (ar)
CN112827166A (zh) 基于牌类对象的交互方法、装置、计算机设备及存储介质
CN104407838B (zh) 一种生成随机数及随机数组的方法和设备
CN112818733B (zh) 信息处理方法、装置、存储介质及终端
CN109806583A (zh) 用户界面显示方法、装置、设备及系统
CN112995687A (zh) 基于互联网的互动方法、装置、设备及介质
JP2018116684A (ja) 仮想空間を介して通信するための方法、当該方法をコンピュータに実行させるためのプログラム、および当該プログラムを実行するための情報処理装置
CN113730906B (zh) 虚拟对局的控制方法、装置、设备、介质及计算机产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17840945

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017840945

Country of ref document: EP

Effective date: 20190319