CN107977834B - Data object interaction method and device in virtual reality/augmented reality space environment - Google Patents


Publication number
CN107977834B
CN107977834B (application CN201610921190.XA)
Authority
CN
China
Prior art keywords: user, information, data object, interaction, providing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610921190.XA
Other languages
Chinese (zh)
Other versions
CN107977834A (en)
Inventor
张洁
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201610921190.XA (CN107977834B)
Priority to PCT/CN2017/105359 (WO2018072617A1)
Publication of CN107977834A
Application granted
Publication of CN107977834B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/08 Payment architectures
    • G06Q20/12 Payment architectures specially adapted for electronic shopping systems
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4014 Identity check for transactions
    • G06Q20/40145 Biometric identity checks
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0603 Catalogue ordering
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Abstract

The invention provides a data object interaction method and device in a virtual reality/augmented reality space environment. The virtual reality data object interaction method can provide the internal space environment of a virtual reality shop object, so that the user obtains the experience of being placed in a three-dimensional space environment. The specific interaction process can be realized through voice: the user only needs to speak, with no need for manual input equipment such as an external handle. Most importantly, in the payment link no manual payment operation is required; the legality of the payment operation is judged through the voiceprint characteristics of the user, which then determines whether payment is triggered, ensuring the safety of the user's shopping payment in a virtual reality scene. The technical scheme realized on the basis of augmented reality technology adopts similar technical ideas and can achieve the same technical effect.

Description

Data object interaction method and device in virtual reality/augmented reality space environment
Technical Field
The present application relates to the field of virtual reality/augmented reality technologies, and in particular, to a method and an apparatus for data object interaction in a virtual reality/augmented reality space environment.
Background
Virtual Reality (VR) is a computer simulation system that can create, and let the user experience, a virtual world. The system generates virtual environments that act on the user's vision, hearing and touch, producing an immersive feeling of being inside the virtual environment. The so-called virtual world is a collection of virtual environments or given simulation objects. Virtual reality is a new technology that has developed in recent years.
Augmented Reality (AR) is a new technology that seamlessly integrates real-world information with virtual-world information. Entity information that would otherwise be difficult to experience within a certain time and space range of the real world (visual information, sound, taste, touch and the like) is simulated by computer technology and overlaid onto the real world to be perceived by the user, achieving a sensory experience beyond reality.
At present, VR/AR technology is still at a development stage, and no virtual reality/augmented reality interaction system yet covers the complete online shopping and payment flow.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a data object interaction method in a virtual reality/augmented reality space environment that completely frees the user's hands while the user wears a head-mounted display device. It offers the user a scheme for completing shopping payment without hand operation, makes the shopping experience in the virtual reality/augmented reality space environment closer to that of an offline physical shop, and provides the user with a one-stop immersive shopping experience.
In addition, the invention also provides a data object interaction device in the virtual reality/augmented reality space environment, so as to ensure the application and realization of the method in reality.
In a first aspect, the present invention provides a method for data object interaction in a virtual reality space environment, the method comprising:
providing a virtual reality shop object inner space environment by a client, wherein the shop object inner space environment comprises at least one interactive data object;
determining a target data object of interest to a user;
providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information and resource information to be paid associated with the target data object;
and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
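The first aspect ends with a voiceprint check that gates payment. A minimal sketch of that final step, assuming the voiceprint model reduces to a pre-enrolled embedding compared by cosine similarity; the actual model, threshold, and embedding pipeline are not specified in the application, so all names here are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def verify_and_pay(voice_embedding, enrolled_embedding, threshold=0.8):
    """Match the user's voice against the pre-trained voiceprint model;
    trigger the payment operation only when the match succeeds."""
    score = cosine_similarity(voice_embedding, enrolled_embedding)
    return "payment_triggered" if score >= threshold else "payment_rejected"
```

In practice the enrolled embedding would come from the training phase of the voiceprint model, and the threshold would be tuned against false-accept/false-reject rates.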
In a second aspect the present invention provides a method of data object interaction in a virtual reality space environment, the method comprising:
the method comprises the steps that a server stores virtual reality shop object internal space environment data, wherein the shop object internal space environment data comprise at least one interactive data object, and the interactive data object corresponds to an interactive area range and is associated with preset information content;
providing the virtual reality shop object internal space environment data to a client so that the client can provide the virtual reality shop object internal space environment data and determine a target data object interested by a user; providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information and resource information to be paid associated with the target data object; and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
In a third aspect the invention provides a method of data object interaction in an augmented reality space environment, the method comprising:
a client obtains a three-dimensional space model of the internal space environment of the entity shop; the physical store interior including a plurality of items;
after the three-dimensional space model is subjected to space matching with the entity shop through a preset Augmented Reality (AR) device, providing interaction area range information corresponding to the goods in the view range of a user of the AR device;
determining a target item of interest to a user;
providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information associated with the target goods and resource information to be paid; and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
In a fourth aspect, the present invention provides a method for data object interaction in an augmented reality space environment, the method comprising:
the method comprises the steps that a server stores a three-dimensional space model of a space environment inside a physical shop, wherein the interior of the physical shop comprises a plurality of goods, interaction area range information corresponding to the goods, and interaction response information of data objects related to the interaction area ranges;
providing the three-dimensional space model to a client, so that the client provides interaction area range information corresponding to the goods in the user vision range of an AR device after the client performs space matching on the three-dimensional space model and the entity shop through a preset augmented reality AR device, and determines target goods which are interested by a user; providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information associated with the target goods and resource information to be paid; and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
In a fifth aspect the present invention provides an apparatus for data object interaction in a virtual reality space environment, the apparatus comprising:
the virtual reality environment providing module is used for providing a virtual reality shop object internal space environment, and the shop object internal space environment comprises at least one interactive data object;
a target data object determination module for determining a target data object of interest to a user;
the order information providing module is used for providing information content related to order confirmation, and the information content related to order confirmation comprises the following steps: attribute information and resource information to be paid associated with the target data object;
and the matching module is used for matching the voice of the user with the pre-trained voiceprint model and triggering payment operation when the matching is successful.
In a sixth aspect the present invention provides an apparatus for data object interaction in a virtual reality space environment, said apparatus comprising:
the environment data storage module is used for storing virtual reality shop object internal space environment data, wherein the shop object internal space environment data comprises at least one interactive data object, and the interactive data object corresponds to an interactive region range and is associated with preset information content;
the environment data providing module is used for providing the virtual reality shop object internal space environment data to a client so that the client can provide the virtual reality shop object internal space environment data and determine a target data object interested by a user; providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information and resource information to be paid associated with the target data object; and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
In a seventh aspect the present invention provides an apparatus for data object interaction in an augmented reality space environment, the apparatus comprising:
the space model acquisition module is used for acquiring a three-dimensional space model of the internal space environment of the entity shop, the interior of the entity shop including a plurality of goods;
the space model matching module is used for providing interaction area range information corresponding to the goods in the visual field range of a user of the AR equipment after the three-dimensional space model is subjected to space matching with the entity shop through the preset augmented reality AR equipment;
the target goods determining module is used for determining target goods which are interested by the user;
the order information providing module is used for providing information content related to order confirmation, and the information content related to order confirmation comprises the following steps: attribute information associated with the target goods and resource information to be paid;
and the matching module is used for matching the voice of the user with the pre-trained voiceprint model and triggering payment operation when the matching is successful.
In an eighth aspect the present invention provides an apparatus for data object interaction in an augmented reality space environment, the apparatus comprising:
the storage module is used for storing a three-dimensional space model of the internal space environment of an entity shop, wherein the interior of the entity shop comprises a plurality of goods, interaction area range information corresponding to the goods, and interaction response information of data objects associated with the interaction area ranges;
the space model providing module is used for providing the three-dimensional space model for a client so that the client can provide the range information of the interaction area corresponding to the goods in the view range of the user of the AR equipment after the client performs space matching on the three-dimensional space model and the entity shop through a preset Augmented Reality (AR) device, and the target goods which are interested by the user are determined; providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information associated with the target goods and resource information to be paid; and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
Compared with the prior art, the technical scheme provided by the invention has the following advantages:
By utilizing the technical scheme provided by the invention, the internal space environment of a virtual reality shop object can be provided, so that the user obtains the experience of being placed in a three-dimensional space environment. The specific interaction process can be realized through voice: the user only needs to speak, with no external handle or other manual input equipment. Most importantly, in the payment link no manual payment operation is required; the legality of the payment operation is judged through the user's voiceprint characteristics, which then determines whether payment is triggered, ensuring the safety of the user's shopping payment in the virtual reality scene.
In addition, the technical scheme provided by the invention can be realized based on an augmented reality technology, and has the same advantages as the above.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without inventive labor.
Fig. 1 is a schematic structural diagram of a virtual reality device provided in an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for data object interaction in a virtual reality space environment at a client according to an embodiment of the present disclosure;
FIG. 3-1 is a diagram illustrating a structure of information related to a data object provided in an embodiment of the present application;
FIG. 3-2 is a diagram illustrating an example of detailed information of a data object provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating "information content related to order confirmation information" provided in an embodiment of the present application;
FIG. 5-1 is an exemplary diagram of a virtual reality main navigation space provided by an embodiment of the present application;
FIG. 5-2 is a diagram of a graphical presentation structure provided by an embodiment of the present application;
fig. 6 is a flowchart illustrating a method for data object interaction in a virtual reality space environment at a server according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating a method for data object interaction in an augmented reality space environment at a client according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a method for interacting data objects in an augmented reality space environment at a server according to an embodiment of the present disclosure;
FIG. 9 is a diagram illustrating an example of a data object interaction apparatus in a virtual reality space environment of a client according to an embodiment of the present application;
FIG. 10 is a diagram illustrating an example of a data object interaction apparatus in a virtual reality space environment on a server side according to an embodiment of the present application;
FIG. 11 is a diagram illustrating an example of a data object interaction device in an augmented reality space environment of a client according to an embodiment of the present application;
fig. 12 is a diagram illustrating an example of a data object interaction apparatus in an augmented reality space environment at a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiment of the application, to make the user's online shopping process closer to the shopping experience in an actual offline physical store, it is proposed to provide the internal space environment of a store object in a virtual reality manner. The internal space environment may include a plurality of interactive data objects (e.g., commodity objects), and the user may browse and interact with these data objects in the virtual reality space environment. Because the virtual reality space environment has a three-dimensional effect, the data objects the user sees are no longer a simple list; instead, as in an offline physical shop, the user can see shelves arranged in the three-dimensional space with data objects placed on them, take a specific data object down from a shelf to view its details, and so on, making the experience closer to shopping in an actual offline physical shop.
In addition, in the embodiment of the application, to further improve the user experience, the user's shopping intention during interaction with the system can be judged through speech recognition, without external input devices such as handles, and the payment operation can then be completed through voiceprint feature recognition. Thus, without the assistance of any external input device, the user can obtain a shopping experience closer to an offline physical shop through voice interaction alone, which enhances the user's sense of immersion.
To implement the technical solution, in terms of software, virtual reality internal space environment data of a plurality of shop objects can be created and provided to the client. The client is mainly used for front-end display of data and interaction with the user. The client may be an independent application program, or a function module in a comprehensive application program, for example, a function module in a mobile terminal App such as "Mobile Taobao" or "Taobao".
In terms of hardware, a virtual reality device can be used. This may be an integrated virtual reality device combining storage, computation and on-screen display, an external virtual reality device, or a mobile virtual reality device that has only the screen display function. For the integrated device, the client program can be installed or built in directly, since the device itself provides storage and computation. The "external" virtual reality device depends on an external PC (personal computer) or other device: the client program is installed on the PC, and the external virtual reality device is connected to the PC to present the virtual reality space environment and interaction. The "mobile" virtual reality device, such as a "VR glasses box", is used together with an intelligent mobile terminal device such as a mobile phone: the phone is placed into the glasses box, storage, computation and other functions are performed by the phone, and the glasses box only provides the on-screen display. Therefore, referring to fig. 1, for this type of virtual reality device the client program may be installed or built into the intelligent mobile terminal device, which is then connected with the mobile virtual reality device; the two cooperate to implement the functions in the embodiment of the present application.
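The three hardware categories above differ mainly in where the client program runs. A small sketch of that dispatch, with illustrative names (the application does not define an API for this):

```python
from enum import Enum, auto

class VRDeviceType(Enum):
    INTEGRATED = auto()  # storage, computation and display all on the headset
    EXTERNAL = auto()    # display only; tethered to an external PC
    MOBILE = auto()      # "VR glasses box" holding a smartphone

def client_host(device):
    """Return where the client program is installed for each device
    category described above (return values are illustrative labels)."""
    if device is VRDeviceType.INTEGRATED:
        return "headset"
    if device is VRDeviceType.EXTERNAL:
        return "pc"
    return "smartphone"
```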
First, a detailed description is given below of a specific implementation scheme provided in the embodiments of the present application from the perspective of the foregoing client.
Example one
In a first embodiment of the present application, a method for data object interaction in a virtual reality space environment is first provided, and referring to fig. 2, the method may include the following steps 201 to 204:
step 201: providing a virtual reality shop object inner space environment by a client, wherein the shop object inner space environment comprises at least one interactive data object;
the shop object may be a "shop" opened by an online sales platform such as a merchant, and in the embodiment of the present application, a physical shop, such as a supermarket, a clothing exclusive shop, or the like, may be associated with the shop object online. In specific implementation, a simulated shop interior space environment can be generated by simulating the layout, the commodity display mode and the like in an off-line shop in a modeling mode, or in order to further improve the reality experience of the environment, the shop object interior space environment can be established by recording videos in an off-line entity shop, and when the shop interior space environment is provided for a user in virtual reality equipment, the video can be played, so that the user can obtain a shopping experience closer to reality.
Specifically, when the virtual reality space environment shown to the user is provided by recorded video, each frame of the picture is a single whole. For the user to interact with a data object in the picture (to view details, purchase, and so on), the recorded video must first be processed. In this processing, data objects appearing in the video are marked: for example, at the position where a certain commodity appears in the video picture, identification information such as the commodity's ID can be entered by clicking with a mouse, so that in the subsequent interaction with the user, the detailed information and other content corresponding to that ID can be retrieved from a database stored in the background. Through such marking, the data objects appearing in the video picture are made interactive.
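The marking step above amounts to a lookup table from video regions to commodity IDs. A minimal sketch, assuming rectangular marker regions valid over a frame range; the field names and IDs are hypothetical:

```python
# Hypothetical marker table: each entry maps a rectangular region
# (x0, y0, x1, y1) in a range of video frames to the ID of the
# commodity marked there during preprocessing.
MARKERS = [
    {"frames": (0, 300), "rect": (100, 80, 220, 240), "item_id": "sku-1001"},
    {"frames": (120, 600), "rect": (400, 150, 520, 310), "item_id": "sku-2002"},
]

def hit_test(frame, x, y):
    """Return the ID of the commodity marked at point (x, y) in the
    given video frame, or None if the point hits no marker."""
    for m in MARKERS:
        f0, f1 = m["frames"]
        x0, y0, x1, y1 = m["rect"]
        if f0 <= frame <= f1 and x0 <= x <= x1 and y0 <= y <= y1:
            return m["item_id"]
    return None
```

The returned ID would then key into the background database to fetch the detail content described below.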
After entering the space environment inside the shop object, the user can interact with a data object and view its associated information content, such as the data object's detailed information.
For example, in a specific implementation, the detailed information of the data object may include stereoscopic image information of the data object and textual description information. A process of the detail information "diverging" from the marked point location of the data object, as shown in fig. 3-1, may also be demonstrated. The stereoscopic image and the textual description can be displayed separately, with the description shown around the stereoscopic image. In addition, an operation control for rotating the stereoscopic image can be provided, so that when the user's sight focus enters the area where the control is located, the stereoscopic image rotates, displaying the data object from multiple viewing angles. Such operation controls may include multiple controls for rotation in different directions, e.g., both clockwise and counterclockwise.
For example, fig. 3-2 shows the detail information of an alarm clock presented after the user's gaze focuses on it (divided into left and right frames, corresponding to the binocular vision of human eyes). The detail information includes a stereoscopic image of the alarm clock at the central position 301, with multiple items of detail information such as the data object's name and price displayed at the periphery 302. Below the central stereoscopic image, two rotation controls 303 are provided for rotating the stereoscopic image clockwise and counterclockwise, respectively. It should be noted that the embodiment of the present application does not limit the specific detail content shown in fig. 3-2, so the text or picture information displayed in the drawing does not affect the protection scope of the embodiment of the present application.
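The gaze-driven rotation controls can be reduced to a point-in-rectangle test on the sight focus. A sketch under assumed screen coordinates and rotation steps (the layout values and step size are illustrative, not from the application):

```python
# Two hypothetical rotation controls below the stereoscopic image, as in
# FIG. 3-2: dwelling the sight focus on one rotates the image each tick.
CONTROLS = {
    "rotate_cw":  {"rect": (260, 400, 300, 440), "delta": -15},  # degrees/tick
    "rotate_ccw": {"rect": (340, 400, 380, 440), "delta": +15},
}

def update_rotation(angle, gaze_x, gaze_y):
    """Advance the stereoscopic image's rotation angle while the user's
    gaze rests inside one of the rotation-control regions."""
    for ctrl in CONTROLS.values():
        x0, y0, x1, y1 = ctrl["rect"]
        if x0 <= gaze_x <= x1 and y0 <= gaze_y <= y1:
            return (angle + ctrl["delta"]) % 360
    return angle  # gaze outside both controls: no rotation
```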
The stereoscopic image of the data object can be generated from photos of the actual goods: multiple photos are taken from multiple viewing angles and synthesized into a three-dimensional display effect. By rotating the stereoscopic image, the viewing angle can be changed, so that the details of the data object can be understood comprehensively from multiple angles.
In the embodiment of the present application, in order to provide a 360-degree stereoscopic image of a data object with a realistic presentation effect, the preliminary data processing operation may further include: photographing the goods corresponding to a specific data object from a plurality of photographing angles and storing the resulting photos, so that when the user needs to view stereoscopic images from multiple angles, they can be provided according to the stored photos.
Step 202: determining a target data object of interest to a user;
after entering the space environment inside the shop object, the user can interact with the data objects to determine a shopping target; for the system, the user's shopping intention can be identified according to the user's interaction behavior.
In particular, when recognizing the user's intention, the embodiment of the present application can determine the shopping intention through voice interaction, without requiring an external input device such as a handle or a mode in which the VR device emits certain rays; alternatively, the shopping intention can be recognized by identifying the position of the user's sight focus.
In this embodiment of the application, in a specific implementation, step 202 may be implemented as follows:
and identifying whether the voice of the user contains information content associated with a certain data object in the interactive data objects, and if so, taking the data object as a target data object interested by the user.
In specific implementation, after viewing the detailed information of a data object, the user can input voice through a recording component of the VR device, such as a microphone, and indicate the target data object of interest through voice interaction. The client receives and recognizes the voice, and determines whether it contains information content associated with one or more of the interactive data objects in the currently provided virtual reality shop object internal space environment, where such associated content is content that can be unambiguously mapped to a specific interactive data object. In practical application, it may further be identified whether the user's voice contains content capable of representing a purchase intention, which may include words that characterize the intent to purchase, such as "join shopping cart," "buy immediately," and so on.
In this embodiment of the application, in a specific implementation, step 202 may be implemented as follows:
determining a user sight focus position; and the number of the first and second groups,
and when the sight focus position enters the range of the interaction area of a certain data object and the staying time reaches a preset time threshold, determining the data object as a target data object interested by the user.
If, after viewing the detailed information of the data object, the user considers that it meets his or her requirements, the user may form an intention to purchase it. At this time, the user can aim the sight focus at the interaction area range where the data object is located, and after the sight focus stays there for a certain time, the system can identify the user's shopping intention.
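The gaze-dwell ("fusing") determination described above can be sketched as follows. This is an illustrative outline only, not the claimed implementation; the region layout, dwell threshold, and class names are all assumptions for the example.

```python
import time

class GazeDwellSelector:
    """Select a data object once the gaze focus stays inside its
    interaction area range for a preset dwell-time threshold.
    All names and the rectangle-based region model are illustrative."""

    def __init__(self, regions, dwell_threshold=3.0):
        # regions: {object_id: (x_min, y_min, x_max, y_max)}
        self.regions = regions
        self.dwell_threshold = dwell_threshold
        self._current = None
        self._enter_time = None

    def _hit(self, x, y):
        for obj_id, (x0, y0, x1, y1) in self.regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return obj_id
        return None

    def update(self, x, y, now=None):
        """Feed one gaze sample; return the object id when the dwell fires."""
        now = time.monotonic() if now is None else now
        obj_id = self._hit(x, y)
        if obj_id != self._current:
            # Gaze moved to a different region (or off all regions): restart timer.
            self._current, self._enter_time = obj_id, now
            return None
        if obj_id is not None and now - self._enter_time >= self.dwell_threshold:
            self._enter_time = now  # reset so the selection does not re-fire at once
            return obj_id
        return None
```

A caller would feed this with gaze samples from the VR device's head/eye tracking at some frame rate; when `update` returns an id, the corresponding data object is treated as the target of interest.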
In addition, when providing the detail information of the data object, an operation control for performing a purchase operation on the data object may also be provided, such as a "buy immediately" button as shown at 304 in fig. 3-2. When the user's sight focus is within the area range of the "buy immediately" button and the staying time reaches a preset time threshold, the data object can be determined to be the target data object the user is interested in. That is, if the user considers that the data object meets his or her requirements after checking its details, the user may form an intention to purchase it; at this time, the user can focus on the area range where the "operation control for performing a purchase operation on the data object" is located, and after the sight focus stays there for a certain time, the system can recognize the purchase intention.
In specific implementation, the target data object of interest can also be determined by combining the position of the user's sight focus with the user's voice: first determine the data object of interest according to the sight focus position, and then identify whether the user's voice contains content capable of representing a purchase intention, such as words like "join shopping cart," "buy immediately," and so on. In practical application, words capable of representing purchase intention can be collected in advance; then, when recognizing the user's voice, the voice content is converted into text, word segmentation is performed on the text, and recognition is carried out by matching word forms and word meanings.
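A minimal sketch of the keyword-matching step on already-recognized speech text is given below. The intent vocabulary and the simple substring matching are assumptions for illustration; a real system would, as described above, apply proper word segmentation and word-form/word-meaning matching.

```python
# Hypothetical pre-collected vocabulary of purchase-intent phrases.
PURCHASE_INTENT_WORDS = {"join shopping cart", "buy immediately", "buy now", "purchase"}

def contains_purchase_intent(recognized_text, intent_words=PURCHASE_INTENT_WORDS):
    """Return True if the recognized speech text contains any purchase-intent phrase."""
    text = recognized_text.lower()
    return any(phrase in text for phrase in intent_words)

def match_target_object(recognized_text, interactive_objects):
    """Return the first interactive data object whose associated name
    appears in the recognized speech text, or None."""
    text = recognized_text.lower()
    for name in interactive_objects:
        if name.lower() in text:
            return name
    return None
```

Combined with the gaze-focus step, `match_target_object` would run only over the data objects in the current shop environment, and `contains_purchase_intent` would confirm that the utterance actually expresses an intent to buy.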
Step 203: providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information and resource information to be paid associated with the target data object;
upon identifying the user's shopping intention via step 202, relevant information for order confirmation, such as "information content relevant to order confirmation" shown at 401 in fig. 4, may be provided, and when the information content relevant to order confirmation is implemented, the information content may include various information, such as determining a receiving address, a name of a purchased data object, a number of purchased data objects, whether to use a virtual resource such as a "coupon", and the like.
In a particular implementation, a first information panel may be provided, on which the information content related to order confirmation is presented. The area of the first information panel may be smaller than the user's field of view. In order to make it convenient to exit the presentation of the information content currently related to order confirmation, a corresponding implementation manner is also provided in the embodiment of the present application: the first information panel is closed when the user's gaze focus position moves away from it.
Specifically, the gaze focus of the user may be tracked, and when the gaze focus position of the user moves away from the first information panel, the first information panel may be closed, and accordingly, the information content currently related to order confirmation is no longer displayed, and the user may continue to browse other data objects. In other words, in this way, it is not necessary to provide an operation control such as a "close button" in the first information panel, and the user can trigger the closing of the first information panel only by moving the sight focus away from the area where the information panel is located, so that the user can operate more conveniently.
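The move-away-to-close behavior reduces, in essence, to a hit test of the gaze focus against the panel's area; a one-function sketch follows (rectangle model and names are illustrative assumptions):

```python
def panel_stays_open(panel_rect, gaze_x, gaze_y):
    """Return True while the gaze focus remains inside the first information
    panel; as soon as it leaves, the panel is closed, with no close button
    required. panel_rect is (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = panel_rect
    return x0 <= gaze_x <= x1 and y0 <= gaze_y <= y1
```

The client would evaluate this per gaze sample and tear down the panel on the first False, letting the user resume browsing other data objects.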
On the page displaying the information content related to order confirmation, operation controls for confirming various items of information can be provided respectively.
In a specific implementation, the information content related to order confirmation may further include: the first operation control is used for modifying the order related information;
and when the sight focus of the user enters the area range where the first operation control is located and the staying time reaches a preset time threshold, providing a second information panel, and providing content for modifying the order related information in the second information panel.
For example, assuming that the user needs to modify the current default shipping address, this can be triggered through the first operation control. Specifically, in the embodiment of the present application, the triggering may also be performed in a gaze "fusing" manner; that is to say, when the user's sight focus enters the area range where the first operation control is located and the staying time reaches the preset time threshold, a second information panel is provided, in which the content for modifying the order related information is presented. If the user selects another delivery address, the second information panel is closed automatically; if, after browsing the delivery addresses, the user does not need to modify anything, the second information panel can be closed through an operation control such as a close button in the panel, or closed through voice control. At this point, the display of the first information panel continues.
In a specific implementation, the information content related to order confirmation may further include: a second operation control for performing an increase/decrease operation on the number of target data objects;
and when the focal position of the sight of the user enters the second operation control and the staying time reaches a preset time threshold, executing an operation of adding one to or subtracting one from the number of the target data objects on the basis of the original value.
And if the sight focus position enters the second operation control and does not leave after the staying time reaches a preset time threshold, continuing to perform the operation of adding one or subtracting one to the number of the target data objects, and when the operation of adding one or subtracting one is continuously performed, the time interval of execution is smaller than the preset time threshold.
In practical applications, there may be a need to purchase multiple items, in which case the user needs to operate the second operation control multiple times in succession. Without special processing, the user would have to focus on the second operation control, stay for e.g. 3s, add 1 to the number of data objects, move the sight focus away from the control, then enter it again, and so on, repeating many times; this obviously takes more time and is inconvenient for the user.
For this reason, in the embodiment of the present application, the following special processing may also be performed: if the sight focus position enters the second operation control and does not leave after the staying time reaches the preset time threshold, the operation of adding one to or subtracting one from the number of target data objects continues, and when the operation is performed continuously, the execution interval is smaller than the preset time threshold. That is, when the user focuses on such a second operation control (e.g., a control that performs an "increase" operation), a first fusing process is triggered; after the preset time threshold, 1 is added to the number of data objects. Then, if the user does not move the line of sight away, a second fusing is triggered, and after each fusing 1 is again added to the number. The time spent in each subsequent fusing process may be shortened, for example 3s for the first, 2s for the second, 1s for the third, and so on. Of course, a shortest fusing time may also be imposed; for example, if the fusing time cannot be shorter than 1s, the fourth and subsequent fusing processes each take 1s.
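The shrinking-interval fusing schedule (3s, 2s, 1s, then a 1s floor) can be sketched as follows; the class shape, the linear 1s decay, and the `hold` simulation are assumptions for illustration, not the claimed implementation.

```python
class QuantityStepper:
    """Repeat the +1/-1 operation while the gaze stays on the second
    operation control, shortening the fuse interval after each firing
    down to a minimum interval."""

    def __init__(self, initial=1, first_interval=3.0, min_interval=1.0, decay=1.0):
        self.quantity = initial
        self.first_interval = first_interval
        self.min_interval = min_interval
        self.decay = decay

    def hold(self, duration, step=1):
        """Simulate the gaze staying on the control for `duration` seconds;
        return how many times the fusing fired."""
        interval, elapsed, fires = self.first_interval, 0.0, 0
        while elapsed + interval <= duration:
            elapsed += interval
            self.quantity += step
            fires += 1
            # Each subsequent fusing takes less time, bounded below.
            interval = max(self.min_interval, interval - self.decay)
        return fires
```

With the defaults, holding the gaze for 6 seconds fires at t=3s, t=5s, and t=6s, so a starting quantity of 1 becomes 4.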
After the confirmation operation of each item of information is completed, the order can be submitted, and the subsequent payment operation process is entered. After providing the information content related to order confirmation, the client considers that the user knows the order related information, and then directly executes step 204, or interacts with the user in the following manner, and determines that the user completes the confirmation of the order information, and then executes step 204.
One way is to also provide verification content when providing the information content related to order confirmation, where the verification content is generated according to at least part of the attribute information of the target data object; then, it is identified whether the user's voice contains the verification content, and if so, it is determined that the user has completed the confirmation of the order information.
The attribute information of the target data object refers to information related to the target data object and capable of characterizing the target data object, and may include: the title of the product, the color of the product, the size of the product, the model, the material, the brand, the configuration, the grade, the packaging capacity, the production date, the expiration date, the usage, the price, the production place, etc., the verification contents are generated according to the one or more attribute information of the target data object.
For example, in the embodiment of the present application, the verification content provided to the user mainly makes it convenient for the user to quickly confirm the commodity to be purchased and to confirm the purchase behavior through voice interaction. In addition, the user can be prompted through text to confirm the order by reading out the verification content. The user views the verification content and reads it aloud; the system recognizes the user's voice and judges whether it contains, for example, the content "gold iPhone 7 Plus 256GB", and if so, the user is considered to have confirmed the order information by voice. During subsequent matching, only the voice containing the verification content needs to be matched against the pre-trained voiceprint model, which can ensure the reliability of the matching result, improve the matching efficiency, and simplify the user's interactive operation.
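A sketch of generating verification content from attribute fields and checking it against recognized speech is shown below. The chosen attribute keys and the plain substring check are illustrative assumptions; actual matching would run on the recognized speech text.

```python
def build_verification_content(attributes, keys=("color", "title", "capacity")):
    """Compose verification content from selected attribute fields of the
    target data object, e.g. color + title + capacity (an illustrative choice
    of fields)."""
    parts = [str(attributes[k]) for k in keys if k in attributes]
    return " ".join(parts)

def user_confirmed_order(recognized_text, verification_content):
    """The order counts as confirmed when the recognized speech contains
    the verification content (case-insensitive here for simplicity)."""
    return verification_content.lower() in recognized_text.lower()
```

Because the verification content is built from the object's own attributes, reading it aloud both confirms the order and supplies a speech sample that can then be matched against the voiceprint model.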
In another mode, when providing information content related to order confirmation, the client also provides a check code; then, whether the check code is contained in the voice of the user is identified, and if yes, the user is determined to finish confirming the order information. Wherein the check code may be randomly generated by the client.
In practical application, recognition may fail because the user misreads the check code or pronounces it inaccurately. For such a situation, another string of random numbers may be sent, so that the user performs the operation of confirming the order information again according to the new string. Sending the check code multiple times thus gives the user multiple opportunities to confirm the order information and provides a certain fault tolerance; for example, the user may be allowed N attempts, where N can be set according to actual requirements, e.g., to 3.
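The retry loop above can be sketched as follows; the 4-digit code length, the injectable random source, and the function names are assumptions for the example.

```python
import random

def generate_check_code(length=4, rng=random):
    """Randomly generate a numeric check code on the client side."""
    return "".join(str(rng.randint(0, 9)) for _ in range(length))

def confirm_with_retries(recognized_texts, max_attempts=3, rng=random):
    """Issue a fresh check code per attempt and succeed when the user's
    recognized speech contains it, allowing up to max_attempts tries
    for fault tolerance."""
    for _attempt, text in zip(range(max_attempts), recognized_texts):
        code = generate_check_code(rng=rng)
        if code in text:
            return True
    return False
```

In a real client, each attempt would display the new code and wait for the next utterance rather than consuming a pre-collected list.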
Since it is difficult for anyone other than the user to see the check code, this method can effectively prevent others from illegally triggering the order confirmation operation and effectively ensures the safety of order payment in the virtual reality scene.
Another way is to determine the user gaze focal position; and when the sight focus position enters the range of the interaction area of the information content relevant to order confirmation and the staying time reaches a preset time threshold value, determining that the user has completed the confirmation of the order information.
In a specific implementation, after providing the information content related to order confirmation, the client may trigger a "fusing" mechanism when finding that the focus of the line of sight of the user enters the range of the interaction area of the "information content related to order confirmation", that is, a timer may be used to start timing, and if the staying time in the range of the interaction area reaches a certain time threshold (for example, 3S), it may be determined that the user confirms the order content, and then the client performs step 204.
In another mode, whether words capable of representing the meaning of order confirmation are contained in the voice of the user is identified, and if yes, the fact that the user has completed the order information confirmation is determined.
In a specific implementation, after providing information content related to order confirmation, a client acquires a voice uttered by a user after viewing the information content, and then, the client identifies whether the voice contains a word capable of representing the meaning of confirming an order, where the word capable of representing the meaning of confirming an order may include: the words "place an order", "confirm", "pay", "ok", etc.
Step 204: and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
Because the payment operation concerns the user's property and privacy, in the real world the user provides a bank card and enters a password manually, or uses an intelligent terminal to make an online payment on the spot; such payment behavior can ensure the legality of the user's payment operation. In a virtual reality scene, however, the user cannot carry out such behavior. Therefore, in the embodiment of the present application, the legality of the user's identity also needs to be verified to confirm the legality and safety of the payment and to effectively prevent malicious payment operations by illegitimate persons.
The embodiment of the present application verifies the legality of the user's identity using the user's voiceprint features. Because each person's voiceprint features are unique, they are in general difficult for others to imitate; therefore, verifying the legality of the user's identity through voiceprint features can guarantee the reliability of the verification.
In the embodiment of the present application, several alternative ways are provided as to how to match the user's speech with the pre-trained voiceprint model in step 204, which are explained below.
One way is to collect the user's voice and then match the collected voice against pre-trained voiceprint models; if the collected voice matches a certain pre-trained voiceprint model, the matching is successful, and a payment operation is triggered. When the payment succeeds, payment success information can be provided; if the payment fails for some reason, payment failure information can be provided to prompt the user, for example, that the account balance is insufficient and the payment cannot be completed.
The other mode is that a voiceprint model corresponding to the equipment identifier or a voiceprint model corresponding to the user identifier is selected from a plurality of voiceprint models trained in advance according to the equipment identifier or the user identifier of the client; the user's speech is then matched to the selected voiceprint model.
In this way, the user's voice may be understood as all speech uttered by the user when using the VR device, which helps ensure the reliability of the matching result; alternatively, it may be only the speech uttered by the user after viewing the "information content related to order confirmation", which reduces the amount of matching computation.
Compared with traversing all models for matching, this method can greatly improve the matching efficiency. The device identifier refers to a unique identifier of the client, such as a client ID, and the user identifier refers to a unique identifier capable of identifying the user, such as a user name or user number.
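The identifier-keyed model lookup can be sketched in a few lines; the dictionary store and the preference for the user identifier over the device identifier are assumptions for illustration.

```python
def select_voiceprint_model(models, device_id=None, user_id=None):
    """Pick the voiceprint model keyed by the client's device identifier
    or the user identifier, instead of traversing all trained models.
    models: {identifier: voiceprint_model}"""
    key = user_id if user_id is not None else device_id
    if key is None:
        raise ValueError("either device_id or user_id is required")
    return models.get(key)  # None if no model was trained for this identifier
```

The returned model is then the only one the collected speech needs to be matched against, which is where the efficiency gain over exhaustive matching comes from.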
In this embodiment of the application, the triggering of the payment operation may be understood as that the client may trigger a locally configured APP for completing a payment service, so that the APP completes payment according to the relevant payment account information pre-registered by the user, for example, funds of a user account are transferred to a merchant account. The triggering payment operation may also be understood as that the client triggers a server for completing the payment service, and the server completes payment according to the related payment account information registered in advance by the user. The user's relevant payment account information may include: user bank account and password, user virtual resource account and password, etc. After completing the payment, the client may also provide payment result information to inform the user that the payment operation succeeded or failed.
In order to support the verification of the identity validity of the user, in the embodiment of the present application, the voiceprint model of the user needs to be obtained by training in advance, and when the voiceprint model is implemented, the training of the voiceprint model can be completed in advance through the following training modes, where the training modes include:
collecting a plurality of voices of a user;
extracting a respective MFCC cepstrum coefficient for each of the plurality of voices;
and training according to the MFCC cepstrum coefficients to obtain a voiceprint model corresponding to the user.
During actual training, a plurality of voices of the user can be acquired offline; the more voices there are, the more reliable the trained model. In implementation, the user can record his or her own voice through a mobile phone or another terminal and upload the recording to a voice acquisition system; of course, the user can also record directly at a recording interface provided by the voice acquisition system. The voice acquisition system first preprocesses the received voice, mainly to remove non-speech and silent signals, and frames the voice signal, providing a data basis for subsequent model training. Before training, the MFCC (Mel Frequency Cepstrum Coefficient) parameters of each frame of the speech signal are extracted. The Mel frequency is derived from human auditory characteristics and has a nonlinear correspondence with the Hz frequency; the MFCC is the spectral feature calculated by using this relationship. Since each person's sound spectrum is unique, the MFCC can uniquely identify the user's sound features. During the training of the voiceprint model, a Gaussian mixture model can be adopted: the Gaussian mixture model is trained using the voice uploaded by the user and the corresponding MFCC parameters, yielding a voiceprint model dedicated to that user.
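The pipeline above (MFCC frames in, per-user model out, score at match time) is outlined below with a deliberately simplified stand-in: the "model" is just the centroid of the user's MFCC frames and scoring is negative Euclidean distance, not an actual Gaussian mixture model. Real MFCC extraction and GMM fitting (e.g. via a speech-features library and an EM-trained mixture) are assumed to replace these stubs.

```python
import math

def train_voiceprint_model(feature_frames):
    """Simplified stand-in for GMM training: the 'model' is the
    per-dimension mean of the user's MFCC frames."""
    dims, n = len(feature_frames[0]), len(feature_frames)
    return [sum(f[d] for f in feature_frames) / n for d in range(dims)]

def frame_score(model, frame):
    """Higher is better: negative Euclidean distance to the centroid
    (a stand-in for a GMM log-likelihood)."""
    return -math.sqrt(sum((m - x) ** 2 for m, x in zip(model, frame)))

def matches(model, frames, threshold=-1.0):
    """Declare a match when the average frame score exceeds the threshold."""
    avg = sum(frame_score(model, f) for f in frames) / len(frames)
    return avg >= threshold
```

The structure mirrors the described training mode: collect frames per user, train one model per user, then at payment time score the captured speech against the selected model and trigger payment only on a match.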
It should be noted that, in order to enable the pre-trained voiceprint model to truly represent the actual voiceprint features of the user, the embodiment of the present application further provides a method for optimizing the voiceprint model.
During specific implementation, voice of a user in a virtual reality scene is collected, and the collected voice is used for optimizing a voiceprint model of the user. The optimization method does not need to collect the voice of the user through a special channel, and only the voice of the user in the virtual reality scene is used for optimizing the voiceprint model, so that the method is simple and feasible, and the reliability of the optimization effect is higher.
In the embodiment of the present application, only the voiceprint model of a specific user may be trained in advance, and then only the user is served, and of course, in order to serve multiple users, a voiceprint model dedicated to an independent user may also be trained in advance for each user in multiple users.
Therefore, by the embodiment of the present application, the internal space environment of a shop object can be provided in a virtual reality manner, including a plurality of interactive data objects. The user's shopping intention can be determined through voice, and the payment intention can further be verified through the user's voiceprint features, ensuring the safety of the payment operation. No external input device such as a handle is needed in the whole process, which simplifies user operation and improves the user's shopping and payment experience.
In practical applications, some preparatory work may be performed before step 201. For example, taking a "mobile" virtual reality device (e.g., a VR glasses box) as an example, the client provided in the embodiment of the present application may be installed in a mobile terminal device such as a mobile phone, and the mobile phone is then placed in the VR glasses box. In addition, there may be some prior configuration operations, such as login and entering a receiving address. In this embodiment, to ensure that the user can shop smoothly after wearing the VR glasses box, such comparatively complicated configuration operations may be performed before the glasses are worn, and may include: login, adding a shipping address, payment account information, and so on. For example, when the user views the Double 11 venue page through the mobile Taobao client on the mobile phone, if the page includes a virtual reality entry, the user can enter through it; at this time, it can be judged whether the user has logged in and whether a receiving address has been added, and if not, the user can be guided to log in and add a receiving address. In addition, the user can be reminded to place the mobile phone horizontally, and preset small animations can be displayed. It should be noted that, in practical applications, besides determining whether the user has logged in, it may also be determined whether the model of the mobile terminal device used by the user is supported, and a prompt can be given if not. It can also be judged whether the current client version is supported, and if not, the user can be guided to upgrade the client version, and so on. As the last step of this process, the mobile phone is placed into the VR glasses box.
Since there may be multiple shop objects displayed in a virtual reality manner, after the mobile phone is placed in the VR glasses box, the user can first enter a navigation space, that is, a space environment for selecting shop objects. This is equivalent to the home page of a website, except that the navigation space here is three-dimensional, and the user can explore it by rotating the head up, down, left, and right. In a specific implementation, as shown in fig. 5-1, the initial line-of-sight position in the navigation space may be a wall of photos, where each dynamic photo on the wall represents a store in a street in a city in a country in the world; the content shown in a dynamic photo may also be a real recorded video of the store, for example including the street view outside the store.
To enable the user to select a target store object from the navigation space, an operation layer may be added between the environment layer and the screen layer of the navigation space. The environment layer can be understood as the virtual reality scene, the operation layer as a GUI, and the screen layer as the client control. As shown in fig. 5-2, "bounding boxes" corresponding to the positions and sizes of the dynamic photos of the respective shop objects may be set in the operation layer, and these "bounding boxes" may serve as the interaction area ranges corresponding to the shop objects, so that the user can trigger entry into the internal space environment of a certain shop object by focusing the line of sight within the corresponding interaction area range and staying there for a while.
After entry into the target shop object is triggered, the client may request the virtual reality space environment data of the target shop object from the server and perform operations such as parsing and rendering, so some time may elapse between the moment entry is triggered and the moment the internal space environment interface of the target shop is actually displayed. In the embodiment of the present application, during this time, the navigation space environment can be switched to a live video associated with the selected shop object, where the live video shows the scenes one would see on the way to the offline physical store corresponding to the selected shop object. For example, a "space-time shuttle" effect may be presented, in which the user feels as though leaving the current navigation space, boarding a vehicle, arriving at the street outside a particular store, and finally entering the store door, and so forth. Since the offline physical stores corresponding to different store objects have different geographic locations, each can correspond to a different live video, giving users a more realistic experience.
Example two
The second embodiment is corresponding to the first embodiment, and introduces the specific scheme provided by the embodiment of the present application from the perspective of the server.
Specifically, referring to fig. 6, the second embodiment provides a data object interaction method in a virtual reality space environment, where the method may include the following steps:
step 601: the method comprises the steps that a server stores virtual reality shop object internal space environment data, wherein the shop object internal space environment data comprise at least one interactive data object, and the interactive data object corresponds to an interactive area range and is associated with preset information content;
in specific implementation, the server can collect relevant video data and the like in the entity shop, perform processing operations such as labeling and the like, generate and store virtual reality shop object internal space environment data.
Step 602: providing the virtual reality shop object internal space environment data to a client so that the client can provide the virtual reality shop object internal space environment data and determine a target data object interested by a user; providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information and resource information to be paid associated with the target data object; and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
The server can directly provide the generated virtual reality shop object internal space environment data to the client, or can provide the related data to the client when receiving the request of the client, the client can provide the virtual reality space environment after obtaining the corresponding data, and the interaction with the user is realized based on the environment, so that the complete shopping process can be realized.
For other specific implementations in the second embodiment, reference may be made to the description in the first embodiment, and details are not described here.
Embodiment Three
The foregoing embodiments provide various specific solutions based on virtual reality technology; in specific implementation, the solutions can also be implemented based on augmented reality technology. The difference between virtual reality and augmented reality technology can be understood simply as follows. In a virtual reality environment, the content displayed by the "environment layer" is a virtual environment provided in advance, for example by simulating the real space or by shooting video of it, and the information of the "operation layer", including the marking of the operation ranges of interactable objects and the display of interaction response information, is realized based on the content displayed in that virtual environment layer. In an augmented reality space environment, by contrast, the content displayed by the environment layer is the actual content of the physical space, and the information in the operation layer, such as the annotation information marking the operation area ranges of the goods, can be annotated based on a pre-established three-dimensional space model. During actual display, the physical space is spatially matched with the three-dimensional space model, positioning is performed through the computer-vision capability of the AR device, and the annotation information and the like stored in the three-dimensional space model are displayed within the field of view of the AR device. The content the user actually sees through the AR device therefore comprises the environment of the physical space together with the augmented information superimposed on it.
Therefore, in the embodiments of the present application, if a corresponding technical solution is provided based on augmented reality technology, a three-dimensional space model can be created in advance for the offline physical store, and information interaction with the user can then be realized based on that model.
For example, in one mode, a worker may wear an AR device such as AR glasses, enter the internal space of the physical store, and walk around in the store. Since the AR device is provided with sensing devices such as a camera, the store and its internal layout can be scanned with those sensing devices while walking. After the scan results are obtained, they can be imported into a development environment. Such a development environment generally supports annotation of scan results, so the scan results can be annotated by the worker. For example, if the store includes a plurality of shelves, each used to place specific goods, the number of each shelf and the data object identification information (e.g., an item ID) corresponding to each item can be annotated. After the annotation is completed, the system can store the number of each shelf and the corresponding goods information, and can automatically generate and store information such as the position coordinates of each shelf and of the goods, thereby generating the three-dimensional space model corresponding to the store. In addition, for information such as data object details that needs to be provided while interacting with the user, a data object detail information database can be established in advance; during annotation, the goods can also be annotated with the IDs or other identifiers of the corresponding data objects in that database, so that the goods in the augmented reality space are associated with the data object detail information stored in the database.
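The annotation result described above can be sketched as a small record type linking a shelf to its position and to the data object IDs that index the detail database. The schema and names below are hypothetical; the patent specifies only that shelf numbers, item IDs, and position coordinates are stored.

```python
from dataclasses import dataclass, field

@dataclass
class ShelfAnnotation:
    """One annotated shelf in the scanned store model (hypothetical schema)."""
    shelf_no: str                                 # shelf number assigned during annotation
    position: tuple                               # (x, y, z) coordinates in the 3-D model
    item_ids: list = field(default_factory=list)  # data object identifiers (e.g. item IDs)

# Detail-information database keyed by data object ID, so goods in the
# augmented reality space can be associated with stored detail records.
detail_db = {
    "item-001": {"name": "sample item", "price": 19.9},
}

def details_for_shelf(shelf, db):
    """Look up the detail record of every item annotated on a shelf."""
    return [db[i] for i in shelf.item_ids if i in db]

shelf = ShelfAnnotation(shelf_no="A-01", position=(1.2, 0.0, 3.5), item_ids=["item-001"])
```

Keying the annotations by data object ID rather than embedding the details keeps the 3-D model small and lets the detail database be updated independently.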
In specific implementation, in order to realize the interaction functions provided in the embodiments of the present application, a relevant client (an application program or a relevant functional module) may be implemented in advance at the software level. The client may be used with the AR device in a variety of forms. Specifically, for an integrated AR device (that is, an AR device that independently undertakes tasks such as screen display, computation, and storage), the client may be installed directly in the AR device, so that the AR device has the interaction functions of the embodiments of the present application. Alternatively, for a mobile AR device, which usually only undertakes the screen display task and needs to be connected to a mobile terminal device such as a mobile phone when in use, the client may be installed in the mobile terminal device; after the mobile terminal device with the client installed is placed in the AR device, the AR device likewise has the interaction functions described in the embodiments of the present application. In addition, after the three-dimensional space model is generated, it can be stored directly in the terminal device where the client is located, or it can be stored in the server and downloaded by the client for the current store when interaction is needed. In short, whether the client is installed directly in the AR device or in the mobile terminal device, the AR device can implement the specific interaction based on the client in combination with the generated three-dimensional space model.
Specifically, referring to fig. 7, a third embodiment provides a data object interaction method in an augmented reality space environment, where the method may include the following steps:
step 701: a client obtains a three-dimensional space model of the internal space environment of the entity shop; the physical store interior including a plurality of items;
step 702: after the three-dimensional space model is spatially matched with the physical store through a preset augmented reality (AR) device, providing interaction area range information corresponding to the goods within the field of view of the user of the AR device;
When a user such as a consumer needs to go to a physical store for shopping, the user can wear the relevant AR device (an integrated AR device, or a mobile AR device connected with a mobile terminal device), enter the physical store, and then start the relevant client. After the client is started, initialization processing may be performed first, which may specifically include spatially matching the three-dimensional space model with the physical store. Through spatial matching, the three-dimensional space model is aligned with the actual physical store so that each position in the three-dimensional space corresponds to a position and direction in the physical store, which enables the "augmented" information to be displayed accurately at the position of the corresponding item within the field of view.
For example, in one mode, some feature points, for example the four corner positions of the space, may be stored in the three-dimensional space model in advance. After a consumer wearing an AR device such as AR glasses enters the physical store and starts the application program, the consumer may look around the space once while wearing the glasses; the sensing devices of the AR glasses scan the space, and the stored feature points are then matched against the corresponding points at their actual positions in the scan result. This determines the position, direction, and so on of the three-dimensional space model and completes the spatial matching, so that finally each point in the three-dimensional space model is consistent with the corresponding actual position and direction in the physical store. Of course, in specific implementation, the spatial matching may also be realized in other ways, for example automatically, which are not described in detail here.
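The feature-point matching step can be viewed as a rigid alignment problem: given the corner points stored in the model and the corresponding points found in the scan, recover the rotation and translation that map one onto the other. The sketch below uses the Kabsch algorithm and assumes the point correspondences are already known; the function name is illustrative, as the patent does not prescribe an algorithm.

```python
import numpy as np

def align_model_to_store(model_pts, scanned_pts):
    """Rigidly align stored feature points (e.g. the four corners of the
    space) in the 3-D model to the corresponding points found in the scan.
    Returns rotation R and translation t with scanned ~= model @ R.T + t
    (Kabsch algorithm on corresponding point sets)."""
    p_mean = model_pts.mean(axis=0)
    q_mean = scanned_pts.mean(axis=0)
    P = model_pts - p_mean           # centered model points
    Q = scanned_pts - q_mean         # centered scanned points
    H = P.T @ Q                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

In practice the scan points carry noise and the correspondence itself must be established (e.g. by nearest-neighbor search), but the least-squares alignment step is the same.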
After the spatial matching is completed, the interaction area information corresponding to the items can be provided within the field of view of the user of the AR device; for example, operable areas of the items can be shown by adding an "operation layer" to the field of view, such as displaying a "blue dot" at the position of each item, and so on.
Step 703: determining a target item of interest to a user;
step 704: providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information associated with the target goods and resource information to be paid;
step 705: and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
After the interaction area information corresponding to the goods is provided, the subsequent specific interaction process, including determining the target item of interest to the user, providing various kinds of interaction response information, providing various types of operation controls, and the improvements of the interaction mode over the prior art, may be similar to the implementation in the virtual reality space, and is not described here again.
Embodiment Four
The fourth embodiment describes, from the perspective of the server, the solution corresponding to the third embodiment. Specifically, referring to fig. 8, the fourth embodiment provides a method for data object interaction in an augmented reality space environment, where the method may include:
step 801: the method comprises the steps that a server stores a three-dimensional space model of a space environment inside a physical shop, wherein the interior of the physical shop comprises a plurality of goods, interaction area range information corresponding to the goods, and interaction response information of data objects related to the interaction area ranges;
step 802: providing the three-dimensional space model to a client, so that the client provides interaction area range information corresponding to the goods in the user vision range of an AR device after the client performs space matching on the three-dimensional space model and the entity shop through a preset augmented reality AR device, and determines target goods which are interested by a user; providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information associated with the target goods and resource information to be paid; and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
The server can store the three-dimensional space model of the internal space environment of the physical store and provide it to the client directly, or provide the related data when it receives a request from the client. After obtaining the corresponding data, the client can present the augmented reality space environment and interact with the user based on that environment, so that a complete shopping process can be realized.
Regarding the third and fourth embodiments, details of implementation related to the information interaction process, including improvement points and the like in the prior art, can be referred to the descriptions in the foregoing embodiments, and are not described herein again.
Embodiment Five
The fifth embodiment is an apparatus corresponding to the client-side method of the first embodiment. Referring to fig. 9, this embodiment provides an apparatus for data object interaction in a virtual reality space environment, applied to a client, where the apparatus includes:
a virtual reality environment providing module 901, configured to provide a virtual reality shop object internal space environment, where the shop object internal space environment includes at least one interactable data object;
a target data object determination module 902 for determining a target data object of interest to a user;
an order information providing module 903, configured to provide information content related to order confirmation, where the information content related to order confirmation includes: attribute information and resource information to be paid associated with the target data object;
and a matching module 904, configured to match the voice of the user with a pre-trained voiceprint model, and trigger a payment operation when the matching is successful.
In this embodiment of the present application, optionally, the apparatus may further include:
and the order determining module is used for determining that the user completes the confirmation of the order information.
Further, optionally, the order determination module may include:
the verification content providing submodule is used for providing verification content when providing information content related to order confirmation, and the verification content is generated according to at least part of attribute information of the target data object;
and the verification content identification submodule is used for identifying whether the voice of the user contains the verification content or not, and if so, determining that the user completes the confirmation of the order information.
Optionally, the matching module includes: and the matching sub-module is used for matching the voice containing the verification content sent by the user with the pre-trained voiceprint model.
Further, optionally, the order determination module may include:
the verification code providing sub-module is used for providing a verification code when providing information content related to order confirmation;
and the check code identification submodule is used for identifying whether the voice of the user contains the check code, and if so, determining that the user completes the confirmation of the order information.
Optionally, the check code is randomly generated by the client.
Optionally, the matching module includes: and the matching sub-module is used for matching the voice containing the check code sent by the user with the pre-trained voiceprint model.
Optionally, the check code identification sub-module may be further configured to trigger the check code providing sub-module when it is identified that the voice of the user does not contain the check code, so that the check code providing sub-module provides another check code; the check code identification sub-module then identifies whether the voice of the user contains the other check code, and if so, determines that the user has completed the confirmation of the order information.
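A minimal sketch of the check-code flow described above, assuming the speech recognizer returns a plain text transcript. Helper names are hypothetical; the patent only requires that the client randomly generate a check code and test whether the user's voice contains it.

```python
import random
import re

def make_check_code(n=4):
    """Randomly generate an n-digit check code on the client side."""
    return "".join(random.choice("0123456789") for _ in range(n))

def speech_contains_code(recognized_text, code):
    """True if the recognized speech contains the check code.
    Whitespace is stripped because a recognizer may transcribe spoken
    digits with spaces between them ('1 2 3 4')."""
    return code in re.sub(r"\s+", "", recognized_text)
```

On a mismatch, the retry described above would call `make_check_code` again, present the new code, and re-run the containment test on the next utterance.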
Optionally, the order determining module may include:
the sight focus position determining submodule is used for determining the position of the sight focus of the user;
and the order determining submodule is used for determining that the user completes the order information confirmation when the sight line focus position enters the range of the interaction area of the information content relevant to the order confirmation and the staying time reaches a preset time threshold.
Optionally, the order determining module may include:
and the voice recognition submodule is used for recognizing whether the voice of the user contains the content capable of representing the meaning of the order confirmation, and if so, determining that the user completes the order information confirmation.
Optionally, the target data object determining module may include:
the sight focus position determining submodule is used for determining the position of the sight focus of the user;
and the target data object determining submodule is used for determining the data object as a target data object which is interested by the user when the sight line focus position enters the range of the interaction area of a certain data object and the staying time reaches a preset time threshold.
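The gaze-dwell selection used by this sub-module (and by the order determining sub-module above) can be sketched as a small state tracker: report an object as the target once the gaze focus has stayed inside its interaction area for long enough. The 1.5-second threshold is an assumed value, since the patent only specifies a "preset time threshold".

```python
DWELL_THRESHOLD = 1.5  # seconds; preset time threshold (assumed value)

class DwellSelector:
    """Track how long the user's gaze focus stays inside a data object's
    interaction area; report the object as the target of interest once
    the stay time reaches the preset threshold."""

    def __init__(self, threshold=DWELL_THRESHOLD):
        self.threshold = threshold
        self.current = None      # interaction area the gaze is currently in
        self.entered_at = None   # timestamp when that area was entered

    def update(self, object_id, now):
        """object_id: the interaction area containing the gaze focus
        (None if the gaze is in no area); now: current time in seconds.
        Returns the selected object id, or None if nothing is selected yet."""
        if object_id != self.current:
            # gaze moved to a different area (or left all areas): restart timer
            self.current = object_id
            self.entered_at = now
            return None
        if object_id is not None and now - self.entered_at >= self.threshold:
            return object_id     # dwelled long enough: this is the target
        return None
```

The same tracker serves order confirmation by treating the confirmation panel's interaction area as just another `object_id`.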
Optionally, the target data object determining module may include:
and the voice recognition sub-module is used for recognizing whether the voice of the user contains the information content associated with a certain data object in the interactive data objects, and if so, taking the data object as a target data object interested by the user.
Optionally, the apparatus further comprises:
and the prompting module is used for providing a prompt indicating that invalid information content has been selected, when it is identified that the voice of the user does not contain information content associated with any of the interactable data objects.
Optionally, the matching module may include:
the voiceprint model selection submodule is used for selecting a voiceprint model corresponding to the equipment identifier or selecting a voiceprint model corresponding to the user identifier from a plurality of pre-trained voiceprint models according to the equipment identifier or the user identifier of the client;
and the matching sub-module is used for matching the voice of the user with the selected voiceprint model.
Optionally, the apparatus further comprises: a voiceprint model training module for training a voiceprint model,
the voiceprint model training module comprises:
the voice acquisition submodule is used for acquiring a plurality of voices of the user;
an extraction sub-module for extracting respective MFCC (Mel-frequency cepstral coefficient) features for each of the plurality of voices;
and the training submodule is used for training according to the MFCC features to obtain a voiceprint model corresponding to the user.
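To make the training pipeline concrete, the toy sketch below stands in for the MFCC extraction, training, and matching steps using plain NumPy. It is an illustration only: real MFCC extraction applies a mel filterbank (e.g. via librosa or python_speech_features), and a statistical model such as a GMM would normally replace the mean feature vector used here.

```python
import numpy as np

def toy_cepstral_features(signal, frame_len=256, n_coeff=13):
    """Very simplified stand-in for MFCC extraction: frame the signal,
    take a log-magnitude spectrum per frame, and keep the first cepstral
    coefficients. (Real MFCCs also apply a mel filterbank.)"""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-8)
    cepstra = np.fft.irfft(spectra, axis=1)[:, :n_coeff]
    return cepstra.mean(axis=0)  # one feature vector per utterance

def train_voiceprint(utterances):
    """'Train' a voiceprint as the mean feature vector over several
    utterances by the same user (a GMM would be used in practice)."""
    return np.mean([toy_cepstral_features(u) for u in utterances], axis=0)

def matches(voice, model, threshold=0.9):
    """Match a new utterance against the voiceprint by cosine similarity."""
    f = toy_cepstral_features(voice)
    sim = f @ model / (np.linalg.norm(f) * np.linalg.norm(model) + 1e-12)
    return sim >= threshold
```

The optimization module described below would fold each successfully matched utterance back into the model, e.g. by updating the stored mean.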
Optionally, the apparatus further comprises: and the voiceprint model optimization module is used for optimizing the voiceprint model of the user in the pre-trained voiceprint model by using the voice sent by the user.
For the working principle of the apparatus and the specific implementation of each module and sub-module in the fifth embodiment, reference may be made to the description in the first embodiment, and details are not described here.
Embodiment Six
The sixth embodiment is an apparatus corresponding to the server-side method of the second embodiment. Referring to fig. 10, this embodiment provides a data object interaction apparatus in a virtual reality space environment, applied to a server, where the apparatus includes:
an environment data storage module 1001, configured to store virtual reality shop object internal space environment data, where the shop object internal space environment data includes at least one interactable data object, and the interactable data object corresponds to an interaction area range and is associated with preset information content;
an environment data providing module 1002, configured to provide the virtual reality shop object internal space environment data to a client, so that the client provides the virtual reality shop object internal space environment data, and determines a target data object interested by a user; providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information and resource information to be paid associated with the target data object; and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
For the working principle of the apparatus and the specific implementation of each module in the sixth embodiment, reference may be made to the descriptions in the first and second embodiments, and details are not described here.
Embodiment Seven
The seventh embodiment is an apparatus corresponding to the client-side method of the third embodiment. Referring to fig. 11, this embodiment provides a data object interaction apparatus in an augmented reality space environment, applied to a client, where the apparatus includes:
a space model obtaining module 1101, configured to obtain a three-dimensional space model of an internal space environment of the physical store; the physical store interior including a plurality of items;
a spatial model matching module 1102, configured to provide, after the three-dimensional space model is spatially matched with the physical store through a preset augmented reality (AR) device, interaction area range information corresponding to the goods within the field of view of the user of the AR device;
a target item determination module 1103 for determining a target item of interest to the user;
an order information providing module 1104, configured to provide information content related to order confirmation, where the information content related to order confirmation includes: attribute information associated with the target goods and resource information to be paid;
and a matching module 1105, configured to match the voice of the user with a pre-trained voiceprint model, and trigger a payment operation when the matching is successful.
For the working principle of the apparatus and the specific implementation of each module in the seventh embodiment, reference may be made to the description in the third embodiment, and details are not described here.
Embodiment Eight
The eighth embodiment is an apparatus corresponding to the server-side method of the fourth embodiment. Referring to fig. 12, this embodiment provides an apparatus for data object interaction in an augmented reality space environment, applied to a server, where the apparatus includes:
the storage module 1201 is used for storing a three-dimensional space model of a space environment inside the physical store, wherein the interior of the physical store comprises a plurality of goods, interaction area range information corresponding to the goods, and interaction response information of data objects associated with the interaction area ranges;
a space model providing module 1202, configured to provide the three-dimensional space model to a client, so that the client provides interaction area range information corresponding to the item within a user view range of an AR device after performing space matching on the three-dimensional space model and the entity store through a preset augmented reality AR device, and determines a target item interested by a user; providing information content related to order confirmation, the information content related to order confirmation comprising: attribute information associated with the target goods and resource information to be paid; and matching the voice of the user with the pre-trained voiceprint model, and triggering payment operation when the matching is successful.
For the working principle of the apparatus and the specific implementation of each module in the eighth embodiment, reference may be made to the description in the fourth embodiment, and details are not described here.
The main technical ideas of the various specific solutions provided based on the augmented reality technology and the various specific solutions provided based on the virtual reality technology are the same, and can be referred to each other.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it is further noted that, herein, relational terms such as first, second, third, fourth, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The data object interaction method and device in a virtual reality space environment and the data object interaction method and device in an augmented reality space environment provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (22)

1. A method of data object interaction in a virtual reality space environment, the method comprising:
providing a virtual reality shop object inner space environment by a client, wherein the shop object inner space environment comprises at least one interactive data object; executing the following steps under a virtual reality scene:
determining a target data object of interest to a user through interaction between the user and the data object; the interaction comprises interaction of a user gaze focus position or voice;
providing a first information panel and providing information content related to order confirmation on the first information panel, wherein the information content related to order confirmation comprises: attribute information and resource information to be paid associated with the target data object;
after confirming that the user has finished confirming the order information, matching the voice of the user with a pre-trained voiceprint model, and triggering payment operation when the matching is successful;
the information content related to order confirmation further comprises: the first operation control is used for modifying the order related information;
and when the sight focus of the user enters the area range where the first operation control is located and the staying time reaches a preset time threshold, providing a second information panel, and providing content for modifying the order related information in the second information panel.
2. The data object interaction method of claim 1, wherein prior to said matching the user's speech with a pre-trained voiceprint model, the method further comprises:
it is determined that the user has completed confirmation of the order information.
3. The data object interaction method of claim 2, wherein said determining that the user has completed confirmation of order information comprises:
when providing information content related to order confirmation, providing verification content, wherein the verification content is generated according to at least part of attribute information of the target data object;
and identifying whether the voice of the user contains the verification content, and if so, determining that the user completes the confirmation of the order information.
4. The data object interaction method of claim 2, wherein said determining that the user has completed confirmation of order information comprises:
when providing the information content related to order confirmation, providing a check code;
and identifying whether the voice of the user contains the check code, and if so, determining that the user completes the confirmation of the order information.
5. The data object interaction method of claim 4, wherein the check code is randomly generated by the client.
6. The data object interaction method of claim 4, wherein the matching of the user's speech to a pre-trained voiceprint model comprises:
and matching the voice containing the check code sent by the user with the pre-trained voiceprint model.
7. The data object interaction method of claim 6, wherein when the check code is not included in the voice of the recognized user, the method further comprises:
and providing another check code, identifying whether the voice of the user contains the other check code, and if so, determining that the user completes the confirmation of the order information.
8. The data object interaction method of claim 2, wherein said determining that the user has completed confirmation of order information comprises:
determining a user sight focus position;
and when the sight focus position enters the range of the interaction area of the information content relevant to order confirmation and the staying time reaches a preset time threshold value, determining that the user has completed the confirmation of the order information.
9. The data object interaction method of claim 2, wherein said determining that the user has completed confirmation of order information comprises:
and identifying whether the voice of the user contains the content capable of representing the meaning of order confirmation, and if so, determining that the user has completed the order information confirmation.
10. The data object interaction method of claim 1, wherein the determining a target data object of interest to the user comprises:
determining a user sight focus position;
and when the sight focus position enters the range of the interaction area of a certain data object and the staying time reaches a preset time threshold, determining the data object as a target data object interested by the user.
11. The data object interaction method of claim 1, wherein the determining a target data object of interest to the user comprises:
and identifying whether the voice of the user contains information content associated with a certain data object in the interactive data objects, and if so, taking the data object as a target data object interested by the user.
12. The data object interaction method of claim 11, further comprising:
when the voice of the user is identified not to contain the information content associated with one of the interactable data objects, a prompt is provided for prompting the user to select invalid information content.
13. The data object interaction method of claim 1, wherein the matching of the user's speech to a pre-trained voiceprint model comprises:
selecting a voiceprint model corresponding to the equipment identifier or selecting a voiceprint model corresponding to the user identifier from a plurality of voiceprint models trained in advance according to the equipment identifier or the user identifier of the client;
the user's speech is matched to the selected voiceprint model.
14. The data object interaction method of claim 1, further comprising: pre-training to obtain a voiceprint model by the following method:
collecting a plurality of voices of a user;
extracting a respective MFCC cepstrum coefficient for each of the plurality of voices;
and training according to the MFCC cepstrum coefficients to obtain a voiceprint model corresponding to the user.
15. The data object interaction method of claim 1, further comprising:
and optimizing the user's voiceprint model among the pre-trained voiceprint models by using the voice uttered by the user.
16. A method of data object interaction in a virtual reality space environment, the method comprising:
a server stores virtual reality shop object internal space environment data, wherein the shop object internal space environment data comprises at least one interactive data object, and the interactive data object corresponds to an interaction area range and is associated with preset information content; the following steps are executed in a virtual reality scene:
providing the virtual reality shop object internal space environment data to a client, so that the client provides the virtual reality shop object internal space environment data and determines a target data object of interest to the user through interaction between the user and the data object; the interaction comprises interaction by the user's gaze focus position or voice; providing a first information panel and providing, on the first information panel, information content related to order confirmation, wherein the information content related to order confirmation comprises: attribute information associated with the target data object and resource information to be paid; after confirming that the user has completed order information confirmation, matching the user's voice against a pre-trained voiceprint model, and triggering a payment operation when the matching succeeds;
the information content related to order confirmation further comprises: a first operation control for modifying the order-related information;
and when the user's gaze focus enters the area where the first operation control is located and the dwell time reaches a preset time threshold, providing a second information panel, and providing content for modifying the order-related information in the second information panel.
17. A method of data object interaction in an augmented reality space environment, the method comprising:
a client obtains a three-dimensional space model of the internal space environment of a physical shop, the shop interior including a plurality of goods;
after spatially matching the three-dimensional space model with the physical shop through a preset augmented reality (AR) device, providing interaction area range information corresponding to the goods within the field of view of a user of the AR device; the following steps are executed in the augmented reality scene:
determining a target item of interest to the user through interaction between the user and the data object; the interaction comprises interaction by the user's gaze focus position or voice;
providing a first information panel and providing, on the first information panel, information content related to order confirmation, wherein the information content related to order confirmation comprises: attribute information associated with the target goods and resource information to be paid; after confirming that the user has completed order information confirmation, matching the user's voice against a pre-trained voiceprint model, and triggering a payment operation when the matching succeeds;
the information content related to order confirmation further comprises: a first operation control for modifying the order-related information;
and when the user's gaze focus enters the area where the first operation control is located and the dwell time reaches a preset time threshold, providing a second information panel, and providing content for modifying the order-related information in the second information panel.
18. A method of data object interaction in an augmented reality space environment, the method comprising:
a server stores a three-dimensional space model of the internal space environment of a physical shop, the shop interior including a plurality of goods, interaction area range information corresponding to the goods, and interaction response information of data objects associated with the interaction area ranges;
providing the three-dimensional space model to a client, so that after spatially matching the three-dimensional space model with the physical shop through a preset AR device, the client provides interaction area range information corresponding to the goods within the field of view of a user of the AR device; the following steps are executed in the augmented reality scene: determining a target item of interest to the user through interaction between the user and the data object; the interaction comprises interaction by the user's gaze focus position or voice; providing a first information panel and providing, on the first information panel, information content related to order confirmation, wherein the information content related to order confirmation comprises: attribute information associated with the target goods and resource information to be paid; after confirming that the user has completed order information confirmation, matching the user's voice against a pre-trained voiceprint model, and triggering a payment operation when the matching succeeds;
the information content related to order confirmation further comprises: a first operation control for modifying the order-related information;
and when the user's gaze focus enters the area where the first operation control is located and the dwell time reaches a preset time threshold, providing a second information panel, and providing content for modifying the order-related information in the second information panel.
19. An apparatus for data object interaction in a virtual reality space environment, the apparatus comprising:
the virtual reality environment providing module is used for providing a virtual reality shop object internal space environment, and the shop object internal space environment comprises at least one interactive data object;
the target data object determining module is used for determining, in a virtual reality scene, a target data object of interest to the user through interaction between the user and the data object; the interaction comprises interaction by the user's gaze focus position or voice;
the order information providing module is used for providing a first information panel in the virtual reality scene, and providing, on the first information panel, information content related to order confirmation, wherein the information content related to order confirmation comprises: attribute information associated with the target data object and resource information to be paid;
the matching module is used for matching the user's voice against a pre-trained voiceprint model after confirming, in the virtual reality scene, that the user has completed order information confirmation, and triggering a payment operation when the matching succeeds;
the information content related to order confirmation further comprises: a first operation control for modifying the order-related information;
and when the user's gaze focus enters the area where the first operation control is located and the dwell time reaches a preset time threshold, providing a second information panel, and providing content for modifying the order-related information in the second information panel.
20. An apparatus for data object interaction in a virtual reality space environment, the apparatus comprising:
the environment data storage module is used for storing virtual reality shop object internal space environment data, wherein the shop object internal space environment data comprises at least one interactive data object, and the interactive data object corresponds to an interaction area range and is associated with preset information content;
the environment data providing module is used for providing the virtual reality shop object internal space environment data to a client in a virtual reality scene, so that the client provides the virtual reality shop object internal space environment data and determines a target data object of interest to the user through interaction between the user and the data object; the interaction comprises interaction by the user's gaze focus position or voice; providing a first information panel and providing, on the first information panel, information content related to order confirmation, wherein the information content related to order confirmation comprises: attribute information associated with the target data object and resource information to be paid; after confirming that the user has completed order information confirmation, matching the user's voice against a pre-trained voiceprint model, and triggering a payment operation when the matching succeeds;
the information content related to order confirmation further comprises: a first operation control for modifying the order-related information;
and when the user's gaze focus enters the area where the first operation control is located and the dwell time reaches a preset time threshold, providing a second information panel, and providing content for modifying the order-related information in the second information panel.
21. An apparatus for data object interaction in an augmented reality space environment, the apparatus comprising:
the space model obtaining module is used for obtaining a three-dimensional space model of the internal space environment of a physical shop, the shop interior including a plurality of goods;
the space model matching module is used for providing interaction area range information corresponding to the goods within the field of view of a user of a preset augmented reality (AR) device, after spatially matching the three-dimensional space model with the physical shop through the AR device;
the target goods determining module is used for determining, in the augmented reality scene, target goods of interest to the user through interaction between the user and the data object; the interaction comprises interaction by the user's gaze focus position or voice;
the order information providing module is used for providing a first information panel in the augmented reality scene, and providing, on the first information panel, information content related to order confirmation, wherein the information content related to order confirmation comprises: attribute information associated with the target goods and resource information to be paid;
the matching module is used for matching the user's voice against a pre-trained voiceprint model after confirming, in the augmented reality scene, that the user has completed order information confirmation, and triggering a payment operation when the matching succeeds;
the information content related to order confirmation further comprises: a first operation control for modifying the order-related information;
and when the user's gaze focus enters the area where the first operation control is located and the dwell time reaches a preset time threshold, providing a second information panel, and providing content for modifying the order-related information in the second information panel.
22. An apparatus for data object interaction in an augmented reality space environment, the apparatus comprising:
the storage module is used for storing a three-dimensional space model of the internal space environment of a physical shop, the shop interior including a plurality of goods, interaction area range information corresponding to the goods, and interaction response information of data objects associated with the interaction area ranges;
the space model providing module is used for providing the three-dimensional space model to a client, so that after spatially matching the three-dimensional space model with the physical shop through a preset augmented reality (AR) device, the client provides interaction area range information corresponding to the goods within the field of view of a user of the AR device and, in the augmented reality scene, determines target goods of interest to the user through interaction between the user and a data object; the interaction comprises interaction by the user's gaze focus position or voice; provides a first information panel and provides, on the first information panel, information content related to order confirmation, wherein the information content related to order confirmation comprises: attribute information associated with the target goods and resource information to be paid; and, after confirming that the user has completed order information confirmation, matches the user's voice against a pre-trained voiceprint model and triggers a payment operation when the matching succeeds;
the information content related to order confirmation further comprises: a first operation control for modifying the order-related information;
and when the user's gaze focus enters the area where the first operation control is located and the dwell time reaches a preset time threshold, providing a second information panel, and providing content for modifying the order-related information in the second information panel.
CN201610921190.XA 2016-10-21 2016-10-21 Data object interaction method and device in virtual reality/augmented reality space environment Active CN107977834B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610921190.XA CN107977834B (en) 2016-10-21 2016-10-21 Data object interaction method and device in virtual reality/augmented reality space environment
PCT/CN2017/105359 WO2018072617A1 (en) 2016-10-21 2017-10-09 Method and device for interaction of data objects in virtual reality/augmented reality spatial environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610921190.XA CN107977834B (en) 2016-10-21 2016-10-21 Data object interaction method and device in virtual reality/augmented reality space environment

Publications (2)

Publication Number Publication Date
CN107977834A CN107977834A (en) 2018-05-01
CN107977834B true CN107977834B (en) 2022-03-18

Family

ID=62003814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610921190.XA Active CN107977834B (en) 2016-10-21 2016-10-21 Data object interaction method and device in virtual reality/augmented reality space environment

Country Status (2)

Country Link
CN (1) CN107977834B (en)
WO (1) WO2018072617A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108563395A (en) * 2018-05-07 2018-09-21 北京知道创宇信息技术有限公司 3D view-angle interaction method and apparatus
US10290049B1 (en) * 2018-06-27 2019-05-14 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for multi-user augmented reality shopping
US10482674B1 (en) * 2018-06-27 2019-11-19 Beijing Jingdong Shangke Information Technology Co., Ltd. System and method for mobile augmented reality
CN111179001B (en) * 2018-11-09 2023-07-04 阿里巴巴(深圳)技术有限公司 Content processing method, device and system
CN109583187B (en) * 2018-11-16 2022-11-18 中共中央办公厅电子科技学院 Augmented reality verification code method and application
CN109726954B (en) * 2018-12-11 2021-01-08 维沃移动通信有限公司 Information processing method and device and mobile terminal
CN109684490A (en) * 2018-12-26 2019-04-26 成都明图通科技有限公司 Indoor navigation and shopping guide method, device
CN110176097A (en) * 2019-04-29 2019-08-27 美佳亚太投资有限公司 A kind of vending system
CN110348198B (en) * 2019-06-21 2024-04-12 华为技术有限公司 Identity recognition method, related device and system of simulation object
CN111178847B (en) * 2019-12-31 2023-08-22 中国银行股份有限公司 Virtual banking site equipment, control device and working method based on VR technology
CN111243200A (en) * 2019-12-31 2020-06-05 维沃移动通信有限公司 Shopping method, wearable device and medium
CN111340598B (en) * 2020-03-20 2024-01-16 北京爱笔科技有限公司 Method and device for adding interactive labels
CN113298598A (en) * 2020-09-15 2021-08-24 阿里巴巴集团控股有限公司 Method and device for providing shop object information and electronic equipment
CN112162638B (en) * 2020-10-09 2023-09-19 咪咕视讯科技有限公司 Information processing method and server in Virtual Reality (VR) viewing
CN113298613A (en) * 2021-04-23 2021-08-24 阿里巴巴新加坡控股有限公司 Information interaction method and device
CN115578520A (en) * 2022-11-10 2023-01-06 一站发展(北京)云计算科技有限公司 Information processing method and system for immersive scene
CN117079651B (en) * 2023-10-08 2024-02-23 中国科学技术大学 Speech cross real-time enhancement implementation method based on large-scale language model

Citations (11)

Publication number Priority date Publication date Assignee Title
US20070192203A1 (en) * 2006-02-16 2007-08-16 Di Stefano Michael V Virtual reality shopping system
US20100149093A1 (en) * 2006-12-30 2010-06-17 Red Dot Square Solutions Limited Virtual reality system including viewer responsiveness to smart objects
CN103679452A (en) * 2013-06-20 2014-03-26 腾讯科技(深圳)有限公司 Payment authentication method, device thereof and system thereof
CN104217336A (en) * 2014-08-25 2014-12-17 四川敬天爱人科技有限公司 An intelligence life circle e-commerce system based on cloud service
US20150012394A1 (en) * 2013-07-02 2015-01-08 Avenue Imperial UK Limited Virtual Shopping System
CN104599184A (en) * 2014-12-09 2015-05-06 北京奇虎科技有限公司 Stock information pushing system
CN104616190A (en) * 2015-03-05 2015-05-13 广州新节奏智能科技有限公司 Multi-terminal 3D somatosensory shopping method and system based on internet and mobile internet
US20160019717A1 (en) * 2014-07-18 2016-01-21 Oracle International Corporation Retail space planning system
US20160140553A1 (en) * 2014-11-17 2016-05-19 Visa International Service Association Authentication and transactions in a three-dimensional image enhancing display device
US20160253745A1 (en) * 2015-02-26 2016-09-01 Staging Design Inc. Virtual shopping system and method utilizing virtual reality and augmented reality technology
CN105955471A (en) * 2016-04-26 2016-09-21 乐视控股(北京)有限公司 Virtual reality interaction method and device

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
WO2008081411A1 (en) * 2006-12-30 2008-07-10 Kimberly-Clark Worldwide, Inc. Virtual reality system including smart objects
CN102054247B (en) * 2009-11-04 2013-05-01 沈阳迅景科技有限公司 Method for building three-dimensional (3D) panoramic live-action network business platform
KR101746838B1 (en) * 2010-06-14 2017-06-27 주식회사 비즈모델라인 Method for Operating Augmented Reality by using Display Stand
US9330413B2 (en) * 2013-03-14 2016-05-03 Sears Brands, L.L.C. Checkout and/or ordering systems and methods
CN103325037A (en) * 2013-06-06 2013-09-25 上海讯联数据服务有限公司 Mobile payment safety verification method based on voice recognition
CN103617029A (en) * 2013-11-20 2014-03-05 中网一号电子商务有限公司 3D instant messaging system
CN103761667A (en) * 2014-01-09 2014-04-30 贵州宝森科技有限公司 Virtual reality e-commerce platform system and application method thereof
KR102223278B1 (en) * 2014-05-22 2021-03-05 엘지전자 주식회사 Glass type terminal and control method thereof
CN106034063A (en) * 2015-03-13 2016-10-19 阿里巴巴集团控股有限公司 Method and device for starting service in communication software through voice
CN104820921A (en) * 2015-03-24 2015-08-05 百度在线网络技术(北京)有限公司 Method and device for transaction in user equipment
KR101613287B1 (en) * 2015-06-15 2016-04-19 김영덕 Travel destination one stop shopping system based on 3D panoramic image and control method thereof
KR101613278B1 (en) * 2015-08-18 2016-04-19 김영덕 System for providing shopping information based on augmented reality and control method thereof
CN105824409A (en) * 2016-02-16 2016-08-03 乐视致新电子科技(天津)有限公司 Interactive control method and device for virtual reality

Non-Patent Citations (1)

Title
"虚拟现实技术(VR)在网上购物中的应用研究";沈朝魁等;《科技视界》;20120131(第02期);第3-6页 *

Also Published As

Publication number Publication date
CN107977834A (en) 2018-05-01
WO2018072617A1 (en) 2018-04-26

Similar Documents

Publication Publication Date Title
CN107977834B (en) Data object interaction method and device in virtual reality/augmented reality space environment
EP3528156B1 (en) Virtual reality environment-based identity authentication method and apparatus
US11694280B2 (en) Systems/methods for identifying products for purchase within audio-visual content utilizing QR or other machine-readable visual codes
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
CN106598998B (en) Information acquisition method and information acquisition device
US20210191690A1 (en) Virtual Reality Device Control Method And Apparatus, And Virtual Reality Device And System
JP2019510291A (en) A method of supporting transactions using a humanoid robot
TW201814439A (en) Virtual reality scene-based business realization method and apparatus
WO2023020622A1 (en) Display method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product
CN110096155A (en) A kind of service implementation method and device based on virtual reality
CN110192386B (en) Information processing apparatus, information processing method, and computer program
CN110472099B (en) Interactive video generation method and device and storage medium
US20190333261A1 (en) Program, and information processing apparatus and method
WO2022115272A1 (en) System and method for generating augmented reality objects
CN113129045A (en) Video data processing method, video data display method, video data processing device, video data display device, electronic equipment and storage medium
US20170309074A1 (en) Automatic vending machine having holographic products
CN110809187B (en) Video selection method, video selection device, storage medium and electronic equipment
KR20120099814A (en) Augmented reality contents service system and apparatus and method
WO2018135246A1 (en) Information processing system and information processing device
US20180373884A1 (en) Method of providing contents, program for executing the method on computer, and apparatus for providing the contents
CN110889006A (en) Recommendation method and device
CN110858376A (en) Service providing method, device, system and storage medium
KR20210110030A (en) Apparatus and method for providing information related to product in multimedia contents
CN112218111A (en) Image display method and device, storage medium and electronic equipment
KR20150085254A (en) Service system and service method for live dance class and live dance music room

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1254561

Country of ref document: HK

GR01 Patent grant