WO2019119977A1 - Augmented reality-based virtual object allocation method and apparatus - Google Patents
Augmented reality-based virtual object allocation method and apparatus
- Publication number
- WO2019119977A1 (PCT/CN2018/112552)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- augmented reality
- electronic voucher
- client
- virtual object
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/04—Payment circuits
- G06Q20/06—Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme
- G06Q20/065—Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme using e-cash
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/382—Payment protocols; Details thereof insuring higher security of transaction
- G06Q20/3821—Electronic credentials
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/08—Payment architectures
- G06Q20/12—Payment architectures specially adapted for electronic shopping systems
- G06Q20/123—Shopping for digital content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/30—Payment architectures, schemes or protocols characterised by the use of specific devices or networks
- G06Q20/32—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
- G06Q20/327—Short range or proximity payments by means of M-devices
- G06Q20/3276—Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, being read by the M-device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/30—Payment architectures, schemes or protocols characterised by the use of specific devices or networks
- G06Q20/36—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using electronic wallets or electronic money safes
- G06Q20/367—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using electronic wallets or electronic money safes involving electronic purses or money safes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/30—Payment architectures, schemes or protocols characterised by the use of specific devices or networks
- G06Q20/36—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using electronic wallets or electronic money safes
- G06Q20/367—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using electronic wallets or electronic money safes involving electronic purses or money safes
- G06Q20/3672—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using electronic wallets or electronic money safes involving electronic purses or money safes initialising or reloading thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4014—Identity check for transactions
- G06Q20/40145—Biometric identity checks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Definitions
- the present specification relates to the field of augmented reality, and in particular, to a virtual object allocation method and apparatus based on augmented reality.
- the present specification proposes a virtual object allocation method based on augmented reality, which is applied to an augmented reality client, and the method includes:
- the electronic voucher is configured to cause the augmented reality server to allocate a virtual object from a preset virtual object set based on the virtual object allocation request.
- the present specification also proposes a virtual object allocation method based on augmented reality, which is applied to an augmented reality server, and the method includes:
- the object allocation request includes a plurality of electronic credentials for extracting a business object
- the augmented reality client is allocated a virtual object from the preset virtual object set.
- the present specification also provides an augmented reality-based virtual object allocating device, which is applied to an augmented reality client, and the device includes:
- a scanning module that performs image scanning for the target user and initiates face recognition for the scanned image
- an obtaining module that acquires an electronic voucher issued by the augmented reality server when a facial feature is recognized from the scanned image, and saves the obtained electronic voucher locally; wherein the electronic voucher is used to extract the virtual object;
- a first determining module that determines whether the number of categories of the electronic vouchers saved locally reaches a preset threshold;
- a sending module that, if the number of categories of the electronic vouchers saved locally reaches the preset threshold, sends a virtual object allocation request to the augmented reality server, where the virtual object allocation request carries the electronic vouchers whose number of categories is the preset threshold, so that the augmented reality server allocates a virtual object from a preset set of virtual objects based on the virtual object allocation request.
- the present specification also provides an augmented reality-based virtual object allocating device, which is applied to an augmented reality server.
- the device includes:
- a second determining module that, when the augmented reality client performs image scanning for the target user, determines whether a facial feature is recognized from the image scanned by the augmented reality client;
- a sending module that, if the facial feature is recognized from the scanned image, sends an electronic voucher to the augmented reality client; wherein the electronic voucher is used to extract a virtual object;
- a receiving module configured to receive an object allocation request sent by the augmented reality client;
- the object allocation request includes a plurality of electronic credentials for extracting a business object;
- a third determining module that determines whether the number of categories of the electronic vouchers included in the object allocation request reaches a preset threshold;
- an allocation module that, if the number of categories of the electronic vouchers included in the object allocation request reaches the preset threshold, allocates a virtual object to the augmented reality client from the preset virtual object set.
- This specification proposes a new interactive mode that combines the online requirement of assigning virtual objects to users based on augmented reality technology with offline facial image scanning and face recognition performed by an augmented reality client. The user can perform image scanning on a target user through the augmented reality client, which initiates face recognition on the scanned image; when a facial feature is recognized from the scanned image, the augmented reality server can be triggered to send the augmented reality client an electronic voucher used to extract a virtual object, and the user can collect the electronic vouchers issued by the augmented reality server through the augmented reality client. When the number of categories of the electronic vouchers collected by the user reaches a preset threshold, the user obtains the right to be allocated a virtual object: the augmented reality client can actively send a virtual object allocation request to the augmented reality server, carrying the electronic vouchers whose number of categories is the preset threshold, so that the augmented reality server allocates an object to the user from the preset set of virtual objects, which can significantly improve the interactivity and interest of virtual object allocation.
- FIG. 1 is a process flow diagram of an augmented reality based virtual object allocation method according to an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of an image scanning interface of an AR client according to an embodiment of the present disclosure
- FIG. 3 is a schematic diagram of an image scanning interface of another AR client according to an embodiment of the present disclosure.
- FIG. 4 is a schematic diagram of an AR client displaying an acquired virtual Fu card according to an embodiment of the present disclosure;
- FIG. 5 is another schematic diagram of an AR client displaying an acquired virtual Fu card according to an embodiment of the present disclosure.
- FIG. 6 is a schematic diagram of a user obtaining a red envelope right through an AR client according to an embodiment of the present disclosure
- FIG. 7 is a logic block diagram of an augmented reality based virtual object allocation apparatus according to an embodiment of the present disclosure.
- FIG. 8 is a hardware structural diagram of an augmented reality client carrying the augmented reality-based virtual object allocating device according to an embodiment of the present disclosure
- FIG. 9 is a logic block diagram of another augmented reality based virtual object allocating apparatus according to an embodiment of the present disclosure.
- FIG. 10 is a hardware structural diagram of an augmented reality server that carries the another augmented reality-based virtual object allocating device according to an embodiment of the present disclosure.
- This specification aims to propose a new interactive mode that combines online requirements for assigning virtual objects to users based on augmented reality technology, and offline facial image scanning and face recognition using augmented reality clients.
- the user may perform image scanning for the target user's face area through the AR client, and the AR client initiates face recognition for the scanned image.
- when a facial feature is recognized from the scanned image, the AR server can be triggered to send the AR client an electronic voucher for extracting the virtual object, and the user can collect the electronic vouchers delivered by the AR server through the AR client.
- when the number of categories of the electronic vouchers collected by the user reaches a preset threshold, the user can obtain the allocation right of the virtual object, and the AR client can actively send to the AR server a virtual object allocation request that includes the electronic vouchers whose number of categories is the preset threshold, so that the AR server allocates a virtual object for the user from the preset virtual object set.
- In this way, the user can perform image scanning on the face area of himself or other users through the image scanning function carried by the AR client, trigger the AR server to issue electronic vouchers for extracting the virtual object, and collect the electronic vouchers to obtain the allocation right of virtual objects, which can significantly improve the interactivity and interest of virtual object allocation.
- In the following, the "virtual red envelope" in a red envelope distribution scenario is used as an example of the above-mentioned "virtual object".
- the user can use the AR client to perform image scanning on the face of himself or other users, to trigger the AR server to deliver to the AR client electronic vouchers for extracting virtual red envelopes; when the number of categories of the electronic vouchers collected by the user reaches a preset threshold, the AR client may actively send to the AR server a red envelope allocation request that includes electronic vouchers whose number of categories is the preset threshold, and the AR server then issues a certain amount of red envelopes for the user from a preset "red envelope fund pool", which can significantly improve the interactivity and interest of red envelope distribution.
- FIG. 1 is a schematic diagram of a virtual object allocation method based on augmented reality according to an embodiment of the present disclosure. The method performs the following steps:
- Step 102 The AR client performs image scanning on the target user, and initiates face recognition for the scanned image.
- Step 104 The AR server determines whether a facial feature is recognized from the real-life image scanned by the AR client; if the facial feature is recognized from the scanned image, the electronic credential is sent to the AR client; The electronic voucher is used to extract a virtual object;
- Step 106 The AR client obtains the electronic certificate issued by the AR server, and saves the obtained electronic certificate locally;
- Step 108 The AR client determines whether the number of categories of the electronic vouchers saved locally reaches a preset threshold; if the number of categories of the electronic vouchers saved locally reaches the preset threshold, a virtual object allocation request is sent to the AR server; wherein the virtual object allocation request carries the electronic vouchers whose number of categories is the preset threshold;
- Step 110 The AR server determines whether the number of categories of the electronic vouchers included in the object allocation request reaches a preset threshold; if the number of categories of the electronic vouchers included in the object allocation request reaches the preset threshold, a virtual object is allocated to the AR client from the preset virtual object set.
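- The following is a minimal sketch, in Python, of the category-count check in steps 108 and 110; the data model, threshold value, and request format are illustrative assumptions rather than the specification's actual implementation.

```python
# Hypothetical sketch of the client-side check in step 108: a voucher set is
# only exchanged for an allocation request once it spans enough *categories*.
from dataclasses import dataclass

PRESET_THRESHOLD = 5  # assumed number of distinct voucher categories required


@dataclass
class ElectronicVoucher:
    voucher_id: str
    category: str  # e.g. one of five virtual Fu card types


def build_allocation_request(saved_vouchers):
    """Return a virtual object allocation request once the number of locally
    saved voucher categories reaches the preset threshold; otherwise None."""
    by_category = {}
    for v in saved_vouchers:
        by_category.setdefault(v.category, v)   # keep one voucher per category
    if len(by_category) < PRESET_THRESHOLD:
        return None                             # keep collecting vouchers
    return {
        "type": "virtual_object_allocation_request",
        "vouchers": [v.voucher_id for v in by_category.values()],
    }
```

- On the server side (step 110), the same kind of category count over the carried vouchers decides whether an object is allocated from the preset virtual object set.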
- the foregoing AR client refers to client software developed based on AR technology or integrated with AR functionality; for example, the AR client may be an Alipay client integrated with an AR service function; the AR client may be equipped with an image scanning function for scanning real scenes and people in the offline environment, and the AR engine in the AR client's foreground may visually render virtual data pushed by the AR server in the background (such as operator-defined dynamic effects), superimposing and blending it with specific features (such as facial features or some custom offline graphic markers) identified from the scanned image data.
- the foregoing AR server includes a server, a server cluster, or a cloud platform built on the server cluster.
- the AR server may be a payment platform that provides docking services for the Alipay APP integrated with the AR service function; the AR server may perform image recognition on the image scanned by the AR client based on the background AR engine (of course, the image recognition process may also be performed by the AR client based on the AR engine of the foreground); and, the AR server may perform content management on the virtual data related to the offline service and, based on the result of the image recognition, push the related virtual data to the AR client; for example, the AR server can perform content management on locally pre-configured electronic vouchers for extracting virtual objects, and push the electronic vouchers to the AR client based on the result of the image recognition.
- the virtual object may include any type of virtual item that can be distributed and distributed online; for example, in one embodiment shown, the virtual object may be a "virtual red envelope" in a red envelope distribution scenario.
- the above electronic voucher may include any form of virtual credential for extracting a virtual object from the AR server.
- the AR server can pre-configure a certain number of different types of electronic vouchers, and issue different kinds of electronic vouchers to a specific user group according to a certain delivery rule; the specific form of the electronic voucher is not limited, and may be a string, a number, a character, a password, a virtual card, and so on.
- the following describes the virtual object as a "virtual red envelope" as an example.
- of course, the virtual object may also be something other than a "virtual red envelope", such as other forms of virtual items that can be distributed online; for example, an electronic voucher, an electronic shopping coupon, an electronic coupon, and the like.
- a certain number of different types of electronic vouchers can be pre-configured on the AR server; the user can perform image scanning on the face area of himself or other users through the AR client, the AR client initiates face recognition on the scanned image, and, when facial features are recognized from the scanned image, the AR server is triggered to issue an electronic voucher for extracting the virtual object to the AR client.
- the user can collect the electronic voucher delivered by the AR server through the AR client.
- when the number of categories of the electronic vouchers collected by the user reaches the preset threshold, the user can obtain the allocation right of the virtual object, and the AR client can proactively send to the AR server a red envelope allocation request that includes the electronic vouchers whose number of categories is the preset threshold, so that the AR server issues a certain amount of red envelopes for the user from the preset "red envelope fund pool".
- for example, the above electronic vouchers may include five categories of virtual Fu cards: "Shou Kangfu", "You Aifu", "Fuqiangfu", "Home and Fortune", and "Caiwangfu".
- the user can perform image scanning on the face area of himself or other users through the AR client to participate in the Fu card collection activity, trigger the AR server to send virtual Fu cards to the AR client, and collect the virtual Fu cards issued by the AR server through the AR client.
- when the user has collected all five categories of virtual Fu cards, the user will be granted the right to receive the red envelope, and the AR server will issue the red envelope for the user.
- a certain number of different types of electronic voucher can be pre-configured on the AR server, and the pre-configured electronic voucher is centrally managed by the AR server;
- the pre-configured electronic voucher is the only voucher for the user to obtain the red envelope issuance authority; the number of the pre-configured electronic voucher and the number of categories can be set based on actual needs.
- in addition to pre-configuring a certain number of different types of electronic credentials, the AR server can also bind corresponding delivery conditions to the pre-configured electronic credentials.
- the foregoing issuing condition may specifically include whether a facial feature is recognized from an image scanned by the AR client. That is, as long as the facial features are recognized from the images scanned by the AR client, the AR server can immediately issue electronic credentials to the AR client.
- the user can perform image scanning on the face area of the user or other users through the image scanning function carried by the AR client, and trigger the AR server to send the electronic certificate to the AR client, and through the AR client. Collect the electronic credentials issued by the AR server to obtain the redistribution rights.
- the AR client can provide a function option of the image scanning function to the user by default; for example, the function option may specifically be a “sweep” function button provided to the user in the user homepage of the AR client.
- the user can trigger the function option by means of "clicking", and then enter the image scanning interface of the AR client.
- FIG. 2 is a schematic diagram of an image scanning interface of an AR client according to an example.
- in the image scanning interface, an "AR red envelope" function button may further be provided, and the user may trigger it by means such as "clicking".
- a face scanning prompt may be further outputted in the image scanning page.
- the above-mentioned facial scanning prompt may specifically include a scanning frame for prompting the user to perform image scanning on the facial region, and prompt text statically displayed in the image scanning page.
- the facial scanning prompt may include a scanning frame of a face shape displayed in the middle of the screen, and a prompt text of “aligning the target to start scanning” that is statically resident and outputted under the scanning frame;
- the content of the prompt text may be dynamically updated in an actual application; for example, when the AR client scans the face region, the prompt text may be updated to “identifying the face”;
- when the face recognition fails, or when no facial feature is recognized by the AR client or the AR server from the scanned image, the client can update the prompt text to "Please keep the lens stable", "Please rescan", or other similar text prompts that prompt the user to rescan.
- the user can point the camera mounted on the AR terminal device (such as a smart phone or AR glasses) where the AR client is located at the face area of himself or other users.
- the AR client can invoke the camera mounted on the AR terminal device to perform real-time image scanning, and initiate face recognition on the scanned image.
- the face recognition on the scanned image may be performed by the AR client based on a face recognition model carried on the AR client, or the scanned image may be uploaded to the AR server in real time by the AR client and the recognition completed by the AR server based on its locally carried face recognition model.
- in one implementation, the AR client may carry a face recognition model locally; after scanning the image by calling the camera of the AR terminal device, the AR client may call the face recognition model to perform face recognition on the scanned image and upload the recognition result to the AR server.
- in another implementation, the AR client may not carry the face recognition model locally, but may upload the scanned image to the AR server in real time; the AR server then performs face recognition on the image based on its local face recognition model and returns the recognition result to the AR client.
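- As a rough sketch of the second variant, the snippet below shows an AR client uploading a scanned frame to the server for face recognition over HTTP; the endpoint path, host, and response fields are assumptions made for illustration only.

```python
# Hypothetical client-side call for server-side face recognition.
import requests

AR_SERVER = "https://ar-server.example.com"  # placeholder address


def recognize_face_remotely(frame_jpeg: bytes) -> bool:
    """Upload one scanned frame and return whether the server reports that a
    facial feature was recognized (assumed response: {"face_detected": true})."""
    resp = requests.post(
        f"{AR_SERVER}/v1/face/recognize",
        files={"image": ("frame.jpg", frame_jpeg, "image/jpeg")},
        timeout=5,
    )
    resp.raise_for_status()
    return bool(resp.json().get("face_detected", False))
```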
- the image recognition algorithm mounted in the image recognition model is not particularly limited in this example, and those skilled in the art can refer to the related art when implementing the technical solution of the present specification.
- the face recognition model described above may be a deep learning model trained based on a neural network combined with a large number of face image samples.
- when a facial feature is recognized from the scanned image, the AR client may display a preset dynamic effect picture in an augmented manner at the position corresponding to the scanned facial feature, to prompt the user that a facial feature has been recognized from the scanned image. For example, in one implementation, a corresponding atmosphere animation may be displayed in an augmented manner around the scanned facial features.
- specifically, the AR client can obtain a pre-configured dynamic effect picture, visually render it based on the AR engine of the AR client foreground, and, according to the relative position of the currently recognized facial feature in the image scanning interface, superimpose and merge it with the recognized facial feature so as to present it to the user in an augmented manner.
- the above-mentioned dynamic effect screen may be pre-arranged in the AR client or may be dynamically sent by the AR server, and is not particularly limited in the present specification.
- the AR server can further confirm, based on the recognition result, whether a facial feature is recognized from the real-time image scanned by the AR client; if it is confirmed that a facial feature is successfully recognized in the image scanned by the AR client, the AR server can send the corresponding electronic voucher to the AR client from the pre-configured electronic vouchers.
- when the AR server sends the electronic voucher to the AR client, it can randomly select the electronic voucher to deliver from the locally pre-configured electronic vouchers; in another implementation manner, in addition to the completely random delivery mode, the AR server can also deliver the electronic voucher to the AR client from the locally pre-configured electronic vouchers based on a preset delivery rule.
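- A small sketch of the two delivery modes mentioned above follows; the category names mirror the Fu card example used later, and the pool sizes and weighting rule are assumptions for illustration.

```python
# Hypothetical server-side selection of which pre-configured voucher to issue.
import random

VOUCHER_POOL = {
    "Shou Kangfu": 1000,
    "You Aifu": 1000,
    "Fuqiangfu": 1000,
    "Home and Fortune": 1000,
    "Caiwangfu": 50,   # deliberately scarce ("rare") category
}


def pick_voucher_random() -> str:
    """Completely random delivery among categories that still have stock."""
    available = [c for c, n in VOUCHER_POOL.items() if n > 0]
    return random.choice(available)


def pick_voucher_weighted() -> str:
    """Rule-based delivery: selection probability is proportional to the
    remaining stock, so scarce categories are issued less often."""
    available = {c: n for c, n in VOUCHER_POOL.items() if n > 0}
    return random.choices(list(available), weights=list(available.values()), k=1)[0]
```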
- when delivering the electronic voucher, the AR server may directly send the electronic voucher to the AR client, or may send only a unique identifier of the electronic voucher to the AR client, with the correspondence between the unique identifier and the electronic voucher stored locally in the AR client, so that the AR client can subsequently identify the corresponding electronic voucher by the unique identifier;
- for example, taking the electronic voucher as a virtual Fu card, the server can directly deliver the virtual Fu card to the AR client, or generate a unique corresponding identifier for the virtual Fu card and send only the identifier to the AR client; the AR client may then render the identifier as the corresponding virtual Fu card for local display according to the saved correspondence.
- in addition to whether facial features are recognized from the image scanned by the AR client, in actual applications, in order to improve the interactivity and security of electronic voucher issuance, recognition mechanisms such as gesture recognition or expression recognition may be further introduced on the basis of face recognition; that is, after facial features are recognized from the image scanned by the AR client, expression recognition or gesture recognition on the scanned image may be further initiated, and only when the preset gesture or preset expression is further recognized in the scanned image can the AR server issue the electronic credential to the AR client.
- a gesture recognition mechanism can be further introduced based on face recognition.
- the AR client can further output a gesture scanning prompt in the image scanning page.
- the gesture scanning prompt is specifically used to prompt the user to perform a preset gesture, and may include a gesture image corresponding to a gesture preset by the AR server, and a prompt text statically displayed in the image scanning page.
- the specific type of the preset gesture is not specifically limited in this specification. In practical applications, the setting may be customized based on actual interaction requirements.
- the preset gesture may specifically be a gesture that the user waved in front of the camera.
- FIG. 3 is a schematic diagram of an image scanning interface of another AR client shown in this example.
- the gesture scanning prompt may include a text prompt “Please ask a friend to make a gesture, have a chance to get a card” displayed at the bottom of the screen, and A gesture picture corresponding to the preset gesture displayed below the text prompt.
- at this time, the scanned user can make the corresponding gesture in front of the camera of the AR terminal device where the AR client is located, at the prompt of the gesture scanning prompt, and the AR client can invoke the camera mounted on the AR terminal device to perform real-time image scanning and initiate gesture recognition on the scanned image.
- the AR client can still perform the gesture recognition on the scanned image based on the gesture recognition model carried on the AR client, or upload the scanned image to the AR server in real time by the AR client. It is completed by the AR server based on its locally equipped gesture recognition model, and will not be described again.
- the image recognition algorithm mounted in the above-described gesture recognition model is not particularly limited in this example, and those skilled in the art can refer to the description in the related art when implementing the technical solution of the present specification.
- the AR server can further confirm, based on the recognition result, whether the preset gesture is recognized from the real-time image scanned by the AR client; if it is confirmed that the preset gesture is successfully recognized in the image scanned by the AR client, the AR server can send the corresponding electronic credential to the AR client from the pre-configured electronic credentials.
- an expression recognition mechanism can be further introduced on the basis of face recognition.
- the AR client may further output an expression scanning prompt in the image scanning page.
- the expression scanning prompt is specifically used to prompt the user to execute the preset expression, and may include an emoticon image corresponding to the preset expression pre-set by the AR server, and prompt text statically displayed in the image scanning page.
- the above-mentioned expression scanning prompts may also include only one statically displayed prompt text.
- the preset expressions are not particularly limited in the present specification, and may be customized according to actual interaction requirements in practical applications;
- for example, the above-mentioned expression scanning prompt may include a text prompt "Please smile, for a chance to get a Fu card" displayed at the bottom of the screen, and a preset smile image displayed below the text prompt.
- at this time, the scanned user can make the corresponding expression in front of the camera of the AR terminal device where the AR client is located, at the prompt of the expression scanning prompt, and the AR client can invoke the camera mounted on the AR terminal device to perform real-time image scanning and initiate expression recognition on the scanned image.
- the AR client can still perform the expression recognition on the scanned image based on the expression recognition model carried on the AR client, or upload the scanned image to the AR server in real time by the AR client. It is completed by the AR server based on its locally-equipped expression recognition model, and will not be described again.
- the image recognition algorithm mounted in the expression recognition model is not particularly limited in this example, and those skilled in the art can refer to the description in the related art when implementing the technical solution of the present specification; for example, when the preset expression is a smile expression, an image recognition algorithm for performing smile expression recognition may be pre-loaded on the AR client or the AR server.
- the AR server can further confirm, based on the recognition result, whether the preset expression is recognized from the real-time image scanned by the AR client; if it is confirmed that the preset expression is successfully recognized in the image scanned by the AR client, the AR server can send the corresponding electronic certificate to the AR client from the pre-configured electronic certificates.
- gesture recognition or expression recognition is further introduced:
- the user can trigger the AR server to issue an electronic voucher for himself only after making a preset gesture or expression in front of the camera mounted on the AR terminal device, thereby improving the interactivity between the user and the AR server;
- in addition, when the electronic voucher is sent to the AR client, the AR server can also encrypt the electronic voucher through a preset encryption algorithm; after the AR client receives the encrypted electronic voucher, the electronic voucher can be decrypted using a decryption key corresponding to the above encryption algorithm. In this way, it is possible to prevent the electronic voucher from being forged by the user.
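- The specification does not name a particular encryption algorithm; the sketch below simply illustrates the idea with symmetric authenticated encryption (Fernet from the Python cryptography package) and a key assumed to be shared between server and client.

```python
# Illustrative voucher encryption/decryption; the algorithm choice is an assumption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # assumed to be shared between AR server and client
cipher = Fernet(key)

# AR server side: encrypt the voucher payload before delivery
token = cipher.encrypt(b'{"voucher_id": "abc123", "category": "Fuqiangfu"}')

# AR client side: decrypt with the corresponding key; a tampered token raises
# cryptography.fernet.InvalidToken, which helps reject forged vouchers
plaintext = cipher.decrypt(token)
print(plaintext.decode())
```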
- the number of categories of the electronic voucher delivered by the AR server to the AR client may be smaller than the preset threshold.
- for example, taking the electronic voucher as a virtual Fu card and the preset threshold as 5 categories, when the server delivers virtual Fu cards to the AR client, the number of categories of virtual Fu cards delivered can be less than 5, such as 3 categories. In this way, the number of users with red envelope allocation rights can be effectively controlled.
- when the server sends the electronic credential to the AR client based on a preset delivery rule, the delivery rule can be customized according to actual needs.
- the above-described delivery rules may include a selective delivery for a particular population.
- for example, the AR server may preferentially deliver electronic vouchers to users with higher activity, or preferentially issue electronic vouchers of which only a small number are pre-configured to users with higher activity.
- the activity may be characterized based on daily parameters related to the activity of the user; for example, in actual applications, the more friends a user has, or the more services the user initiates, the more active the user is; therefore, the foregoing parameters may include parameters such as the number of friends of the user and the number of services initiated.
- the AR server may count the number of friends of the user, or count the number of services initiated by the user, and then perform threshold processing to determine whether the user is an active user.
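- A minimal sketch of such threshold processing is shown below; the parameter names and threshold values are assumptions, not values given in the specification.

```python
# Hypothetical activity check used when deciding preferential voucher delivery.
FRIEND_THRESHOLD = 50      # assumed cut-off for "many friends"
SERVICE_THRESHOLD = 20     # assumed cut-off for "many initiated services"


def is_active_user(friend_count: int, services_initiated: int) -> bool:
    """Treat a user as 'active' when either daily-activity parameter reaches
    its threshold; active users may be preferentially issued (rare) vouchers."""
    return friend_count >= FRIEND_THRESHOLD or services_initiated >= SERVICE_THRESHOLD
```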
- in addition to the manner in which the user performs image scanning on the face area of himself or other users through the AR client to trigger the AR server to deliver electronic vouchers to the AR client, electronic vouchers can also be collected by obtaining electronic vouchers shared by other users of the same AR client.
- for example, the AR client may be an Alipay client integrated with the AR service function; the Alipay client may provide a sharing interface, through which other users may share the virtual Fu cards they have obtained with the user within the Alipay client, or share the virtual Fu cards to a third-party social platform or instant messaging software in the form of a link or a clickable graphic; the user can then collect a virtual Fu card shared by another user and save it locally by clicking the link or the graphic and jumping to the interface of the Alipay client.
- when the AR client obtains the electronic voucher issued by the AR server by scanning the face area of the user himself or other users, the AR client can display a preset dynamic effect picture in an augmented manner at the position corresponding to the scanned facial feature, so as to prompt the user that the electronic voucher has been obtained.
- for example, a preset dynamic effect picture may be played around the scanned facial features; in one example, taking the electronic voucher as a virtual Fu card, the position of the scanned user's head may be determined based on the scanned facial features, a corresponding festive crown may then be "worn" at the head position of the scanned user, and an animation of Fu cards falling from the sky may be dynamically played around the user's face.
- the AR client can also generate a corresponding display picture for the obtained electronic voucher, visually render the display picture based on the AR engine of the AR client foreground, superimpose and merge it with the recognized facial features, display it in an augmented manner at the corresponding position in the image scanning interface, and output a collection option on the display picture.
- the collection option may be a function button for claiming the electronic voucher corresponding to the display picture; the user may trigger the function button by means such as "clicking" to collect the electronic voucher corresponding to the display picture, adding the generated display picture to a local display position corresponding to the electronic voucher, thereby saving the electronic voucher locally;
- the display image generated by the AR client for the electronic voucher may correspond to the type of the electronic voucher, that is, the display images corresponding to different types of electronic voucher are different from each other.
- the content shown on the above display images is not subject to any special restrictions.
- the display position provided by the AR client in the user interface may also correspond to the type of the electronic voucher, and the different display positions may respectively correspond to different types of electronic voucher.
- the AR client may also mark, on the display position, the number of currently obtained electronic vouchers corresponding to that display position; for example, a numerical reminder may be generated at the upper right corner of the display position. Moreover, when the number of electronic vouchers of a certain kind changes, the AR client can also update the number marked on the display position corresponding to that kind of electronic voucher based on the actual remaining number of the electronic vouchers.
- the user can also share the obtained electronic voucher with other users.
- the user can share the electronic voucher corresponding to the displayed picture to other users by triggering the display picture added in the placement.
- the display position corresponding to the electronic voucher can be triggered by a click operation or the like; after the display position is triggered, the display image corresponding to the display position can be shown in the user interface.
- a sharing option may be provided in the display image.
- the sharing option may be a triggering option of "send one to a friend"; when the user triggers the sharing option by means such as clicking, the AR client can output a sharing mode selection interface, in which several target applications that can be selected by the user can be provided.
- the user can select the corresponding target application in the selection interface, and then further select the target user to be shared with in the target application; when the selection is completed, the AR client can send the electronic certificate to the target user selected by the user.
- when the user shares the electronic voucher with a target user, if the target user is a contact or friend of the user in the AR client, the user can share the electronic voucher with the target user within the AR client.
- the AR client can transmit the electronic voucher to be shared to the AR client of the target user through the AR server.
- the user can also share the electronic voucher with the target user through a third-party client.
- the AR client can generate a corresponding access link for the electronic voucher to be shared through the AR server, and then share the generated access link to the target user through the client of the third party.
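- As an illustration only, the access link could be produced along the lines of the sketch below; the URL format, token scheme, and storage are assumptions rather than the specification's design.

```python
# Hypothetical generation of a shareable access link for an electronic voucher.
import secrets

SHARE_BASE_URL = "https://ar-server.example.com/voucher/claim"  # placeholder

share_tokens = {}  # server-side map: share token -> voucher id


def create_share_link(voucher_id: str) -> str:
    token = secrets.token_urlsafe(16)   # hard-to-guess share token
    share_tokens[token] = voucher_id    # lets the server resolve the claim later
    return f"{SHARE_BASE_URL}?token={token}"
```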
- during the process of collecting electronic vouchers, the AR client can also determine in the background, in real time, whether the number of categories of the locally saved electronic vouchers reaches the preset threshold; if the preset threshold is reached, the user can obtain the allocation right of the virtual red envelope; in this case, the AR client can send a red envelope allocation request (corresponding to the above virtual object allocation request) to the AR server, and carry a number of electronic vouchers in the red envelope allocation request.
- the number of categories of the electronic vouchers carried in the red envelope allocation request may be the preset threshold, so that, after receiving the red envelope allocation request, the server can obtain the electronic vouchers carried in the red envelope allocation request and then verify them.
- the operation of the AR client sending the red envelope allocation request to the AR server may be triggered manually by the user, or may be triggered automatically when the AR client determines that the number of categories of the collected electronic vouchers reaches the preset threshold; in the latter case, the red envelope allocation request may be automatically initiated to the AR server.
- in the manually triggered case, a trigger button for triggering the AR client to initiate an object allocation request to the AR server may be provided on the display position corresponding to the electronic vouchers; when the AR client determines in the background that the number of categories of the collected electronic vouchers reaches the preset threshold, a prompt may be output to inform the user that the red envelope allocation permission has been obtained, and the AR client may then send the red envelope allocation request to the AR server in response to the user's triggering operation on the trigger button.
- after receiving the red envelope allocation request sent by the AR client, the AR server verifies the number of categories of the electronic vouchers carried in the request; if the verification determines that the number of categories of the electronic vouchers carried in the red envelope allocation request reaches the preset threshold, the user can be granted the right to be allocated a red envelope, and the AR server may immediately issue a certain amount of red envelopes for the user from the preset "red envelope fund pool" (corresponding to the above preset virtual object set) based on a preset allocation rule, or, after the specified red envelope issuance time, issue a certain amount of red envelopes for the user from the preset "red envelope fund pool" based on the preset allocation rule.
- the allocation rules used by the AR server to issue red packets for the user from the preset "red packet fund pool” can be formulated based on actual business needs.
- for example, the AR server can count the number of users who have been granted the red envelope allocation authority, and calculate an average allocation amount based on the total amount in the "red envelope fund pool" to be issued and the counted number of users; the calculated average allocation amount is the amount of red envelope that needs to be distributed to each user, and the AR server can issue a red envelope of the corresponding amount for each user from the "red envelope fund pool" based on the calculated average allocation amount.
- alternatively, when the AR server issues a red envelope for the user, it may also randomly extract a certain amount of red envelope from the "red envelope fund pool"; for example, the AR server may calculate a random amount for the user based on a preset random algorithm combined with the total amount of red envelopes to be issued in the "red envelope fund pool", and then issue the corresponding amount of red envelope to the user according to the random amount.
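- The two allocation rules described above can be sketched as follows; amounts are handled in cents, and the random rule shown is only one plausible reading of combining a random algorithm with the total amount to be issued.

```python
# Illustrative allocation of red envelope amounts from the fund pool.
import random


def even_split(pool_cents: int, user_count: int) -> int:
    """Average allocation: every granted user receives the same amount."""
    return pool_cents // user_count


def random_amount(remaining_cents: int, remaining_users: int) -> int:
    """Random allocation for one user, leaving at least 1 cent for each of the
    users still waiting (a common 'red envelope' style split)."""
    if remaining_users == 1:
        return remaining_cents
    upper = remaining_cents - (remaining_users - 1)
    return random.randint(1, max(1, upper))
```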
- after the red envelope is issued, the AR server may also send an allocation result to the AR client where the user is located; after receiving the allocation result, the AR client may show the allocation result to the user.
- the allocation result may include one or more of the amount of the red envelope issued, the sender of the red envelope, the other recipients of the red envelope, the number of other recipients, and the allocation rule of the red envelope.
- the above AR client may be a payment client integrated with the AR service function.
- the AR server may be a payment platform that docks with the Alipay client integrated with the AR service function.
- the above virtual object set may refer to a corporate fund account of a company that cooperates with the payment platform, and the funds under the enterprise fund account are the total amount of funds that the enterprise can use to distribute the red envelope to the user.
- the above electronic vouchers may include five types of virtual Fu cards: "Shou Kangfu", "You Aifu", "Fuqiangfu", "Home and Fortune" and "Caiwangfu".
- the user can scan the face area of himself or other users through the AR client, and trigger the AR server to send a virtual Fu card to the AR client when a facial feature is recognized from the scanned image.
- the user can collect the virtual Fu cards issued by the AR server through the AR client, and after collecting all five types of virtual Fu cards, the user will receive the right to receive the red envelope, and the AR server will issue a red envelope for the user.
- the payment platform can set a certain number of kinds of "rare" virtual Fu cards, such as 2 types.
- the issuance of such "rare" virtual Fu cards depends on the activity level of the user calculated by the payment platform based on the user's data; that is, such "rare" virtual Fu cards can be preferentially issued to users with higher activity.
- when the payment platform delivers virtual Fu cards to ordinary users, it may issue only 2 to 3 of the five types of virtual Fu cards "Shou Kangfu", "You Aifu", "Fuqiangfu", "Home and Fortune" and "Caiwangfu".
- the AR client can initiate face recognition on the scanned image, and, when a facial feature is recognized from the scanned image, display the corresponding dynamic effect picture in an augmented manner at a position corresponding to the facial feature to prompt the user that the facial feature has been recognized.
- the AR server may determine, based on the recognition result, whether the facial feature is recognized from the image scanned by the AR client;
- if a facial feature is recognized, the AR server can directly issue the virtual Fu card to the user through the AR client.
- a gesture recognition or expression recognition mechanism may be further introduced on the basis of face recognition.
- the AR client may prompt the user to make a preset gesture or a preset expression by continuing to output a gesture scanning prompt or an expression scanning prompt in the image scanning interface.
- the AR server may determine, according to the recognition result, whether a preset gesture or a preset expression is recognized from the image scanned by the AR client; if the preset gesture or the preset expression is recognized from the image scanned by the AR client, the AR server can deliver the virtual Fu card to the user through the AR client.
- at this time, a corresponding dynamic effect picture may be displayed in an augmented manner at a position corresponding to the facial feature to prompt the user that the virtual Fu card has been received;
- referring to FIG. 4, the AR client can visually render the virtual Fu card issued by the AR server based on the AR engine of the AR client, superimpose and combine it with the recognized facial features, display it in an augmented manner at the corresponding position in the image scanning interface (in FIG. 4, a virtual Fu card is overlaid above the recognized facial region), and output a collection option on the display image.
- the user can trigger the "receive the Fu card" function button by means such as "clicking", triggering the AR client to display the display image in the activity interface.
- the contents of the display image generated by the client for different kinds of virtual cards are different from each other.
- the placements provided by the client can correspond to the types of virtual cards, and each virtual card can correspond to one placement.
- referring to FIG. 5, the AR client can provide a display position in the user interface for each of the five types of virtual Fu cards, such as "Shou Kangfu", "You Aifu", "Fuqiangfu", "Home and Fortune", and "Caiwangfu"; after the client adds the display image to the corresponding display position, the client can also mark the number of the virtual Fu cards of that type currently acquired at a preset location of the display position (the upper right corner shown in FIG. 5). At the same time, when the number of virtual Fu cards corresponding to a display position changes, the client can also update the number.
- the AR client can also provide a "Five Fortunes" display position in the user interface; after the user successfully collects the five types of virtual Fu cards "Shou Kangfu", "You Aifu", "Fuqiangfu", "Home and Fortune" and "Caiwangfu", the "Five Fortunes" display position can be emphasized, for example by highlighting, to prompt the user that he or she has the right to receive the "Five Fortunes" red envelope.
- the user can manually trigger the "Five Fortunes” display position, trigger the client to send a red envelope allocation request to the payment platform, or the client automatically sends a red envelope allocation request to the payment platform.
- the red envelope allocation request may carry the five types of virtual Fu cards that have been collected; after receiving the request, the payment platform may verify the types of virtual Fu cards in the request, and if the verification determines that the request carries all 5 types of virtual Fu cards, the payment platform can immediately issue a certain amount of "red envelope" to the user from the corporate fund account of the cooperating enterprise, or issue a certain amount of "red envelope" to the user when the delivery time arrives. For example, the payment platform can count the number of users who have obtained the permission to receive the "Five Fortunes" red envelope, and then evenly distribute the amount of funds available for payment in the corporate fund account.
- after the red envelope is issued, the payment platform can push a red envelope distribution result to the client, and the client can display the distribution result to the user in the activity interface after receiving it.
- the information displayed in the red envelope issuance result may include the amount received this time, the sender of the red envelope (the name of the company that cooperates with the payment platform), and the total number of people receiving the "Five Fortunes” red envelope this time, this time " Five Fu” red envelope distribution rules, and so on.
- the present specification also provides an embodiment of the apparatus.
- the present specification proposes an augmented reality-based virtual object allocating device 70, which is applied to an AR client; see FIG. 8, as an AR client that carries the augmented reality-based virtual object allocating device 70.
- the hardware architecture of the AR terminal usually includes a CPU, a memory, a non-volatile memory, a network interface, an internal bus, and the like.
- taking software implementation as an example, the augmented reality-based virtual object allocation device 70 can generally be understood as a computer program loaded in the memory, a logic device combining hardware and software that is formed after the CPU runs the program; the device 70 includes:
- the scanning module 701 performs image scanning for the target user and initiates face recognition for the scanned image
- the obtaining module 702 is configured to acquire an electronic voucher issued by the augmented reality server when the facial feature is recognized from the scanned image, and save the obtained electronic voucher locally; wherein the electronic voucher is used to extract the virtual object ;
- the first determining module 703 determines whether the number of categories of the electronic voucher saved locally reaches a preset threshold
- the sending module 704, if the number of categories of the electronic vouchers saved locally reaches the preset threshold, sends a virtual object allocation request to the augmented reality server, where the virtual object allocation request carries the electronic vouchers whose number of categories is the preset threshold, so that the augmented reality server allocates a virtual object from a preset set of virtual objects based on the virtual object allocation request.
- the obtaining module 702 further:
- outputs gesture prompt information to the user; wherein the gesture prompt information is used to prompt the user to perform the preset gesture;
- initiates gesture recognition on the scanned image, and, when the preset gesture is recognized from the scanned image, acquires the electronic voucher issued by the augmented reality server.
- the obtaining module 702 further:
- outputs expression prompt information to the user; wherein the expression prompt information is used to prompt the user to perform the preset expression;
- initiates expression recognition on the scanned image, and, when the preset expression is recognized from the scanned image, acquires the electronic voucher issued by the augmented reality server.
- the device 70 further includes a display module 705 (not shown in FIG. 7) that, when a facial feature is recognized from the scanned image and the electronic voucher issued by the augmented reality server is acquired, displays a preset dynamic effect picture in an augmented manner at the position corresponding to the facial feature.
- the virtual object is a virtual red envelope.
- the present specification proposes an augmented reality-based virtual object allocating device 90, which is applied to an AR server; see FIG. 10, as an AR server that carries the augmented reality-based virtual object allocating device 90.
- the hardware architecture generally includes a CPU, a memory, a non-volatile memory, a network interface, an internal bus, and the like.
- taking software implementation as an example, the augmented reality-based virtual object allocation device 90 can generally be understood as a computer program loaded in the memory, a logic device combining hardware and software that is formed after the CPU runs the program; the device 90 includes:
- a second determining module 901, configured to determine, when the augmented reality client performs image scanning for a target user, whether a facial feature is recognized from the image scanned by the augmented reality client;
- an issuing module 902, configured to issue an electronic voucher to the augmented reality client if a facial feature is recognized from the scanned image, wherein the electronic voucher is used to extract a virtual object;
- a receiving module 903, configured to receive an object allocation request sent by the augmented reality client, wherein the object allocation request includes several electronic vouchers used to extract a business object;
- a third determining module 904, configured to determine whether the number of categories of the electronic vouchers included in the object allocation request reaches a preset threshold;
- an allocating module 905, configured to allocate a virtual object to the augmented reality client from a preset virtual object set if the number of categories of the electronic vouchers included in the object allocation request reaches the preset threshold.
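Mirroring the client-side sketch, the server-side modules 901 to 905 can be read as two handlers, one for scan events and one for allocation requests. The pool object and the random issuance and allocation rules below are assumptions chosen only to make the sketch runnable; the description itself also allows other rules, such as an equal split of the pool among eligible users.

```python
import random
from typing import List, Optional

class ArServerAllocator:
    """Minimal sketch of apparatus 90: issue vouchers on face recognition, allocate on a valid request."""

    def __init__(self, voucher_categories: List[str], pool_total: float, category_threshold: int = 5):
        self.voucher_categories = voucher_categories
        self.pool_remaining = pool_total       # preset virtual object set, e.g. a red envelope fund pool
        self.category_threshold = category_threshold

    def handle_scan(self, face_recognized: bool) -> Optional[str]:
        """Second determining module 901 + issuing module 902."""
        if not face_recognized:
            return None
        return random.choice(self.voucher_categories)  # one possible issuance rule: uniform random

    def handle_allocation_request(self, vouchers: List[str]) -> Optional[float]:
        """Receiving module 903 + third determining module 904 + allocating module 905."""
        if len(set(vouchers)) < self.category_threshold:
            return None                                # category count not reached; no allocation
        amount = round(min(self.pool_remaining, random.uniform(0.01, 10.0)), 2)
        self.pool_remaining -= amount
        return amount
```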
- In this example, the issuing module 902 is configured to:
- if a facial feature is recognized from the scanned image, further determine whether a preset gesture is recognized from the image scanned by the augmented reality client;
- if the preset gesture is recognized from the image scanned by the augmented reality client, issue the electronic voucher to the augmented reality client.
- In this example, the issuing module 902 is configured to:
- if a facial feature is recognized from the scanned image, further determine whether a preset expression is recognized from the image scanned by the augmented reality client;
- if the preset expression is recognized from the image scanned by the augmented reality client, issue the electronic voucher to the augmented reality client.
- In this example, the issuing module 902 is further configured to: issue a preset animation effect to the augmented reality client, so that the augmented reality client displays the animation effect in an augmented manner at a position corresponding to the facial feature when the augmented reality client recognizes a facial feature from the scanned image and when it obtains the electronic voucher issued by the augmented reality server.
- In this example, the number of categories of the electronic vouchers issued by the augmented reality server to the augmented reality client is less than the preset threshold.
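One way such a limit could be enforced, shown here only as an assumption-laden sketch, is to derive a fixed per-user subset of issuable voucher categories whose size is strictly below the threshold, so that the remaining categories must be obtained through sharing; the subset size and the hashing-based seeding are illustrative assumptions.

```python
import hashlib
import random
from typing import List

def issuable_categories(user_id: str, all_categories: List[str], threshold: int) -> List[str]:
    """Pick a stable per-user subset of voucher categories whose size is below the threshold."""
    subset_size = max(1, threshold - 2)                    # e.g. 3 out of 5 categories
    seed = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)                              # deterministic for a given user
    return rng.sample(all_categories, subset_size)

cards = ["shoukang", "youai", "fuqiang", "jiahe", "caiwang"]
print(issuable_categories("user-42", cards, threshold=5))
```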
- In this example, the virtual object is a virtual red envelope.
- Since the apparatus embodiments basically correspond to the method embodiments, for related details, reference may be made to the description of the method embodiments. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the present specification. Those of ordinary skill in the art can understand and implement them without creative effort.
- The apparatus, module, or unit set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
- Corresponding to the above method embodiments, the present specification also provides an embodiment of an electronic device. The electronic device includes a processor and a memory for storing machine-executable instructions, wherein the processor and the memory are typically interconnected through an internal bus. In other possible implementations, the device may further include an external interface to enable communication with other devices or components.
- In this embodiment, by reading and executing the machine-executable instructions stored in the memory and corresponding to the control logic of augmented reality-based virtual object allocation, the processor is caused to:
- perform image scanning for a target user, and initiate face recognition on the scanned image;
- obtain an electronic voucher issued by the augmented reality server when a facial feature is recognized from the scanned image, and save the obtained electronic voucher locally, wherein the electronic voucher is used to extract a virtual object;
- determine whether the number of categories of the electronic vouchers saved locally reaches a preset threshold;
- if the number of categories of the electronic vouchers saved locally reaches the preset threshold, send a virtual object allocation request to the augmented reality server, wherein the virtual object allocation request carries electronic vouchers whose number of categories equals the preset threshold, so that the augmented reality server allocates a virtual object from a preset virtual object set based on the virtual object allocation request.
- In this embodiment, by reading and executing the machine-executable instructions stored in the memory and corresponding to the control logic of augmented reality-based virtual object allocation, the processor is further caused to:
- output gesture prompt information to the user when a facial feature is recognized from the scanned image, wherein the gesture prompt information is used to prompt the user to perform a preset gesture;
- initiate gesture recognition on the scanned image, and obtain the electronic voucher issued by the augmented reality server when the preset gesture is recognized from the scanned image.
- In this embodiment, by reading and executing the machine-executable instructions stored in the memory and corresponding to the control logic of augmented reality-based virtual object allocation, the processor is further caused to:
- output expression prompt information to the user when a facial feature is recognized from the scanned image, wherein the expression prompt information is used to prompt the user to perform a preset facial expression;
- initiate expression recognition on the scanned image, and obtain the electronic voucher issued by the augmented reality server when the preset expression is recognized from the scanned image.
- In this embodiment, by reading and executing the machine-executable instructions stored in the memory and corresponding to the control logic of augmented reality-based virtual object allocation, the processor is further caused to:
- display a preset animation effect in an augmented manner at a position corresponding to the facial feature when the facial feature is recognized from the scanned image and when the electronic voucher issued by the augmented reality server is obtained.
- In this embodiment, by reading and executing the machine-executable instructions stored in the memory and corresponding to the control logic of augmented reality-based virtual object allocation, the processor is further caused to:
- generate a corresponding display picture for the obtained electronic voucher;
- output the generated display picture to the user in the image scanning interface, wherein the display picture includes a collection option corresponding to the electronic voucher;
- in response to a trigger operation of the user on the collection option, add the generated display picture to a local display position corresponding to the electronic voucher, wherein display pictures generated for electronic vouchers of different categories are different from one another, and display positions corresponding to electronic vouchers of different categories are different from one another.
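The per-category display positions and count badges described above might be modelled as a simple mapping from voucher category to collected count; the class and method names below are illustrative assumptions, not part of the specification.

```python
from collections import defaultdict
from typing import Dict

class VoucherShelf:
    """Sketch of per-category display positions: one slot per voucher category, with a count badge."""

    def __init__(self):
        self.counts: Dict[str, int] = defaultdict(int)

    def collect(self, category: str) -> int:
        """User tapped the collection option on a display picture: save it and update the badge."""
        self.counts[category] += 1
        return self.counts[category]

    def badge(self, category: str) -> str:
        """Text shown at the corner of the category's display position."""
        n = self.counts.get(category, 0)
        return "" if n == 0 else str(n)

shelf = VoucherShelf()
shelf.collect("fuqiang")
shelf.collect("fuqiang")
print(shelf.badge("fuqiang"))  # -> "2"
```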
- Corresponding to the above method embodiments, the present specification also provides an embodiment of another electronic device. The electronic device includes a processor and a memory for storing machine-executable instructions, wherein the processor and the memory are typically interconnected through an internal bus. In other possible implementations, the device may further include an external interface to enable communication with other devices or components.
- In this embodiment, by reading and executing the machine-executable instructions stored in the memory and corresponding to the control logic of augmented reality-based virtual object allocation, the processor is caused to:
- when the augmented reality client performs image scanning for a target user, determine whether a facial feature is recognized from the image scanned by the augmented reality client;
- if a facial feature is recognized from the scanned image, issue an electronic voucher to the augmented reality client, wherein the electronic voucher is used to extract a virtual object;
- receive an object allocation request sent by the augmented reality client, wherein the object allocation request includes several electronic vouchers used to extract a business object;
- determine whether the number of categories of the electronic vouchers included in the object allocation request reaches a preset threshold;
- if the number of categories of the electronic vouchers included in the object allocation request reaches the preset threshold, allocate a virtual object to the augmented reality client from a preset virtual object set.
- In this embodiment, by reading and executing the machine-executable instructions stored in the memory and corresponding to the control logic of augmented reality-based virtual object allocation, the processor is further caused to:
- if a facial feature is recognized from the scanned image, further determine whether a preset gesture is recognized from the image scanned by the augmented reality client;
- if the preset gesture is recognized from the image scanned by the augmented reality client, issue the electronic voucher to the augmented reality client.
- In this embodiment, by reading and executing the machine-executable instructions stored in the memory and corresponding to the control logic of augmented reality-based virtual object allocation, the processor is further caused to:
- if a facial feature is recognized from the scanned image, further determine whether a preset expression is recognized from the image scanned by the augmented reality client;
- if the preset expression is recognized from the image scanned by the augmented reality client, issue the electronic voucher to the augmented reality client.
- In this embodiment, by reading and executing the machine-executable instructions stored in the memory and corresponding to the control logic of augmented reality-based virtual object allocation, the processor is further caused to:
- issue a preset animation effect to the augmented reality client, so that the augmented reality client displays the animation effect in an augmented manner at a position corresponding to the facial feature when the augmented reality client recognizes a facial feature from the scanned image and when it obtains the electronic voucher issued by the augmented reality server.
Abstract
公开一种基于增强现实的虚拟对象分配方法,应用于增强现实客户端,包括:针对目标用户进行图像扫描,并发起针对扫描到的图像的人脸识别;获取在从扫描到的图像中识别出面部特征时由增强现实服务端下发的电子凭证,并将获取到的电子凭证在本地保存;其中,所述电子凭证用于提取虚拟对象;确定本地保存的所述电子凭证的类别数是否达到预设阈值;如果本地保存的所述电子凭证的类别数达到所述预设阈值,向所述增强现实服务端发送虚拟对象分配请求,其中,所述虚拟对象分配请求中携带类别数为所述预设阈值的所述电子凭证,以使得所述增强现实服务端基于该虚拟对象分配请求从预设的虚拟对象集合中分配虚拟对象。
Description
本说明书涉及增强现实领域,尤其涉及一种基于增强现实的虚拟对象分配方法及装置。
随着网络技术的发展,出现了多种多样的虚拟对象的分配方式。以“红包”形式的虚拟对象的分配为例,用户可以将电子贺卡、礼金等放入“红包”中,然后单独发放至某个用户,或者发放至群组内,由群组内的所有成员进行领取。然而,随着虚拟对象的分配场景的日益丰富,如何提升在分配虚拟对象时的交互性以及趣味性,对于提升用户体验具有十分重要的意义
发明内容
本说明书提出一种基于增强现实的虚拟对象分配方法,应用于增强现实客户端,所述方法包括:
针对目标用户进行图像扫描,并发起针对扫描到的图像的人脸识别;
获取在从扫描到的图像中识别出面部特征时由增强现实服务端下发的电子凭证,并将获取到的电子凭证在本地保存;其中,所述电子凭证用于提取虚拟对象;
确定本地保存的所述电子凭证的类别数是否达到预设阈值;
如果本地保存的所述电子凭证的类别数达到所述预设阈值,向所述增强现实服务端发送虚拟对象分配请求,其中,所述虚拟对象分配请求中携带类别数为所述预设阈值的所述电子凭证,以使得所述增强现实服务端基于该虚拟对象分配请求从预设的虚拟对象集合中分配虚拟对象。
本说明书还提出一种基于增强现实的虚拟对象分配方法,应用于增强现实服务端,所述方法包括:
当增强现实客户端针对目标用户进行图像扫描时,确定是否从所述增强现实客户端扫描到的图像中识别出面部特征;
如果从扫描到的图像中识别出面部特征,向所述增强现实客户端下发电子凭证;其中,所述电子凭证用于提取虚拟对象;
接收所述增强现实客户端发送的对象分配请求;所述对象分配请求中包含用于提取业务对象的若干电子凭证;
确定所述对象分配请求中包含的电子凭证的类别数是否达到预设阈值;
如果所述对象分配请求中包含的电子凭证的类别数达到所述预设阈值,从预设的虚拟对象集合中为所述虚拟现实客户端分配虚拟对象。
本说明书还提出一种基于增强现实的虚拟对象分配装置,应用于增强现实客户端,所述装置包括:
扫描模块,针对目标用户进行图像扫描,并发起针对扫描到的图像的人脸识别;
获取模块,获取在从扫描到的图像中识别出面部特征时由增强现实服务端下发的电子凭证,并将获取到的电子凭证在本地保存;其中,所述电子凭证用于提取虚拟对象;
第一确定模块,确定本地保存的所述电子凭证的类别数是否达到预设阈值;
发送模块,如果本地保存的所述电子凭证的类别数达到所述预设阈值,向所述增强现实服务端发送虚拟对象分配请求,其中,所述虚拟对象分配请求中携带类别数为所述预设阈值的所述电子凭证,以使得所述增强现实服务端基于该虚拟对象分配请求从预设的虚拟对象集合中分配虚拟对象。
本说明书还提出一种基于增强现实的虚拟对象分配装置,应用于增 强现实服务端,所述装置包括:
第二确定模块,当增强现实客户端针对目标用户进行图像扫描时,确定是否从所述增强现实客户端扫描到的图像中识别出面部特征;
下发模块,如果从扫描到的图像中识别出面部特征,向所述增强现实客户端下发电子凭证;其中,所述电子凭证用于提取虚拟对象;
接收模块,接收所述增强现实客户端发送的对象分配请求;所述对象分配请求中包含用于提取业务对象的若干电子凭证;
第三确定模块,确定所述对象分配请求中包含的电子凭证的类别数是否达到预设阈值;
分配模块,如果所述对象分配请求中包含的电子凭证的类别数达到所述预设阈值,从预设的虚拟对象集合中为所述虚拟现实客户端分配虚拟对象。
本说明书中,提出了一种基于增强现实技术将为用户分配虚拟对象的线上需求,与用户使用增强现实客户端的线下人脸图像扫描以及人脸识别进行结合的全新交互模式;用户可以通过增强现实客户端针对目标用户执行图像扫描,并发起对扫描到的图像进行人脸识别;当从扫描到的图像中识别出面部特征时,可以触发增强现实服务端向增强现实客户端下发用于提取虚拟对象的电子凭证,而用户可以通过增强现实客户端对增强现实服务端下发的电子凭证进行收集;当用户收集到的电子凭证的类别数达到预设阈值时,用户可以取得虚拟对象的分配权限,增强现实客户端可以主动向增强现实服务端发送虚拟对象分配请求,并在该虚拟对象分配请求中携带类别数为所述预设阈值的电子凭证,以由增强现实服务端从预设的虚拟对象集合中为该用户分配对象,从而可以显著提升虚拟对象分配的交互性以及趣味性。
图1是本说明书一实施例示出的一种基于增强现实的虚拟对象分配 方法的处理流程图;
图2是本说明书一实施例示出的一种AR客户端的图像扫描界面的示意图;
图3是本说明书一实施例示出的另一种AR客户端的图像扫描界面的示意图;
图4本说明书一实施例示出的一种AR客户端展示获取到的虚拟福卡的示意图;
图5本说明书一实施例示出的另一种AR客户端展示获取到的虚拟福卡的示意图;
图6本说明书一实施例示出的一种用户通过AR客户端取得领红包权限的示意图;
图7是本说明书一实施例示出的一种基于增强现实的虚拟对象分配装置的逻辑框图;
图8是本说明书一实施例提供的承载所述一种基于增强现实的虚拟对象分配装置的增强现实客户端所涉及的硬件结构图;
图9是本说明书一实施例示出的另一种基于增强现实的虚拟对象分配装置的逻辑框图;
图10是本说明书一实施例提供的承载所述另一种基于增强现实的虚拟对象分配装置的增强现实服务端所涉及的硬件结构图。
本说明书旨在提出一种基于增强现实技术将为用户分配虚拟对象的线上需求,与用户使用增强现实客户端的线下人脸图像扫描以及人脸识别进行结合的全新交互模式.
在实现时,用户可以通过AR客户端针对目标用户的面部区域执行图像扫描,并由AR客户端发起针对扫描的图像进行人脸识别。
当从扫描到的图像中识别出面部特征时,可以触发AR服务端为AR 客户端下发用于提取虚拟对象的电子凭证,用户可以通过AR客户端对AR服务端下发的电子凭证进行收集。
进一步的,当用户收集到的电子凭证的类别数达到预设阈值时,此时用户可以取得虚拟对象的分配权限,AR客户端可以主动向AR服务端发送包含若干电子凭证,并且包含的电子凭证的类别数为所述预设阈值的虚拟对象分配请求,以由AR服务端从预设的虚拟对象集合中为该用户分配虚拟对象。
通过以上技术方案,使得用户通过AR客户端搭载的图像扫描功能对自身或者其它用户的面部区域进行图像扫描,就可以触发AR服务端为自己分配用于提取虚拟对象的电子凭证,并通过收集电子凭证,来取得虚拟对象的分配权限,从而可以显著提升虚拟对象分配的交互性以及趣味性。
例如,以上述“虚拟对象”为红包发放场景中的“虚拟红包”为例,用户可以使用AR客户端的对自己或者其它用户的面部区域进行图像扫描,来触发AR服务端向AR客户端下发用于提取红包的电子凭证,并通过AR客户端对AR服务端下发的电子凭证进行收集;当用户收集到的电子凭证的类别数达到预设阈值时,用户可以取得红包的分配权限,AR客户端可以主动向AR服务端发送包含若干电子凭证并且包含的电子凭证的类别数为所述预设阈值的红包分配请求,由AR服务端从预设的“红包资金池”中为该发放一定数额的红包,从而可以显著提升红包发放的交互性以及趣味性。
下面通过具体实施例并结合具体的应用场景对本说明书进行描述。
请参考图1,图1是本说明书一实施例提供的一种基于增强现实的虚拟对象分配方法,所述方法执行以下步骤:
步骤102,AR客户端针对目标用户进行图像扫描,并发起针对扫描到的图像的人脸识别;
步骤104,AR服务端确定是否从所述AR客户端扫描到的实景图像 中识别出面部特征;如果从扫描到的图像中识别出面部特征时,向所述AR客户端下发电子凭证;其中,所述电子凭证用于提取虚拟对象;
步骤106,AR客户端获取AR服务端下发的电子凭证,并将获取到的电子凭证在本地保存;
步骤108,AR客户端确定本地保存的所述电子凭证的类别数是否达到预设阈值;如果本地保存的所述电子凭证的类别数达到所述预设阈值,向所述AR服务端发送虚拟对象分配请求;其中,所述虚拟对象分配请求中携带类别数为所述预设阈值的所述电子凭证;
步骤110,AR服务端确定所述对象分配请求中包含的电子凭证的类别数是否达到预设阈值;如果所述对象分配请求中包含的电子凭证的类别数达到所述预设阈值,从预设的虚拟对象集合中为所述AR客户端分配虚拟对象。
上述AR客户端,是指基于AR技术开发的,或者集成了AR功能的客户端软件;例如,上述AR客户端,可以是集成了AR服务功能的支付宝客户端(alipay);上述AR客户端可以搭载图像扫描功能,对线下环境中的现实场景、人物进行图像扫描,并通过上述AR客户端前台的AR引擎,对后台的AR服务端推送的虚拟数据(比如一些由运营方自定义的动效画面)进行可视化渲染,将其与从扫描到的图像数据中识别出的特定特征(比如面部特征或者一些自定义的线下图形标识物)进行叠加融合。
上述AR服务端,包括面向上述AR客户端提供服务的服务器、服务器集群或者基于服务器集群构建的云平台;例如,上述AR服务端,可以是面向集成了AR服务功能的支付宝APP提供对接服务的支付平台;上述AR服务端可以基于后台的AR引擎,对上述AR客户端扫描到的图像进行图像识别(当然该图像识别过程也可以由AR客户端基于前台的AR引擎来完成);以及,对与线下业务相关的虚拟数据进行内容管理,并基于上述图像识别的结果,向上述AR客户端推送相关的虚拟数 据;比如,AR服务端可以在本地对预配置的用于提取虚拟对象的电子凭证进行内容管理,并可以基于上述图像识别的结果,向上述AR客户端推送下发电子凭证。
上述虚拟对象,可以包括任意类型的可以在线上完成分配发放的虚拟物品;例如,在示出的一种实施方式中,上述虚拟对象可以是红包发放场景中的“虚拟红包”。
上述电子凭证,可以包括任意形式的用于向AR服务端提取虚拟对象时的虚拟凭证。在实际应用中,AR服务端上可以预配置一定数量的不同类别的电子凭证,并按照一定的下发规则向特定的用户人群下发不同种类的电子凭证;其中,上述电子凭证的具体形式不进行限制,可以是字符串、数字、字符、口令、虚拟卡片,等等。
以下以上述虚拟对象为“虚拟红包”为例进行详细说明。
显然,以上述虚拟对象为“虚拟红包”为例仅为示例性的。在实际应用中,上述虚拟对象也可以是“虚拟红包”以外的,能够在线上分配发送的其它形式的虚拟物品;比如,电子凭证、电子购物券、电子优惠券,等等。
在本例中,AR服务端上可以预配置一定数量的不同类别的电子凭证,用户可以通过AR客户端针对自己或者其它用户的面部区域执行图像扫描,由AR客户端发起针对扫描的图像进行人脸识别,并在从扫描到的图像中识别出面部特征时,来触发AR服务端为AR客户端下发用于提取虚拟对象的电子凭证。
而用户可以通过AR客户端对AR服务端下发的电子凭证进行收集,当用户收集到的电子凭证的类别数达到预设阈值时,此时用户可以取得虚拟对象的分配权限,AR客户端可以主动向AR服务端发送包含若干电子凭证并且包含的电子凭证的类别数为所述预设阈值的红包分配请求,由AR服务端从预设的“红包资金池”中为该发放一定数额的红包。
例如,在示出的一种“集五福分大奖”的红包发放场景中,上述电 子凭证可以包括“寿康福”、“友爱福”、“富强福”、“家和福”以及“财旺福”等5类虚拟福卡。用户可以通过AR客户端对自己或者其它用户的人脸区域进行图像扫描,来参与“福卡抽奖”,来触发AR服务端向AR客户端下发虚拟福卡,并通过AR客户端来收集AR服务端下发的虚拟福卡。当用户收集到以上示出的5类虚拟福卡后,会获得领取红包的权限,由AR服务端为该用户发放红包。
以下结合“电子凭证的收集”、“电子凭证的类别数验证”以及“红包的发放”三个阶段,对本说明书的技术方案进行详细说明。
1)“电子凭证的收集”
在初始状态下,在AR服务端上可以预配置一定数量的不同类别的电子凭证,并由AR服务端对预配置完成的电子凭证进行集中管理;
其中,预配置的上述电子凭证,是用户取得红包发放权限的唯一凭证;预配置的上述电子凭证的数量以及类别数,可以基于实际的需求进行设置。
在AR服务端上,除了可以预配置一定数量的不同类别的电子凭证以外,在AR服务端上还可以为预配置的电子凭证,绑定相应的发放条件。
在本例中,上述发放条件具体可以包括,是否从AR客户端扫描到的图像中识别出面部特征。即只要从AR客户端扫描到的图像中识别出面部特征,AR服务端就可以立即向该AR客户端发放电子凭证。
在这种情况下,用户可以通过AR客户端搭载的图像扫描功能,对自己或者其它用户的面部区域进行图像扫描,来触发AR服务端向AR客户端下发电子凭证,并通过AR客户端来收集AR服务端下发的电子凭证,以此来取得红包的发放权限。
在实现时,上述AR客户端默认可以面向用户提供一个图像扫描功能的功能选项;例如,该功能选项具体可以是该AR客户端的用户主页中面向用户提供的一个“扫一扫”的功能按钮。用户在需要使用AR客 户端对自己或者其它用户的人脸区域进行图像扫描时,可以通过诸如“点击”的方式触发该功能选项,进而进入到AR客户端的图像扫描界面。
请参见图2,图2为本例示出的一种AR客户端的图像扫描界面的示意图。
如图2所示,上述AR客户端的图像扫描扫描界面中,除了可以面向用户提供传统的“扫码”等功能按钮以外,还可以提供一“AR红包”的功能按钮,用户可以通过诸如“点击”的方式,来触发该功能按钮,进入到AR客户端的图像扫描界面,完成对自己或者其它用户的面部区域的图像扫描操作;其中,上述图像扫描界面,具体可以是一个实时的扫描界面,即该图像扫描界面的背景图片可以实时呈现被扫描用户的面部图像。
请继续参见图2,当上述AR客户端进入如图2所示出的图像扫描界面后,可以进一步在该图像扫描页面中输出一个面部扫描提示。
其中,上述面部扫描提示,具体可以包括用于提示用户对面部区域进行图像扫描的扫描框,以及在图像扫描页面中静态显示的提示文本。
如图2所示,上述面部扫描提示,可以包括一在画面正中显示的人脸形状的扫描框,以及在该扫描框下方静态常驻输出的一“对准目标开始扫描”的提示文本;
其中,该提示文本的内容,在实际应用中,可以进行动态更新;例如,当AR客户端对人脸区域进行扫描的过程中,可以将该提示文本更新为“正在识别脸部”;当AR客户端通过搭载的人脸识别算法,或者通过AR服务端从扫描到的图像中未识别出面部特征时,可以将该提示文本更新为“请保持镜头稳定”、“请重新扫描”、“换个人试试”,或者其它类似的提示用户重新进行扫描的文本提示。
当上述AR客户端在上述图像扫描页面中输出上述图像扫描提示后,此时用户可以将AR客户端所在的AR终端设备(比如智能手机,或者AR眼镜等)上搭载的摄像头,对准自己或者其它用户的面部区域。而 上述AR客户端可以调用上述AR终端设备上搭载的摄像头进行实时的图像扫描,并发起对扫描到的图像进行人脸识别。
其中,AR客户端在对扫描到的图像进行人脸识别时,可以基于AR客户端上搭载的人脸识别模型来完成,也可以通过由AR客户端将扫描到的图像实时上传至AR服务端,由AR服务端基于其本地搭载的人脸识别模型来完成。
在示出的一种实施方式中,上述AR客户端可以在本地搭载人脸识别模型,当通过调用AR终端设备的摄像头扫描到图像后,可以继续调用该图像识别模型对该扫描搭配的图像进行人脸识别,并将识别结果上传至AR服务端。
在示出的另一种实施方式中,上述AR客户端也可以不在本地搭载图像识别模型,而是将扫描到的图像实时上传至AR服务端,由上述AR服务端基于其本地搭载的图像识别模型对该图像进行人脸识别,然后向AR客户端返回识别结果。
其中,需要说明的是,上述图像识别模型中搭载的图像识别算法,在本例中不进行特别限定,本领域技术人员在将本说明书的技术方案付诸实现时,可以参考相关技术中的记载;例如,在一种实现方式中,上述人脸识别模型可以是基于神经网络结合大量的人脸图像样本训练成的深度学习模型。
当从AR客户端扫描到的图像中识别出面部特征后,为了增强互动性,AR客户端可以在扫描到的面部特征对应的位置上,增强显示预设的动效画面,用以提示用户当前从扫描到的图像中扫描到面部特征。例如,在一种实现方式中,可以在扫描到的面部特征周围,增强显示相应的氛围贴图。
在这种情况下,AR客户端可以获取预配置的动效画面,然后基于AR客户端前台的AR引擎,对该动态画面进行可视化渲染,并根据当前识别出的面部特征在图像扫描界面中的相对位置,将该动效画面与识别 出的面部特征进行叠加融合,向用户增强显示。
其中,需要说明的是,上述动效画面,具体可以是预先配置在AR客户端本地,也可以是由AR服务端进行动态下发,在本说明书中不进行特别限定。
当完成针对AR客户端扫描到的图像的人脸识别后,此时AR服务端可以基于识别结果,进一步确认是否从AR客户端扫描到的实景图像中识别出面部特征;如果经确认从AR客户端扫描到的图像中成功识别出面部特征,此时AR服务端可以从预配置的电子凭证中,向上述AR客户端下发相应的电子凭证。
例如,在一种实现方式中,AR服务端在向上述AR客户端下发电子凭证时,可以从本地预配置完成的电子凭证中,随机向该AR客户端下发电子凭证;在另一种实现方式中,除了完全随机的下发方式以外,AR服务端也可以基于预设的下发规则,从本地预配置完成的电子凭证中向上述AR客客户端下发电子凭证。
其中,需要说明的是,AR服务端在向AR客户端下发电子凭证时,可以直接将电子凭证下发至上述AR客户端,也可以仅向上述AR客户端下发一个与电子凭证对应的唯一标识,并在AR客户端本地保存上述唯一标识与电子凭证的对应关系,后续AR客户端可以通过该唯一标识来识别对应的电子凭证;
例如,假设电子凭证为虚拟卡片,那么服务端可以直接将该虚拟卡片下发至AR客户端,也可以为该虚拟卡片生成一个唯一对应的标识,将该标识下发至AR客户端,AR客户端在接收到该标识后,可以基于保存的上述对应关系,将该标识渲染为对应的虚拟卡片在其本地进行展示。
在本例中,上述发放条件除了可以包括是否从AR客户端扫描到的图像中识别出面部特征以外,在实际应用中,为了提升电子凭证发放的可交互性和安全性,也可以在人脸识别的基础上,进一步引入手势识别或者表情识别等机制。即在从AR客户端扫描到的图像中识别出面部特 征后,可以进一步发起对扫描到的图像的表情识别或者手势识别,只有在扫描到的图像中进一步识别出了预设的手势或者预设的表情后,AR服务端才可以向该AR客户端发放电子凭证。
在示出的一种实施方式中,可以在人脸识别的基础上,进一步引入手势识别机制。在这种情况下,当从AR客户端扫描到的图像中识别出面部特征后,此时AR客户端可以在上述图像扫描页面中进一步输出一个手势扫描提示。
上述手势扫描提示,具体用于提示用户执行预设的手势,可以包括由AR服务端预设置的手势对应的手势图像,以及在图像扫描页面中静态显示的提示文本。
其中,上述预设的手势的具体类型,在本说明书中不进行特别限定,在实际应用中,可以基于实际的交互需求,来自定义设置。例如,在一种实现方式中,上述预设的手势具体可以是用户在镜头前挥手的手势。
请参见图3,图3为本例示出的另一种AR客户端的图像扫描界面的示意图。
如图3所示,以上述电子凭证为“虚拟福卡”为例,上述手势扫描提示,可以包括一在画面下方显示的一“请朋友做手势,有机会得福卡”的文本提示,以及在该文本提示下方显示的与预先设定的手势对应的手势图片。
当上述AR客户端在上述图像扫描页面中输出上述手势扫描提示后,此时被扫描用户可以在上述手势扫描提示的提示下,在AR客户端所在的AR终端设备搭载的摄像头的镜头前,执行相应的手势。而上述AR客户端可以调用上述AR终端设备上搭载的摄像头进行实时的图像扫描,并发起对扫描到的图像进行手势识别。
其中,AR客户端在对扫描到的图像进行手势识别时,仍然可以基于AR客户端上搭载的手势识别模型来完成,也可以通过由AR客户端将扫描到的图像实时上传至AR服务端,由AR服务端基于其本地搭载 的手势识别模型来完成,不再赘述。
另外,上述手势识别模型中搭载的图像识别算法,在本例中也不进行特别限定,本领域技术人员在将本说明书的技术方案付诸实现时,可以参考相关技术中的记载。
当完成针对AR客户端扫描到的图像的手势识别后,此时AR服务端可以基于识别结果,进一步确认是否从AR客户端扫描到的实景图像中识别出了预设的手势;如果经确认从AR客户端扫描到的图像中成功识别出预设的手势时,此时AR服务端可以从预配置的电子凭证中,向上述AR客户端下发相应的电子凭证。
在示出的另一种实施方式中,也可以在人脸识别的基础上,进一步引入表情识别机制。在这种情况下,当从AR客户端扫描到的图像中识别出面部特征后,此时AR客户端可以在上述图像扫描页面中进一步输出一个表情扫描提示。
上述表情扫描提示,具体用于提示用户执行预设的表情,可以包括由AR服务端预设置的一个与预设置的表情对应的表情图片,以及在图像扫描页面中静态显示的提示文本。
当然,对于一些常规的表情(例如微笑表情),上述表情扫描提示也可以仅包括一个静态显示的提示文本即可。
其中,上述预设的表情,在本说明书中也不进行特别限定,在实际应用中也可以基于实际的交互需求,来自定义设置;
例如,以上述电子凭证为“虚拟福卡”,以及上述预设的表情为微笑表情为例,上述表情扫描提示,可以包括一在画面下方显示的一“请朋友微笑,有机会得福卡”的文本提示,以及在该文本提示下方显示的预先设定的笑脸图片。
当上述AR客户端在上述图像扫描页面中输出上述表情扫描提示后,此时被扫描用户可以在上述表情扫描提示的提示下,在AR客户端所在的AR终端设备搭载的摄像头的镜头前,作出相应的表情。而上述AR 客户端可以调用上述AR终端设备上搭载的摄像头进行实时的图像扫描,并发起对扫描到的图像进行表情识别。
其中,AR客户端在对扫描到的图像进行表情识别时,仍然可以基于AR客户端上搭载的表情识别模型来完成,也可以通过由AR客户端将扫描到的图像实时上传至AR服务端,由AR服务端基于其本地搭载的表情识别模型来完成,不再赘述。
另外,上述手势识别模型中搭载的图像识别算法,在本例中也不进行特别限定,本领域技术人员在将本说明书的技术方案付诸实现时,可以参考相关技术中的记载;例如,以上述预设置的表情为微笑表情为例,上述AR客户端或者AR服务端上可以预先搭载用于进行微笑表情识别的图像识别算法。
当完成针对AR客户端扫描到的图像的表情识别后,此时AR服务端可以基于识别结果,进一步确认是否从AR客户端扫描到的实景图像中识别出了预设的表情;如果经确认从AR客户端扫描到的图像中成功识别出预设的表情时,此时AR服务端可以从预配置的电子凭证中,向上述AR客户端下发相应的电子凭证。
通过在人脸识别的基础上,进一步引入手势识别或者表情识别的机制:
一方面,在进一步引入手势识别或者表情识别后,用户只有在AR终端设备搭载的摄像头的镜头前,作出预设的手势或者表情后,才能够触发AR服务端为自身发放电子凭证,因此可以提升用户与AR服务端之间的可交互性;
另一方面,在进一步引入手势手别或者表情识别后,相当于对被扫描用户执行了一次活体检测,从而可以避免通过AR客户端来扫描一些人脸图像,来取得AR服务端发放的电子凭证,能够提升电子凭证发放的公平性。
在本例中,为了防止电子凭证被仿冒,AR服务端在向AR客户端下 发电子凭证时,还可以通过预设的加密算法对电子凭证进行加密,AR客户端在收到加密的电子凭证后,可以采用与上述加密算法对应的解密密钥对电子凭证进行解密。通过这种方式,可以避免电子凭证被用户仿冒。
其中,需要说明的是,AR服务端向AR客户端下发的电子凭证的类别数,具体可以小于上述预设阈值;
例如,以上述电子凭证为虚拟卡片为例,假设用户需要收集到5种不同种类的虚拟卡片才能够获得虚拟红包的分配权限,那么服务端在向AR客户端下发虚拟卡片时,下发的虚拟卡片的类别数可以是一个小于5类的数字,比如3类。通过这种方式,可以有效的对具有红包分配权项的用户数量进行控制。
另外,需要指出的是,服务端在向AR客户端下发电子凭证使所采用的下发规则,可以根据实际的需求进行自定义。
在示出的一种实施方式中,上述下发规则可以包括针对特定人群有选择的进行下发。
例如,在一种实现方式中,AR服务端可以面向活跃度较高的用户,优先下发电子凭证;或者,针对活跃度较高的用户,优先下发一些数量较少的电子凭证。
通过这种方式,可以使得活跃度较高的用户可以更容易的获得电子凭证,或者更容易获得比较“少见”的电子凭证,从而将虚拟对象的获取权限向高活跃度的用户倾斜。
其中,需要指出的是,AR服务端在计算该用户的活跃度时,该活跃度可以基于日常的与用户的活跃度相关的参数进行表征;例如,在实际应用中,当用户的好友数量,或者发起的业务数越多,表明该用户越活跃。因此上述参数可以包括用户的好友数量以及发起的业务数等参数,AR服务端在计算该用户的活跃度时,可以统计该用户的好友数量,或者统计该用户发起的业务数量,然后进行阈值化处理,来判定该用户是 否为活跃用户。
当然,在实际应用中,用户除了可以通过AR客户端,针对自己或者其它用户的面部区域进行图像扫描,来触发AR服务端向AR客户端下发电子凭证的方式,来收集电子凭证以外,在实际应用中,也可以通过获取其它用户通过相同的AR客户端分享的电子凭证的方式,来收集电子凭证。
例如,以上述电子凭证为虚拟卡片为例,上述AR客户端可以是集成了AR服务功能的支付宝客户端,在该支付宝客户端可以提供分享接口,其它用户可以通过自己的支付宝客户端将虚拟卡片在支付宝客户端内部分享给该用户,也可以将虚拟卡片通过支付宝客户端提供的分享接口,以链接或者可点击的图文的形式分享至第三方的社交平台,或者即时通信软件,该用户在收到该其它用户分享的虚拟卡片后,可以通过点击链接或者图文然后跳转到支付宝客户端的界面中,来对第二用户分享的虚拟卡片进行收取并在本地保存。
在本例中,AR客户端通过扫描自己或者其它用户的面部区域的方式,获取到AR服务端下发的电子凭证后:
一方面,为了增强互动性,AR客户端可以在扫描到的面部特征对应的位置上,增强显示预设的动效画面,用以提示用户当前获取到了电子凭证。
例如,在一种实现方式中,可以在扫描到的面部特征周围,循环播放预先设定的动效画面;比如,在一个例子中,以上述电子凭证为虚拟福卡为例,可以基于扫描到的面部特征确定出被扫描用户的头部位置,然后在被扫描用户的头部位置“佩戴”相应的节日冠冕,并在用户的面部周围,动态的播放福卡从天而降的动画效果。
另一方面,当上述用于提示用户获得了电子凭证的动效画面显示完毕后,此时AR客户端还可以为获取到的电子凭证生成对应的展示图片,然后基于AR客户端前台的AR引擎,对该展示图片进行可视化渲染, 对识别出的面部特征与该展示图片进行叠加融合,将该展示图片在识别出的面部特征在图像扫描界面中对应的位置上进行增强显示,并在该展示图片上输出一个收取选项。
其中,上述收取选项,具体可以是一个用于收取对应于该展示图片的电子凭证的功能按钮,用户可以通过诸如“点击”等方式触发该功能按钮,来收取对应于该展示图片的电子凭证,将生成的所述展示图片添加至与所述电子凭证对应的本地展示位置,进而在本地保存该电子凭证;
其中,AR客户端为电子凭证生成的展示图片,可以与电子凭证的种类相对应,即不同种类的电子凭证所对应的展示图片互不相同。上述展示图片上展示的内容,不进行特别限制。同时,AR客户端在用户界面中提供的展示位置,也可以与电子凭证的种类相对应,不同的展示位置可以分别对应不同种类的电子凭证。
另外,AR客户端在将生成的展示图片添加到对应的展示位置后,还可以在该展示位置上标注当前获取到的与该展示位置对应的电子凭证的数量,例如,可以在展示位的右上角生成一个数字提醒。而且,当某一种类的电子凭证的数量发生变化后,AR客户端还可以基于当前该电子凭证的实际剩余数量对在与该电子凭证对应的展示位置上标注的数量进行更新。
在本例中,对于获取到的电子凭证,用户还可以分享给其它用户。
在示出的一种实施方式中,该用户可以通过触发展示位置中添加的展示图片,来将与该展示图片对应的电子凭证分享给其它用户。
例如,当该用户想要对某一类电子凭证进行分享,则可以通过点击等触发操作来触发该电子凭证对应的展示位置;当展示位置触发后,可以将与该展示位置对应的展示图片显示在用户界面中。
此时,在该展示图片中可以提供一分享选项,比如当该电子凭证为虚拟卡片时,该分享选项可以是一个“送一张给朋友”的触发选项;当第该用户通过点击等预设触发操作触发该触发选项后,AR客户端可以 输出一个分享方式的选择界面,在该选择界面中可以提供若干种可供用户选择的目标应用。此时用户可以在该选择界面中选择相应的目标应用,然后在该目标应用中进一步选择本次要分享的目标用户;当选择完成后,AR客户端可以将该电子凭证发送至用户选择的目标用户。
其中,该用户在向目标用户分享电子凭证时,如果目标用户为该用户在AR客户端中的联系人或者好友时,该用户可以在该AR客户端内部将电子凭证分享给目标用户。在这种情况下,AR客户端可以通过AR服务端将需要分享的电子凭证传输至目标用户的AR客户端。
当然,如果目标用户并非为该用户在AR客户端中的联系人或者好友,而是该用户在第三方的客户端中的联系人或者好友时,该用户也可以通过该第三方的客户端将电子凭证分享给目标用户。在这种情况下,AR客户端可以通过AR服务端为需要分享的电子凭证生成一个对应的访问链接,然后将生成的访问链接通过该第三方的客户端分享给目标用户。
2)“电子凭证的类别数验证”
在本例中,用户在通过以上描述的两种收集途径收集到电子凭证后,AR客户端还可以在后台实时的判断本地保存的电子凭证的类别数,是否达到预设阈值;如果达到该预设阈值,此时该用户已经可以获得虚拟红包的分配权限;在这种情况下,AR客户端可以向服务端发送红包分配请求(相当于上述虚拟对象分配请求),并在该红包分配请求中携带若干电子凭证。
其中,在示出的一种实施方式中,该红包分配请求中携带的电子凭证的数量和类别数,可以均为上述预设阈值,从而服务端在收到该红包分配请求后,可以获取该红包分配请求中携带的电子凭证,然后进行验证。
需要说明的是,AR客户端向AR服务端发送红包分配请求的操作,可以由用户手动触发,也可以是由AR客户端在判断出收集到的电子凭 证的类别数达到第一数量时自动触发。
例如,在一种情况下,当AR客户端在后台判断出收集到的电子凭证达到第一数量时,可以自动向AR服务端发起上述红包分配请求。在另一种情况下,可以在与各电子凭证对应的展示位置上,提供一个用于触发AR客户端向AR服务端发起对象分配请求的触发按钮,当AR客户端在后台判断出收集到的电子凭证达到预设阈值时,可以向用户输出提示,以提示用户当前已经可以取得红包分配的权限,然后AR客户端可以响应于用户针对该触发按钮的触发操作,向AR服务端发送上述红包分配请求。
在本例中,AR服务端在接收到AR客户端发送的红包分配请求中携带的电子凭证的类别数进行验证,如果AR服务端在对该红包分配请求中携带的电子凭证进行验证后,判断出该红包分配请求中携带的电子凭证的类别数达到预设阈值,此时可以授予该用户分配红包的权限,并立即基于预设的分配规则从预设的“红包资金池”(相当于上述预设的虚拟对象集合)中为该用户发放一定数额的红包,或者在指定的红包发放时间到达后,基于预设的分配规则从预设的“红包资金池”中为该用户发放一定数额的红包。
3)“红包的发放”
在本例中,AR服务端在从预设的“红包资金池”为该用户发放红包时所采用的分配规则,可以基于实际的业务需求进行制定。
在示出的一种实施方式中,AR服务端可以统计所有授予了红包分配权限的用户数量,并基于统计出的用户数量计算“红包资金池”中待发放的而红包总额的平均分配数;此时,计算出的该平均分配数即为需要向每一个用户发放的红包的数量。在这种情况下,AR服务端可以基于计算得到的平均分配数,从“红包资金池”中为每一个用户发放对应金额的红包。
在示出的另一种实施方式中,AR服务端在为该用户发放红包时, 也可以从“红包资金池”为该用户随机抽取一定数额的红包;例如,AR服务端可以基于预设的随机算法结合“红包资金池”中待发放的红包的总数额,为该用户计算出一个随机数,然后按照该随机数向该用户发放相应数额的红包。
当然,除了以上示出的分配规则,在实际应用中还可以由其它分配规则,在本说明书中不再一一列举。
在本例中,当AR服务端为该用户成功发放红包后,还可以向该用户所在的AR客户端发送一个分配结果,该AR客户端在收到该分配结果后,可以将该分配结果向该用户展示。
其中,该分配结果可以包括发放的红包的金额、红包的发送方、红包的其它接收方、其它接收方的数量、以及红包的分配规则等信息中的一个或者多个。
以下结合具体的应用场景对以上实施例中的技术方案进行详细描述。
在示出的一种“集五福分大奖”的红包发放场景中,上述AR客户端可以是集成了AR服务功能的支付客户端。上述AR服务端可以是与支付宝客户端对接的,集成了AR服务功能的支付平台。上述虚拟对象集合可以是指与支付平台合作的企业的企业资金账户,该企业资金账户下的资金即为该企业可用于向用户发放红包的资金总额。
上述电子凭证可以包括“寿康福”、“友爱福”、“富强福”、“家和福”以及“财旺福”等5类虚拟福卡。
一方面,用户可以通过AR客户端扫描自己或者其它用户的面部区域,并在从扫描到的图像中识别出面部特征时,触发AR服务端向AR客户端下发虚拟福卡。
另一方面,用户可以通过AR客户端来收集AR服务端下发的虚拟福卡,并在收集到这5类虚拟福卡后,会相应的获得领取红包的权限,由AR服务端为该用户发放红包。
其中,在以上5类虚拟福卡中,支付平台可以设定一定数量种类的 “少见”虚拟福卡,比如2类。这类“少见”虚拟福卡的发放与否取决于支付平台基于用户的数据计算出的该用户的活跃度,即可以优先将这类“少见”虚拟卡片发放给活跃度较高的用户。支付平台在针对普通用户下发虚拟福卡时,可以默认仅为用户下发“寿康福”、“友爱福”、“富强福”、“家和福”以及“财旺福”等5类虚拟福卡中的2~3类卡片。
当用户在如图2所示出的图像扫描界面中输出的人脸扫描提示的提示下,完成针对自己或者其它用户的面部区域的图像扫描后,AR客户端可以发起针对扫描到的图像的人脸识别,并在从扫描到的图像中识别出面部特征时,在该面部特征对应的位置上增强现实相应的动效画面,以提示用户识别识别出了面部特征。
当人脸识别完成后,AR服务端可以基于该识别结果,来确定是否从AR客户端扫描到的图像中识别出面部特征;
在一种实施方式中,如果从AR客户端扫描到的图像中识别出面部特征,此时AR服务端可以直接通过该AR客户端向该用户下发虚拟福卡。
在另一种实施方式中,在人脸识别的基础上,可以进一步引入手势识别或者表情识别机制。当从AR客户端扫描到的图像中识别出面部特征后,AR客户端可以通过在图像扫描界面中继续输出手势扫描提示或者表情扫描提示,来提示用户作出预设手势或者预设表情,并发起对扫描到的图像继续进行手势识别或者表情识别。当手势识别或者表情识别完成后,AR服务端可以基于该识别结果,来确定是否从AR客户端扫描到的图像中识别出预设手势或者预设表情;如果从AR客户端扫描到的图像中识别出预设手势或者预设表情,此时AR服务端可以通过该AR客户端向该用户下发虚拟福卡。
当AR客户端收到AR服务端下发的虚拟福卡后:
一方面,可以在该面部特征对应的位置上增强现实相应的动效画面,以提示用户收到了虚拟福卡;
另一方面,请参见图4,当AR客户端收到AR服务端下发的虚拟福卡后,可以基于AR客户端前台的AR引擎,对AR服务端下发的虚拟福卡进行可视化渲染,对识别出的面部特征与该展示图片进行叠加融合,将该展示图片在识别出的面部特征在图像扫描界面中对应的位置上进行增强显示(图4示出的为在识别出的面部区域上方覆盖显示一张“富强福”的虚拟福卡),并在该展示图片上输出一个收取选项。
在本例中,用户可以通过诸如“点击”的方式触发该“收下福卡”的功能按钮,触发上述AR客户端将该展示图片在活动界面中展示。其中,客户端为不同种类的虚拟卡片生成的展示图片上的内容互不相同。客户端提供的展示位置可以与虚拟卡片的种类一一对应,每一种虚拟卡片可以分别对应一个展示位置。
请参见图5,AR客户端可以在用户界面中为“寿康福”、“友爱福”、“富强福”、“家和福”以及“财旺福”等5类虚拟福卡分别提供一个展示位置,当客户端将展示图片添加至对应的展示位置后,客户端还可以在该展示位置的预设位置(图5示出的为右上角)上标注当前获取到的该虚拟卡片的数量,同时当与该展示位置对应的虚拟卡片的数量发生变化时,客户端还可以对该数量进行更新。
当用户通过获取支付平台下发的虚拟卡片,以及获取其它好友分享的虚拟卡片,成功收集到“寿康福”、“友爱福”、“富强福”、“家和福”以及“财旺福”等5类虚拟卡片后,此时用户已具有领取“五福”红包的权限。
请参见图6,AR客户端还可以在用户界面中提供一个“五福”的展示位置,在当用户成功收集到“寿康福”、“友爱福”、“富强福”、“家和福”以及“财旺福”等5类虚拟卡片后,可以将该“五福”展示位置突出显示,比如高亮显示,以提示用户当前已具有领取“五福”红包的权限。
当用户获取到领取“五福”红包的权限后,用户可以通过手动触发 该“五福”展示位置,触发客户端向支付平台发送红包分配请求,或者由客户端自动向支付平台发送红包分配请求。此时,该红包分配请求中可以携带已经收集到的5种虚拟卡,支付平台收到该请求后,可以对该请求中虚拟卡的种类进行验证,如果支付平台验证后确定该请求中携带5种虚拟卡,支付平台可以立即从合作的企业的企业资金账户中为该用户发放一定金额的“红包”,或者在发放时间到达时向用户发送一定金额的“红包”。例如,支付平台可以将统计所有获得领取“五福”红包权限的用户的数量,然后平均分配企业资金账户中可用于发放的红包金额。
当用户领取“红包”后,支付平台可以向客户端推送一个红包发放结果,客户端在收到该发放结果后,可以在活动界面中向用户展示。
其中,在该红包发放结果中展示的信息可以包含本次领取到的金额,本次红包的发送者(与支付平台合作的企业名称),本次领取“五福”红包的总人数,本次“五福”红包的分配规则,等等。
与上述方法实施例相对应,本说明书还提供了装置的实施例。
请参见图7,本说明书提出一种基于增强现实的虚拟对象分配装置70,应用于AR客户端;请参见图8,作为承载所述基于增强现实的虚拟对象分配装置70的AR客户端所涉及的AR终端的硬件架构中,通常包括CPU、内存、非易失性存储器、网络接口以及内部总线等;以软件实现为例,所述基于增强现实的虚拟对象分配装置70通常可以理解为加载在内存中的计算机程序,通过CPU运行之后形成的软硬件相结合的逻辑装置,所述装置70包括:
扫描模块701,针对目标用户进行图像扫描,并发起针对扫描到的图像的人脸识别;
获取模块702,获取在从扫描到的图像中识别出面部特征时由增强现实服务端下发的电子凭证,并将获取到的电子凭证在本地保存;其中,所述电子凭证用于提取虚拟对象;
第一确定模块703,确定本地保存的所述电子凭证的类别数是否达到预设阈值;
发送模块704,如果本地保存的所述电子凭证的类别数达到所述预设阈值,向所述增强现实服务端发送虚拟对象分配请求,其中,所述虚拟对象分配请求中携带类别数为所述预设阈值的所述电子凭证,以使得所述增强现实服务端基于该虚拟对象分配请求从预设的虚拟对象集合中分配虚拟对象。
在本例中,所述获取模块702:
在从扫描到的图像中识别出面部特征时,向用户输出手势提示信息;其中,所述手势提示信息用于提示用户执行预设手势;
发起针对扫描到的图像的手势识别,并在从扫描到的图像中识别出所述预设手势时,获取由增强现实服务端下发的电子凭证。
在本例中,所述获取模块702:
在从扫描到的图像中识别出面部特征时,向用户输出表情提示信息;其中,所述表情提示信息用于提示用户执行预设表情;
发起针对扫描到的图像的表情识别,并在从扫描到的图像中识别出所述预设表情时,获取由增强现实服务端下发的电子凭证。
在本例中,还包括:
显示模块705(图7中未示出),在从扫描到的图像中识别出面部特征,以及在获取到增强现实服务端下发的电子凭证时,在所述面部特征对应的位置上增强显示预设的动效画面。
在本例中,所述虚拟对象为虚拟红包。
请参见图9,本说明书提出一种基于增强现实的虚拟对象分配装置90,应用于AR服务端;请参见图10,作为承载所述基于增强现实的虚拟对象分配装置90的AR服务端所涉及的硬件架构中,通常包括CPU、内存、非易失性存储器、网络接口以及内部总线等;以软件实现为例,所述基于增强现实的虚拟对象分配装置90通常可以理解为加载在内存 中的计算机程序,通过CPU运行之后形成的软硬件相结合的逻辑装置,所述装置90包括:
第二确定模块901,当增强现实客户端针对目标用户进行图像扫描时,确定是否从所述增强现实客户端扫描到的图像中识别出面部特征;
下发模块902,如果从扫描到的图像中识别出面部特征,向所述增强现实客户端下发电子凭证;其中,所述电子凭证用于提取虚拟对象;
接收模块903,接收所述增强现实客户端发送的对象分配请求;所述对象分配请求中包含用于提取业务对象的若干电子凭证;
第三确定模块904,确定所述对象分配请求中包含的电子凭证的类别数是否达到预设阈值;
分配模块905,如果所述对象分配请求中包含的电子凭证的类别数达到所述预设阈值,从预设的虚拟对象集合中为所述虚拟现实客户端分配虚拟对象。
在本例中,所述下发模块902:
如果从扫描到的图像中识别出面部特征,进一步确定是否从所述增强现实客户端扫描到的图像中识别出预设手势;
如果从所述增强现实客户端扫描到的图像中识别出预设手势,向所述增强现实客户端下发电子凭证。
在本例中,所述下发模块902:
如果从扫描到的图像中识别出面部特征,进一步确定是否从所述增强现实客户端扫描到的图像中识别出预设表情;
如果从所述增强现实客户端扫描到的图像中识别出预设表情,向所述增强现实客户端下发电子凭证。
在本例中,所述下发模块902进一步:
向所述增强现实客户端下发预设的动效画面,以使所述增强现实客户端在从扫描到的图像中识别出面部特征时,以及在获取到由所述增强现实服务端下发的电子凭证时,在所述面部特征对应的位置上增强显示 所述动效画面。
在本例中,所述增强现实服务端为所述增强现实客户端下发的电子凭证的类别数小于所述预设阈值。
在本例中,所述虚拟对象为虚拟红包。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本说明书方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
上述实施例阐明的装置、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机,计算机的具体形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。
与上述方法实施例相对应,本说明书还提供了一种电子设备的实施例。该电子设备包括:处理器以及用于存储机器可执行指令的存储器;其中,处理器和存储器通常通过内部总线相互连接。在其他可能的实现方式中,所述设备还可能包括外部接口,以能够与其他设备或者部件进行通信。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的虚拟对象分配的控制逻辑对应的机器可执行指令,所述处理器被促使:
针对目标用户进行图像扫描,并发起针对扫描到的图像的人脸识别;
获取在从扫描到的图像中识别出面部特征时由增强现实服务端下发的电子凭证,并将获取到的电子凭证在本地保存;其中,所述电子凭证 用于提取虚拟对象;
确定本地保存的所述电子凭证的类别数是否达到预设阈值;
如果本地保存的所述电子凭证的类别数达到所述预设阈值,向所述增强现实服务端发送虚拟对象分配请求,其中,所述虚拟对象分配请求中携带类别数为所述预设阈值的所述电子凭证,以使得所述增强现实服务端基于该虚拟对象分配请求从预设的虚拟对象集合中分配虚拟对象。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的虚拟对象分配的控制逻辑对应的机器可执行指令,所述处理器还被促使:
在从扫描到的图像中识别出面部特征时,向用户输出手势提示信息;其中,所述手势提示信息用于提示用户执行预设手势;
发起针对扫描到的图像的手势识别,并在从扫描到的图像中识别出所述预设手势时,获取由增强现实服务端下发的电子凭证。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的虚拟对象分配的控制逻辑对应的机器可执行指令,所述处理器还被促使:
在从扫描到的图像中识别出面部特征时,向用户输出表情提示信息;其中,所述表情提示信息用于提示用户执行预设表情;
发起针对扫描到的图像的表情识别,并在从扫描到的图像中识别出所述预设表情时,获取由增强现实服务端下发的电子凭证。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的虚拟对象分配的控制逻辑对应的机器可执行指令,所述处理器还被促使:
在从扫描到的图像中识别出面部特征,以及在获取到增强现实服务端下发的电子凭证时,在所述面部特征对应的位置上增强显示预设的动效画面。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实 的虚拟对象分配的控制逻辑对应的机器可执行指令,所述处理器还被促使:
为获取到的电子凭证生成对应的展示图片;
将生成的所述展示图片在图像扫描界面中向用户输出;其中,所述展示图片上包括对应于所述电子凭证的收取选项;
响应于用户针对所述收取选项的触发操作,将生成的所述展示图片添加至与所述电子凭证对应的本地展示位置;其中,为不同类别的电子凭证生成的展示图片互不相同;不同类别的电子凭证对应的展示位置互不相同。
与上述方法实施例相对应,本说明书还提供了另一种电子设备的实施例。该电子设备包括:处理器以及用于存储机器可执行指令的存储器;其中,处理器和存储器通常通过内部总线相互连接。在其他可能的实现方式中,所述设备还可能包括外部接口,以能够与其他设备或者部件进行通信。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的虚拟对象分配的控制逻辑对应的机器可执行指令,所述处理器被促使:
当增强现实客户端针对目标用户进行图像扫描时,确定是否从所述增强现实客户端扫描到的图像中识别出面部特征;
如果从扫描到的图像中识别出面部特征,向所述增强现实客户端下发电子凭证;其中,所述电子凭证用于提取虚拟对象;
接收所述增强现实客户端发送的对象分配请求;所述对象分配请求中包含用于提取业务对象的若干电子凭证;
确定所述对象分配请求中包含的电子凭证的类别数是否达到预设阈值;
如果所述对象分配请求中包含的电子凭证的类别数达到所述预设阈值,从预设的虚拟对象集合中为所述虚拟现实客户端分配虚拟对象。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实 的虚拟对象分配的控制逻辑对应的机器可执行指令,所述处理器还被促使:
如果从扫描到的图像中识别出面部特征,进一步确定是否从所述增强现实客户端扫描到的图像中识别出预设手势;
如果从所述增强现实客户端扫描到的图像中识别出预设手势,向所述增强现实客户端下发电子凭证。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的虚拟对象分配的控制逻辑对应的机器可执行指令,所述处理器还被促使:
如果从扫描到的图像中识别出面部特征,进一步确定是否从所述增强现实客户端扫描到的图像中识别出预设表情;
如果从所述增强现实客户端扫描到的图像中识别出预设表情,向所述增强现实客户端下发电子凭证。
在本实施例中,通过读取并执行所述存储器存储的与基于增强现实的虚拟对象分配的控制逻辑对应的机器可执行指令,所述处理器还被促使:
向所述增强现实客户端下发预设的动效画面,以使所述增强现实客户端在从扫描到的图像中识别出面部特征时,以及在获取到由所述增强现实服务端下发的电子凭证时,在所述面部特征对应的位置上增强显示所述动效画面。
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本说明书的其它实施方案。本说明书旨在涵盖本说明书的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本说明书的一般性原理并包括本说明书未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本说明书的真正范围和精神由下面的权利要求指出。
应当理解的是,本说明书并不局限于上面已经描述并在附图中示出 的精确结构,并且可以在不脱离其范围进行各种修改和改变。本说明书的范围仅由所附的权利要求来限制。
以上所述仅为本说明书的较佳实施例而已,并不用以限制本说明书,凡在本说明书的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本说明书保护的范围之内。
Claims (25)
- An augmented reality-based virtual object allocation method, applied to an augmented reality client, the method comprising: performing image scanning for a target user, and initiating face recognition on the scanned image; obtaining an electronic voucher issued by an augmented reality server when a facial feature is recognized from the scanned image, and saving the obtained electronic voucher locally, wherein the electronic voucher is used to extract a virtual object; determining whether the number of categories of the electronic vouchers saved locally reaches a preset threshold; and if the number of categories of the electronic vouchers saved locally reaches the preset threshold, sending a virtual object allocation request to the augmented reality server, wherein the virtual object allocation request carries electronic vouchers whose number of categories equals the preset threshold, so that the augmented reality server allocates a virtual object from a preset virtual object set based on the virtual object allocation request.
- The method according to claim 1, wherein obtaining the electronic voucher issued by the augmented reality server when a facial feature is recognized from the scanned image comprises: outputting gesture prompt information to the user when a facial feature is recognized from the scanned image, wherein the gesture prompt information is used to prompt the user to perform a preset gesture; and initiating gesture recognition on the scanned image, and obtaining the electronic voucher issued by the augmented reality server when the preset gesture is recognized from the scanned image.
- The method according to claim 1, wherein obtaining the electronic voucher issued by the augmented reality server when a facial feature is recognized from the scanned image comprises: outputting expression prompt information to the user when a facial feature is recognized from the scanned image, wherein the expression prompt information is used to prompt the user to perform a preset facial expression; and initiating expression recognition on the scanned image, and obtaining the electronic voucher issued by the augmented reality server when the preset expression is recognized from the scanned image.
- The method according to claim 1, further comprising: when a facial feature is recognized from the scanned image and when the electronic voucher issued by the augmented reality server is obtained, displaying a preset animation effect in an augmented manner at a position corresponding to the facial feature.
- The method according to claim 1, wherein saving the obtained electronic voucher locally comprises: generating a corresponding display picture for the obtained electronic voucher; outputting the generated display picture to the user in an image scanning interface, wherein the display picture includes a collection option corresponding to the electronic voucher; and in response to a trigger operation of the user on the collection option, adding the generated display picture to a local display position corresponding to the electronic voucher, wherein display pictures generated for electronic vouchers of different categories are different from one another, and display positions corresponding to electronic vouchers of different categories are different from one another.
- The method according to claim 1, wherein the virtual object is a virtual red envelope.
- An augmented reality-based virtual object allocation method, applied to an augmented reality server, the method comprising: when an augmented reality client performs image scanning for a target user, determining whether a facial feature is recognized from the image scanned by the augmented reality client; if a facial feature is recognized from the scanned image, issuing an electronic voucher to the augmented reality client, wherein the electronic voucher is used to extract a virtual object; receiving an object allocation request sent by the augmented reality client, wherein the object allocation request includes several electronic vouchers used to extract a business object; determining whether the number of categories of the electronic vouchers included in the object allocation request reaches a preset threshold; and if the number of categories of the electronic vouchers included in the object allocation request reaches the preset threshold, allocating a virtual object to the augmented reality client from a preset virtual object set.
- The method according to claim 7, wherein issuing the electronic voucher to the augmented reality client if a facial feature is recognized from the scanned image comprises: if a facial feature is recognized from the scanned image, further determining whether a preset gesture is recognized from the image scanned by the augmented reality client; and if the preset gesture is recognized from the image scanned by the augmented reality client, issuing the electronic voucher to the augmented reality client.
- The method according to claim 7, wherein issuing the electronic voucher to the augmented reality client if a facial feature is recognized from the scanned image comprises: if a facial feature is recognized from the scanned image, further determining whether a preset expression is recognized from the image scanned by the augmented reality client; and if the preset expression is recognized from the image scanned by the augmented reality client, issuing the electronic voucher to the augmented reality client.
- The method according to claim 7, further comprising: issuing a preset animation effect to the augmented reality client, so that the augmented reality client displays the animation effect in an augmented manner at a position corresponding to the facial feature when the augmented reality client recognizes a facial feature from the scanned image and when it obtains the electronic voucher issued by the augmented reality server.
- The method according to claim 7, wherein the number of categories of the electronic vouchers issued by the augmented reality server to the augmented reality client is less than the preset threshold.
- The method according to claim 7, wherein the virtual object is a virtual red envelope.
- An augmented reality-based virtual object allocation apparatus, applied to an augmented reality client, the apparatus comprising: a scanning module, configured to perform image scanning for a target user and initiate face recognition on the scanned image; an obtaining module, configured to obtain an electronic voucher issued by an augmented reality server when a facial feature is recognized from the scanned image, and save the obtained electronic voucher locally, wherein the electronic voucher is used to extract a virtual object; a first determining module, configured to determine whether the number of categories of the electronic vouchers saved locally reaches a preset threshold; and a sending module, configured to, if the number of categories of the electronic vouchers saved locally reaches the preset threshold, send a virtual object allocation request to the augmented reality server, wherein the virtual object allocation request carries electronic vouchers whose number of categories equals the preset threshold, so that the augmented reality server allocates a virtual object from a preset virtual object set based on the virtual object allocation request.
- The apparatus according to claim 13, wherein the obtaining module is configured to: output gesture prompt information to the user when a facial feature is recognized from the scanned image, wherein the gesture prompt information is used to prompt the user to perform a preset gesture; and initiate gesture recognition on the scanned image, and obtain the electronic voucher issued by the augmented reality server when the preset gesture is recognized from the scanned image.
- The apparatus according to claim 13, wherein the obtaining module is configured to: output expression prompt information to the user when a facial feature is recognized from the scanned image, wherein the expression prompt information is used to prompt the user to perform a preset facial expression; and initiate expression recognition on the scanned image, and obtain the electronic voucher issued by the augmented reality server when the preset expression is recognized from the scanned image.
- The apparatus according to claim 13, further comprising: a display module, configured to display a preset animation effect in an augmented manner at a position corresponding to the facial feature when a facial feature is recognized from the scanned image and when the electronic voucher issued by the augmented reality server is obtained.
- The apparatus according to claim 13, wherein the virtual object is a virtual red envelope.
- An augmented reality-based virtual object allocation apparatus, applied to an augmented reality server, the apparatus comprising: a second determining module, configured to determine, when an augmented reality client performs image scanning for a target user, whether a facial feature is recognized from the image scanned by the augmented reality client; an issuing module, configured to issue an electronic voucher to the augmented reality client if a facial feature is recognized from the scanned image, wherein the electronic voucher is used to extract a virtual object; a receiving module, configured to receive an object allocation request sent by the augmented reality client, wherein the object allocation request includes several electronic vouchers used to extract a business object; a third determining module, configured to determine whether the number of categories of the electronic vouchers included in the object allocation request reaches a preset threshold; and an allocating module, configured to allocate a virtual object to the augmented reality client from a preset virtual object set if the number of categories of the electronic vouchers included in the object allocation request reaches the preset threshold.
- The apparatus according to claim 18, wherein the issuing module is configured to: if a facial feature is recognized from the scanned image, further determine whether a preset gesture is recognized from the image scanned by the augmented reality client; and if the preset gesture is recognized from the image scanned by the augmented reality client, issue the electronic voucher to the augmented reality client.
- The apparatus according to claim 18, wherein the issuing module is configured to: if a facial feature is recognized from the scanned image, further determine whether a preset expression is recognized from the image scanned by the augmented reality client; and if the preset expression is recognized from the image scanned by the augmented reality client, issue the electronic voucher to the augmented reality client.
- The apparatus according to claim 18, wherein the issuing module is further configured to: issue a preset animation effect to the augmented reality client, so that the augmented reality client displays the animation effect in an augmented manner at a position corresponding to the facial feature when the augmented reality client recognizes a facial feature from the scanned image and when it obtains the electronic voucher issued by the augmented reality server.
- The apparatus according to claim 18, wherein the number of categories of the electronic vouchers issued by the augmented reality server to the augmented reality client is less than the preset threshold.
- The apparatus according to claim 18, wherein the virtual object is a virtual red envelope.
- An electronic device, comprising: a processor; and a memory for storing machine-executable instructions; wherein, by reading and executing the machine-executable instructions stored in the memory and corresponding to control logic of augmented reality-based virtual object allocation, the processor is caused to: perform image scanning for a target user, and initiate face recognition on the scanned image; obtain an electronic voucher issued by an augmented reality server when a facial feature is recognized from the scanned image, and save the obtained electronic voucher locally, wherein the electronic voucher is used to extract a virtual object; determine whether the number of categories of the electronic vouchers saved locally reaches a preset threshold; and if the number of categories of the electronic vouchers saved locally reaches the preset threshold, send a virtual object allocation request to the augmented reality server, wherein the virtual object allocation request carries electronic vouchers whose number of categories equals the preset threshold, so that the augmented reality server allocates a virtual object from a preset virtual object set based on the virtual object allocation request.
- An electronic device, comprising: a processor; and a memory for storing machine-executable instructions; wherein, by reading and executing the machine-executable instructions stored in the memory and corresponding to control logic of augmented reality-based virtual object allocation, the processor is caused to: when an augmented reality client performs image scanning for a target user, determine whether a facial feature is recognized from the image scanned by the augmented reality client; if a facial feature is recognized from the scanned image, issue an electronic voucher to the augmented reality client, wherein the electronic voucher is used to extract a virtual object; receive an object allocation request sent by the augmented reality client, wherein the object allocation request includes several electronic vouchers used to extract a business object; determine whether the number of categories of the electronic vouchers included in the object allocation request reaches a preset threshold; and if the number of categories of the electronic vouchers included in the object allocation request reaches the preset threshold, allocate a virtual object to the augmented reality client from a preset virtual object set.