WO2016062173A1 - User attribute value transfer method and terminal - Google Patents

User attribute value transfer method and terminal (用户属性数值转移方法及终端)

Info

Publication number
WO2016062173A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
attribute value
user
target
input
Prior art date
Application number
PCT/CN2015/089417
Other languages
English (en)
French (fr)
Inventor
王小叶
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to JP2017503518A (JP6359173B2)
Priority to US15/110,316 (US10127529B2)
Publication of WO2016062173A1
Priority to US16/157,980 (US10417620B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/08 Payment architectures
    • G06Q20/10 Payment architectures specially adapted for electronic funds transfer [EFT] systems; specially adapted for home banking systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/08 Payment architectures
    • G06Q20/12 Payment architectures specially adapted for electronic shopping systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/22 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/2866 Architectures; Arrangements
    • H04L67/30 Profiles
    • H04L67/306 User profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/30 Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/32 Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2101/00 Indexing scheme associated with group H04L61/00
    • H04L2101/60 Types of network addresses
    • H04L2101/618 Details of network addresses
    • H04L2101/663 Transport layer addresses, e.g. aspects of transmission control protocol [TCP] or user datagram protocol [UDP] ports
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/02 Terminal devices

Definitions

  • the present invention relates to network system technologies, and in particular, to a method and a terminal for transferring user attribute values.
  • each user is essentially mapped to one or more data objects, and each data object has a different attribute.
  • attributes can be roughly classified, according to their values, into numeric types and non-numeric types.
  • a typical example of a numeric type is a value used to store the number of resources a user has.
  • the above-mentioned user-owned resources may be physical materials owned by the user (such as goods), virtual resources (such as tokens issued by online service providers, e.g. game coins or points), or the user's account balance in a banking system, and so on.
  • a user attribute value transfer method comprising:
  • a terminal includes a storage medium and a processor, wherein the storage medium stores instructions that, when executed by the processor, cause the processor to perform the following steps:
  • FIG. 1 is a schematic structural diagram of a user attribute value transfer system in an embodiment
  • FIG. 2 is an interaction timing diagram of a user attribute value transfer system performing an attribute value transfer operation in an embodiment
  • FIG. 3 is a schematic diagram of an interface on a first user terminal when a user attribute value transfer system performs an attribute value transfer operation in an embodiment
  • FIG. 4 is a second schematic diagram of an interface on a first user terminal when a user attribute value transfer system performs an attribute value transfer operation in an embodiment
  • FIG. 5 is a third schematic diagram of an interface on a first user terminal when a user attribute value transfer system performs an attribute value transfer operation in an embodiment
  • FIG. 6 is an interaction timing diagram when a user attribute value transfer system performs an attribute value transfer operation in an embodiment
  • FIG. 7 is a schematic diagram of an interface of a second user terminal when a user attribute value transfer system performs an attribute value transfer operation in an embodiment
  • FIG. 8 is a second schematic diagram of the interface of the second user terminal when the user attribute value transfer system performs the attribute value transfer operation in an embodiment
  • FIG. 9 is a diagram showing an attribute value transfer operation of a user attribute value transfer system in an embodiment.
  • FIG. 10 is a fourth schematic diagram of an interface of a second user terminal when a user attribute value transfer system performs an attribute value transfer operation in an embodiment
  • FIG. 11 is a flowchart of a user attribute value transfer method in an embodiment
  • FIG. 12 is a flowchart of a step of determining, by a first user terminal, a target face selected from a recognized face and a corresponding target attribute value according to an input instruction in an embodiment
  • FIG. 13 is a flowchart of a step of determining, by a first user terminal, a target face selected from a recognized face and a corresponding target attribute value according to an input instruction in another embodiment
  • FIG. 15 is a flowchart of the steps of detecting a face selection instruction in an embodiment
  • FIG. 16 is a flowchart of a user attribute value transfer method in an embodiment
  • FIG. 17 is a block diagram of a user attribute value transfer apparatus in an embodiment
  • FIG. 18 is a block diagram of an interaction module in an embodiment
  • FIG. 19 is a block diagram of an interaction module in another embodiment
  • FIG. 20 is a block diagram of an interaction module in still another embodiment
  • FIG. 21 is a block diagram of an interaction module in an embodiment
  • FIG. 22 is a block diagram of a user attribute value transfer apparatus in another embodiment
  • FIG. 23 is a schematic flowchart of a user attribute value transfer method in an embodiment
  • FIG. 24 is a block diagram of the hardware structure of a user terminal in an embodiment.
  • FIG. 1 is a schematic structural diagram of a user attribute numerical value transfer system according to a first embodiment of the present invention.
  • the user attribute value transfer system can be applied to a network system.
  • the user attribute value transfer system includes a server system 10 and a plurality of user terminals 20.
  • Server system 10 may include one or more servers 11 and database 12.
  • the server 11 can read or modify the data stored in the database 12.
  • data objects corresponding one-to-one with users can be stored in the database 12.
  • a data object can be used to describe a user's virtual character, and the virtual character can have many numerical attributes, such as the amount of game currency owned by the virtual character, the number of virtual items owned by the virtual character, and the like; these values or quantities can be stored as attributes of the data object.
  • a data object can be used to describe the number of resources (whether virtual or physical) owned by the user, such as the bank account balance owned by the user or the number of certain items.
  • a data object may correspond to one or more records, and an attribute may correspond, for example, to one or more fields.
  • a data object can correspond to one or more files, or a fragment in the file.
  • the plurality of user terminals 20 include the first user terminal 21, the second user terminal 22, the third user terminal 23, and the fourth user terminal 24.
  • Specific examples of these user terminals 20 may include, but are not limited to, a smartphone, a tablet, a desktop computer, a notebook computer, and a wearable device.
  • the user terminal 20 can access the Internet in different ways and can perform network data interaction with the server system 10.
  • the first user terminal 21 is a smart phone and accesses the Internet through the mobile base station 31; the second user terminal 22 and the third user terminal 23 access the Internet through the same mobile base station 32; and the fourth user terminal 24 accesses the Internet through the Wi-Fi hotspot 33.
  • the user uses various network services provided by the server system 10 by operating the user terminal 20 to perform network data interaction with the server system 10.
  • the first user terminal 21, the second user terminal 22, the third user terminal 23, and the fourth user terminal 24 are in one-to-one correspondence with the users A, B, C, and D, respectively.
  • the user attribute value transfer system of this embodiment is used to perform attribute value transfer operations between different users.
  • the specific work flow of the user attribute value transfer system of this embodiment is described below in conjunction with a specific application scenario.
  • an initiating user (the user who initiates the attribute value transfer operation; in this embodiment, user A) starts the application installed in the first user terminal 21 (which may be a stand-alone application or a functional module within an application).
  • an interface of the application is displayed, in which the user can select a photo stored in the first user terminal 21 or activate a shooting function of the first user terminal 21 to take a photo in real time.
  • the selected photo or the photo taken in real time must include at least the counterparty of the attribute value transfer operation.
  • the counterparty may be users B, C, and D.
  • after acquiring the photo selected by the user, the first user terminal 21 implements an interaction mechanism with the user based on the selected photo to determine the counterparties that are to perform attribute value transfer with the current user and the target attribute value of each counterparty.
  • the first user terminal 21 may first perform a face recognition process by the application to recognize all faces within the photo, and output a photo selected by the user in the user interface.
  • the purpose of outputting photos here is to allow the user to select a face in the photo.
  • FIG. 3 is a schematic diagram of an interface for displaying a photo in the first user terminal 21.
  • a photo 102 is displayed in the user interface 101, and three faces 103 are present in the photo 102.
  • the first user terminal 21 includes a touch screen. In order to detect the user's selection instruction for a face in the photo 102, the first user terminal 21 may track events in which the user clicks or touches the first user terminal 21.
  • that is, the face selection instruction may include a user input operation such as a click or a touch.
  • in one implementation, detecting the user's selection instruction for a face is performed in the following manner.
  • through face recognition, the areas in which all faces in the photo 102 are located are identified.
  • a click or touch event of the parent container of the photo 102 can be detected, and a process of determining whether a face is selected is performed when these events are triggered.
  • the parent container described above refers to various interface elements (or controls) that accommodate the photo 102. Take the Android system as an example.
  • the parent container can be, for example, an ImageView control. Other operating systems are different from the Android system, but all have similar functions.
  • the above process of determining whether a face is selected is as follows: obtain the coordinates of the contact point; determine whether the coordinates lie within the area where a face is located; if they do, it is determined that the user has selected the corresponding face.
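  • As a sketch of this approach (Android/Java; the class and callback names are illustrative, not from the patent, and the face regions are assumed to already be expressed in the coordinate space of the photo container), the hit test compares the contact coordinates against the rectangle of each recognized face:

```java
// Hypothetical helper: hit-tests touch contacts against recognized face regions.
import android.graphics.Rect;
import android.view.MotionEvent;
import android.view.View;
import java.util.List;

public class FaceHitTester {
    private final List<Rect> faceRegions; // produced by the face recognition step

    public FaceHitTester(List<Rect> faceRegions) {
        this.faceRegions = faceRegions;
    }

    /** Attach to the photo's parent container, e.g. an ImageView. */
    public void attach(View photoContainer) {
        photoContainer.setOnTouchListener((v, event) -> {
            if (event.getAction() == MotionEvent.ACTION_UP) {
                int index = hitFace((int) event.getX(), (int) event.getY());
                if (index >= 0) {
                    onFaceSelected(index); // a face selection instruction was detected
                    return true;
                }
            }
            return false;
        });
    }

    /** Returns the index of the face whose region contains the contact point, or -1. */
    private int hitFace(int x, int y) {
        for (int i = 0; i < faceRegions.size(); i++) {
            if (faceRegions.get(i).contains(x, y)) {
                return i;
            }
        }
        return -1;
    }

    /** Hook for the caller, e.g. to show the text input box 105. */
    protected void onFaceSelected(int faceIndex) {
    }
}
```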
  • in another implementation, detecting the user's selection instruction for a face is performed in the following manner.
  • through face recognition, the areas in which the faces in the photo 102 are located are identified.
  • a corresponding marker object 104 is displayed in the user interface 101 at the location of each face; the marker object 104 can be, for example, a border or a transparent floating layer.
  • the application also detects click or touch events on the marker objects. After a click or touch event of a marker object 104 is triggered, it can be determined that the user has selected the corresponding face. Compared with the previous method, this method makes full use of the system's own click or touch event triggering mechanism, without having to compare the contact coordinates against the area where each face is located on every touch.
  • the shape of the marker object 104 may be matched to the face; however, it can be understood that the marker object 104 may also have a regular shape, such as a rectangle, a circle, or a square. In that case, detecting a click or touch event of the marker object 104 makes the coordinate comparison process much simpler and more efficient.
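  • A minimal sketch of this marker-object approach (again Android/Java with illustrative names): one transparent, clickable view is laid over each face region, so the platform's own click dispatch replaces per-touch coordinate comparison:

```java
// Hypothetical helper: overlays one clickable "marker object" per face region.
import android.content.Context;
import android.graphics.Rect;
import android.view.View;
import android.widget.FrameLayout;
import java.util.List;

public class FaceMarkerOverlay {

    public interface FaceClickListener {
        void onFaceClicked(int faceIndex);
    }

    /** Adds a transparent marker view over each face region of the photo container. */
    public static void addMarkers(Context context, FrameLayout photoContainer,
                                  List<Rect> faceRegions, FaceClickListener listener) {
        for (int i = 0; i < faceRegions.size(); i++) {
            Rect region = faceRegions.get(i);
            View marker = new View(context); // transparent by default
            FrameLayout.LayoutParams lp =
                    new FrameLayout.LayoutParams(region.width(), region.height());
            lp.leftMargin = region.left;
            lp.topMargin = region.top;
            final int faceIndex = i;
            // The system now routes click events to the marker, so no manual
            // comparison of contact coordinates against face regions is needed.
            marker.setOnClickListener(v -> listener.onFaceClicked(faceIndex));
            photoContainer.addView(marker, lp);
        }
    }
}
```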
  • the first user terminal 21 can display a text input box 105, prompting the user to input a value corresponding to the selected face.
  • a positive number indicates that the current user needs to transfer the attribute value to the user corresponding to the selected target face; a negative number indicates that the user corresponding to the target face needs to transfer the attribute value to the current user.
  • the target attribute value input by the user can be obtained and saved.
  • prompt information may also be displayed in the user interface 101; for example, a prompt box 106 is displayed above the target face, and the content of the prompt box 106 includes the value the user just input. The user can select the target face again at any time to adjust the input value.
  • the user interface 101 may further include a submit button 107.
  • after the submit button 107 is pressed, the application generates an attribute value transfer request based on the values input by the user.
  • the attribute value transfer request may include, for example, identification information corresponding to each recognized face and a numerical value corresponding thereto; the attribute value transfer request may further include identification information of the current user.
  • the request can also be encrypted, and then sent to the server 11 using a predetermined network protocol such as the Hypertext Transfer Protocol (HTTP).
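  • The patent does not fix a wire format; as one possible shape (all field names are hypothetical), the request could be assembled as JSON and posted over HTTP:

```java
// Hypothetical request sender; the JSON field names are assumptions.
import org.json.JSONArray;
import org.json.JSONObject;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class TransferRequestSender {

    /** faceIdToValue maps each face's identification information to its value. */
    public static int send(String serverUrl, String currentUserId,
                           Map<String, Double> faceIdToValue) throws Exception {
        JSONObject request = new JSONObject();
        request.put("initiator", currentUserId); // identification of the current user
        JSONArray entries = new JSONArray();
        for (Map.Entry<String, Double> e : faceIdToValue.entrySet()) {
            entries.put(new JSONObject()
                    .put("faceId", e.getKey())    // face recognition information
                    .put("value", e.getValue())); // positive or negative target value
        }
        request.put("entries", entries);

        // Encryption of the body could be applied here before transmission.
        byte[] body = request.toString().getBytes(StandardCharsets.UTF_8);
        HttpURLConnection conn = (HttpURLConnection) new URL(serverUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        int status = conn.getResponseCode(); // e.g. 200 when the server accepts it
        conn.disconnect();
        return status;
    }
}
```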
  • each target face requires the user to input the target attribute value once.
  • the input of the target attribute value is not limited to this manner.
  • the user may first select the target face from the photo.
  • a corresponding indication object 108 can also be displayed in the user interface in the area where the face is located.
  • both the indication object 108 and the marker object 104 of FIG. 3 can respond to a user's click or touch event, but the indication object 108 responds differently from the marker object 104.
  • clicking or touching the indication object 108 toggles the selected or unselected state of the corresponding target face. That is, the user can select a target face by clicking on the indication object 108. In the default state, all of the indication objects 108 can be unselected, or all selected. After the user completes the selection of target faces, the user can click the button 109 to confirm the selection.
  • the popup window 110 includes an input box 111 and a button 112 for the user to input a total number.
  • the user can confirm by button 112.
  • the application then determines all target faces selected by the user according to the selected or unselected state of the indication objects 108, and the target attribute value of each target face is the total divided by the number of all target faces selected by the user.
  • the application transmits an attribute value transfer request to the server 11.
  • any way of selecting target users to perform attribute value transfer by clicking on faces in a photo can be applied to this embodiment, and the target attribute value of each target user can be determined as an average or entered separately.
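  • A minimal sketch of the averaging rule (names are illustrative; handling of fractional remainders is left to the application):

```java
// Splits a user-entered total evenly across the selected target faces.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EvenSplit {

    public static Map<String, Double> split(double total, List<String> selectedFaceIds) {
        if (selectedFaceIds.isEmpty()) {
            throw new IllegalArgumentException("at least one target face must be selected");
        }
        double perFace = total / selectedFaceIds.size();
        Map<String, Double> result = new LinkedHashMap<>();
        for (String faceId : selectedFaceIds) {
            result.put(faceId, perFace);
        }
        return result;
    }
}
```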
  • the server 11 performs a corresponding value transfer operation after receiving the attribute value transfer request transmitted by the first user terminal 21.
  • the server 11 parses, from the attribute value transfer request, the users involved in the attribute value transfer and the corresponding target attribute values. For example, the server 11 acquires a corresponding user identifier (ID) based on the face recognition information.
  • in this case, the face recognition information of each user needs to be stored in the database 12 in advance; after receiving the value transfer request sent by the first user terminal 21, the server obtains the face recognition information in the attribute value transfer request and retrieves it in the database to obtain the corresponding user ID.
  • in one application scenario, parsing the attribute value transfer request obtains the following data: user A: -45; user B: 20; user C: 10; user D: 15.
  • the database 12 can be modified to add the parsed values to the attribute values of each related user. It can be understood that, according to the values in this application scenario, the attribute value of user A is reduced by 45, the attribute value of user B is increased by 20, the attribute value of user C is increased by 10, and the attribute value of user D is increased by 15. Therefore, in this application scenario, user A transfers a certain amount of attribute value to user B, user C, and user D, respectively. It can be understood that the sum of the attribute value changes of all users involved in one value transfer operation should be zero.
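  • Server-side, the zero-sum invariant stated above can be checked before any data is modified; a sketch with an in-memory map standing in for database 12 (names are illustrative):

```java
// Verifies that the parsed deltas sum to zero, then applies them to the store.
import java.util.Map;

public class TransferApplier {

    public static void apply(Map<String, Long> userIdToDelta,
                             Map<String, Long> attributeStore) {
        long sum = 0;
        for (long delta : userIdToDelta.values()) {
            sum += delta;
        }
        if (sum != 0) {
            throw new IllegalStateException("deltas must sum to zero, got " + sum);
        }
        // In the real system this would be one database transaction on database 12.
        for (Map.Entry<String, Long> e : userIdToDelta.entrySet()) {
            attributeStore.merge(e.getKey(), e.getValue(), Long::sum);
        }
    }
}
```

  • For the scenario above, the deltas -45, 20, 10, and 15 sum to zero, so the check passes and each user's stored attribute value is adjusted accordingly.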
  • in another application scenario, parsing the attribute value transfer request obtains the following data: user A: -25; user B: 20; user C: -10; user D: 15.
  • the database 12 can be modified to add the parsed values to the attribute values of each related user. It can be understood that, according to the values in this application scenario, the attribute value of user A is reduced by 25, the attribute value of user B is increased by 20, the attribute value of user C is reduced by 10, and the attribute value of user D is increased by 15. Therefore, in this application scenario, user A and user C are the parties transferring attribute values out, and user B and user D are the parties receiving attribute values.
  • in a further application scenario, parsing the attribute value transfer request obtains the following data: user A: 30; user B: -10; user C: -10; user D: -10.
  • the database 12 can be modified to add the parsed values to the attribute values of each related user. It can be understood that, according to the values in this application scenario, the attribute value of user A is increased by 30, and the attribute values of user B, user C, and user D are each decreased by 10. Therefore, in this application scenario, user A collects attribute values from each of the other users.
  • in the manners described above, the attribute value transfer operation between different users can be realized. It can be understood that, in these manners, as long as the initiating user initiates the attribute value transfer request, the value transfer operation is performed directly. Scenarios in which the initiating user transfers attribute values to other users do not cause data security problems, but scenarios in which the initiating user collects attribute values from other users can lead to data security issues.
  • the server 11 may push a confirmation request for each involved user (except the initiating user), and other users may receive the confirmation request after starting the corresponding application through other user terminals.
  • user B can receive the confirmation request pushed by the server 11 through the application in the second user terminal 22, or actively query the server 11 for confirmation requests belonging to user B. The same applies to the other users, so the description is not repeated.
  • FIG. 7 is a schematic diagram of the interface after the application in the second user terminal 22 receives the confirmation request sent by the server 11.
  • the user interface 201 includes a confirmation request 202, as well as a button 203 and a button 204.
  • the button 203 is used to let user B confirm acceptance of the attribute value transferred by user A;
  • the button 204 is used to let user B reject the attribute value transferred by user A.
  • after the button 203 or the button 204 is pressed, the application generates corresponding confirmation information and transmits it to the server 11.
  • the server 11 then performs the value transfer operation corresponding to user B, that is, the corresponding value is transferred from the attribute value of user A to the attribute value of user B. Performing this process for each user completes the entire attribute value transfer operation.
  • the confirmation may not be sent directly; as shown in FIG. 8, the user may be asked to perform additional authentication, for example by entering a confirmation password reserved in the network system, a dynamic password, an SMS verification code, or the like.
  • the second user terminal 22 checks the verification information input by the user, and sends a confirmation message indicating that the user accepts the attribute value transfer request to the server 11 after the verification is passed.
  • the manner in which the user confirms a confirmation request is not limited to that shown in interface 201.
  • the user interface 201 includes a confirmation request 202 as well as a button 205. After the button 205 is pressed, the application starts the photographing function to take a photo of the current user, and sends the photographed face, or the face recognition information of the photo, to the server 11 for authentication. According to this embodiment, using face recognition instead of a password or other authentication measures input by the user can improve the user's convenience and security.
  • in this case, the server 11 first performs face recognition analysis.
  • the attribute value transfer request includes the face recognition information of user B; if the face recognition information of user B included in the attribute value transfer request matches the face recognition information uploaded by the second user terminal 22, or the face recognition analysis result of the uploaded photo, the user's confirmation is considered authorized, and the server 11 can perform the value transfer operation, that is, modify the data in the database 12.
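  • The matching criterion is left open by the patent; assuming the face recognition information is summarized as a feature vector, a hedged sketch of the comparison could use cosine similarity against a tuned threshold:

```java
// Hypothetical matcher: compares two face feature vectors by cosine similarity.
public class FaceMatcher {

    /** Returns true when the vectors are similar enough to treat as the same person. */
    public static boolean matches(float[] enrolled, float[] captured, float threshold) {
        if (enrolled.length != captured.length) {
            return false;
        }
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < enrolled.length; i++) {
            dot += enrolled[i] * captured[i];
            normA += enrolled[i] * enrolled[i];
            normB += captured[i] * captured[i];
        }
        double cosine = dot / (Math.sqrt(normA) * Math.sqrt(normB) + 1e-9);
        return cosine >= threshold; // e.g. 0.8, tuned for the recognition model in use
    }
}
```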
  • the same user may receive multiple confirmation requests, in which case all confirmation requests may be confirmed at one time.
  • the confirmation information may be sent once for each confirmation request, or a single piece of confirmation information may include confirmations for all confirmation requests.
  • after receiving the confirmation information sent by the second user terminal 22, the server 11 performs the attribute value transfer operation according to the confirmation information, that is, the attribute values in the database are modified.
  • in this embodiment, the user can initiate the attribute value transfer operation through a photo and only needs to use the faces on the photo to input the values to be transferred, without inputting the other users' accounts, which improves convenience when the user initiates an attribute value transfer operation. Moreover, the face recognition method can also prevent input errors that occur when inputting a user account, thereby improving the security of the attribute value transfer operation.
  • FIG. 11 is a flowchart of a user attribute value transfer method according to an embodiment of the present invention, where the user attribute value transfer method is applicable to a network system. As shown in FIG. 11, the method of this embodiment includes the following steps:
  • Step S101 obtaining a photo.
  • the first user terminal 21 acquires a photo that initiates user selection.
  • the initiating user is the user who initiates the attribute value transfer operation.
  • the initiating user is the user A.
  • An initiating user launches an application (which may be a stand-alone application or a functional module within an application) installed in the first user terminal 21.
  • after the application is started, an interface of the application is displayed, in which the user can select a photo stored in the first user terminal 21 or activate the shooting function of the first user terminal 21 to take a photo in real time; the selected photo or the photo taken in real time must include at least the counterparty of the attribute value transfer operation.
  • Step S102 performing face recognition processing on the photo to recognize the face.
  • the first user terminal 21 performs face recognition processing on the photo to recognize all the faces.
  • the algorithm used for face recognition is not limited at all, and all algorithms capable of accurately and effectively recognizing a face can be applied to the present embodiment.
  • the face recognition may employ a template-matching face recognition algorithm, a subspace-analysis face recognition algorithm, Locality Preserving Projections (LPP) face recognition, a principal component analysis (PCA) algorithm, the Eigenface method (based on the KL transform), an artificial neural network face recognition algorithm, or a support vector machine face recognition algorithm.
  • the core idea of the template-matching face recognition algorithm is to use the regularities of human facial features to establish a three-dimensional model frame. After the position of a person's face is located, the model frame is used to locate and adjust the person's facial features, compensating for factors such as viewing angle, occlusion, and expression changes during face recognition.
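  • The recognition algorithm is deliberately left open. As one concrete possibility on Android, the platform's built-in android.media.FaceDetector can locate face regions in a bitmap (it detects positions only and does not identify whose face it is; the bounding-box heuristic below is an assumption):

```java
// Locates approximate face rectangles in a photo using Android's FaceDetector.
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.graphics.Rect;
import android.media.FaceDetector;
import java.util.ArrayList;
import java.util.List;

public class FaceLocator {

    /** The bitmap must use the RGB_565 config and have an even width,
     *  as android.media.FaceDetector requires. */
    public static List<Rect> locate(Bitmap photo, int maxFaces) {
        FaceDetector detector =
                new FaceDetector(photo.getWidth(), photo.getHeight(), maxFaces);
        FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
        int found = detector.findFaces(photo, faces);

        List<Rect> regions = new ArrayList<>();
        for (int i = 0; i < found; i++) {
            PointF mid = new PointF();
            faces[i].getMidPoint(mid);
            // Heuristic: derive a square box from the reported eye distance.
            float half = faces[i].eyesDistance() * 1.5f;
            regions.add(new Rect((int) (mid.x - half), (int) (mid.y - half),
                                 (int) (mid.x + half), (int) (mid.y + half)));
        }
        return regions;
    }
}
```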
  • Step S103 determining a target face selected from the recognized face and a corresponding target attribute value according to the input instruction.
  • the first user terminal 21 determines, according to an instruction input by the user, a target face selected from all the recognized faces and a corresponding target attribute value. After acquiring the photo selected by the user, the first user terminal 21 implements an interaction mechanism with the user based on the selected photo to determine the counterparties (target faces) that are to perform attribute value transfer with the current user and the target attribute value of each counterparty.
  • Step S104 generating an attribute value transfer request according to the target value and the corresponding face recognition information.
  • the first user terminal 21 generates an attribute value transfer request according to the target attribute value and the corresponding face recognition information.
  • the first user terminal 21 generates an attribute value transfer request based on the value input by the user through the application.
  • the attribute value transfer request may include, for example, identification information corresponding to each recognized face and the value corresponding thereto; the attribute value transfer request may further include the identification information of the current user.
  • the request can also be encrypted.
  • Step S105 the attribute value transfer request is sent to the server, so that the server performs the attribute value transfer operation according to the attribute value transfer request.
  • the first user terminal 21 may send the request to the server 11 using a predetermined network protocol (such as the Hypertext Transfer Protocol).
  • the server 11 performs a value transfer operation according to the attribute value transfer request after receiving the attribute value transfer request.
  • in this embodiment, the user can initiate the attribute value transfer operation using a photo and only needs to use the faces on the photo to input the values to be transferred, without inputting the other users' accounts, thereby improving the convenience of the user-initiated attribute value transfer operation.
  • the face recognition method can also prevent input errors when inputting a user account, thereby improving the security of the attribute value transfer operation.
  • the object of the attribute value transfer operation is determined by automated and highly accurate face recognition technology, so that human error can be avoided as much as possible, and the efficiency and accuracy of determining the object of the attribute value transfer operation can be improved, which further enhances the security of attribute value transfer operations.
  • in an embodiment, before step S105, the method further includes: extracting user information corresponding to the target face from the server and displaying it; and obtaining a confirmation instruction for the displayed user information, and performing step S105 according to the confirmation instruction.
  • in this way, the object of the attribute value transfer operation can be confirmed, and the target attribute value can be prevented from being transferred to a similar-looking non-target user.
  • step S105 includes: sending the attribute value transfer request to the server, so that the server performs verification of the social relationship according to the attribute value transfer request and the user relationship chain, and performs the attribute value transfer operation according to the attribute value transfer request after the verification is passed.
  • specifically, the server may verify the social relationship between the two users according to the user relationship chain of the initiating user and the face recognition information in the attribute value transfer request. If a social relationship exists, the attribute value transfer operation can be performed directly according to the attribute value transfer request. If no social relationship exists, the attribute value transfer request may be rejected, or the first user terminal may be asked to confirm; after receiving the confirmation instruction of the first user terminal, the attribute value transfer operation is performed according to the attribute value transfer request.
  • the social relationship is verified against the user relationship chain, so that the attribute value transfer operation can be performed quickly when the two parties have a social relationship, the security of the attribute value transfer operation is guaranteed by the social relationship, and the target attribute value is prevented from being transferred to a similar-looking non-target user who has no social relationship with the initiator.
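  • A server-side sketch of this relationship-chain gate (the storage layout and names are assumptions, not part of the patent text):

```java
// Checks the initiator's relationship chain before allowing a direct transfer.
import java.util.Map;
import java.util.Set;

public class RelationCheck {

    /** Returns true when the target user appears in the initiator's relationship
     *  chain; otherwise the caller should reject the request or ask the first
     *  user terminal for an explicit confirmation. */
    public static boolean hasRelation(Map<String, Set<String>> relationChains,
                                      String initiatorId, String targetUserId) {
        Set<String> contacts = relationChains.get(initiatorId);
        return contacts != null && contacts.contains(targetUserId);
    }
}
```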
  • step S103 specifically includes the following steps:
  • Step S106 displaying an input interface for receiving user input in response to the input face selection instruction.
  • the first user terminal 21 displays an input interface for receiving user input in response to a face selection instruction input by the user.
  • detecting the user's selection instruction for a face may be performed in the following manner.
  • through face recognition, the areas in which the faces in the photo 102 are located are identified.
  • a corresponding marker object 104 is displayed in the user interface 101 at the location of each face; the marker object 104 can be, for example, a border or a transparent floating layer.
  • the application also detects click or touch events on the marker objects. After a click or touch event of a marker object 104 is triggered, it can be determined that the user has selected the corresponding face. Compared with the previous method, this method makes full use of the system's own click or touch event triggering mechanism, without having to compare the contact coordinates against the area where each face is located on every touch.
  • the shape of the marker object 104 may be matched to the face; however, it can be understood that the marker object 104 may also have a regular shape, such as a rectangle, a circle, or a square. In that case, detecting a click or touch event of the marker object 104 makes the coordinate comparison process much simpler and more efficient.
  • Step S107 determining a target face selected from the recognized face corresponding to the face selection command, and determining a target attribute value corresponding to the target face as a value input in the input interface.
  • the first user terminal 21 determines the target face selected from the recognized faces corresponding to the face selection instruction, and determines the target attribute value corresponding to the target face as the value input by the user in the input interface.
  • a text input box 105 may be displayed, prompting the user to input a value corresponding to the selected face; this value represents the attribute value to be transferred between the current user and the user corresponding to the target face.
  • a positive number indicates that the current user needs to transfer the attribute value to the user corresponding to the selected target face;
  • a negative number indicates that the user corresponding to the target face needs to transfer the attribute data to the current user.
  • the input interface can be triggered by clicking on the face, so that the corresponding target attribute value is input in the input interface, and the user's account number is not required to be input, thereby improving the convenience when the user initiates the attribute value transfer operation.
  • the face recognition method can also prevent input errors when inputting a user account, thereby improving the security of the attribute value transfer operation.
  • in this way, all faces in the photo can be recognized efficiently and accurately, so that the corresponding target attribute values can be quickly input in the input interface corresponding to each face, thereby improving the efficiency of the attribute value transfer operation.
  • multiple target faces and corresponding target attribute values can be quickly determined in one photo, which can further improve the efficiency of the attribute value transfer operation.
  • step S103 specifically includes the following steps:
  • Step S108 determining a target face selected from the recognized faces according to the input instruction, and obtaining the total number of inputs.
  • the first user terminal 21 determines a target face selected from the recognized faces according to an instruction input by the user, and acquires the total number of user inputs.
  • Step S109 determining that the target attribute value of each selected target face is the total divided by the total number of target faces.
  • the user can first select a target face from the photo.
  • a corresponding indication object 108 can also be displayed in the user interface in the area where the face is located.
  • both the indication object 108 and the marker object 104 of FIG. 3 can respond to a user's click or touch event, but the indication object 108 responds differently from the marker object 104.
  • clicking or touching the indication object 108 toggles the selected or unselected state of the corresponding target face. That is, the user can select a target face by clicking on the indication object 108. In the default state, all of the indication objects 108 can be unselected, or all selected. After the user completes the selection of target faces, the user can click the button 109 to confirm the selection.
  • the popup window 110 includes an input box 111 and a button 112 for the user to input a total. After the input is completed, it can be confirmed with the button 112. After the button 112 is clicked, the application determines all target faces selected by the user according to the selected or unselected state of the indication objects 108, and the target attribute value of each target face is the total divided by the number of all target faces selected by the user. Then, similar to what happens after the button 107 is pressed, the application transmits an attribute value transfer request to the server 11.
  • in this embodiment, it is only necessary to input the total once, and it is not necessary to input a target attribute value separately for each target face, thereby improving the efficiency of performing the attribute value transfer operation and further improving the convenience of the user input operation.
  • in an embodiment, the following steps are further included between step S102 and step S103:
  • Step S110 a photo is output in the user interface, and the user interface is displayed on the touch screen.
  • Step S111 after contact between an operation object and the touch screen is detected, it is determined whether the coordinates of the contact point are within an area corresponding to a face in the photo; if so, a face selection instruction is detected.
  • the area in which all faces in the photo 102 are located is identified.
  • a click or touch event of the parent container of the photo 102 can be detected.
  • when these events are triggered, a process of determining whether a face is selected is performed.
  • the parent container described above refers to various interface elements (or controls) that accommodate the photo 102. Take the Android system as an example.
  • the parent container can be, for example, an ImageView control. Other operating systems are different from the Android system, but all have similar functions.
  • the above process of determining whether a face is selected is as follows: obtain the coordinates of the contact point; determine whether the coordinates lie within the area where a face is located; if they do, it is determined that the user has selected the corresponding face.
  • in this embodiment, the user interface is displayed on the touch screen, so that touching a face in the displayed photo with an operation object triggers the detection of a face selection instruction; the user can thus determine the target face directly and efficiently through the operation object, which further improves the ease of operation.
  • in an embodiment, the following steps are further included between step S102 and step S103:
  • Step S112 outputting a photo in the user interface.
  • a photo 102 is displayed in the user interface 101, and three faces 103 are present in the photo 102.
  • Step S113 generating a label object corresponding to the face in the photo in the user interface.
  • a corresponding marker object 104 is displayed in the user interface 101 corresponding to the location of the face, and the marker object 104 can be, for example, a border or a transparent floating layer.
  • Step S115 a face selection instruction is detected if the registered click event of a marker object is triggered.
  • the first user terminal 21 may register the click event of the marker object before step S115, and then, when step S115 is performed, detect triggering of the click event of the marker object; a face selection instruction is detected if the click event of the marker object is triggered.
  • in this embodiment, the face is marked by the marker object 104, so that the system's own click or touch event triggering mechanism can be fully utilized; it is not necessary to compare the contact coordinates against the area where each face is located on every touch, which reduces the computational load of repeated coordinate comparisons and improves the efficiency of detecting the face selection instruction.
  • the method further includes:
  • Step S116 after the target attribute value corresponding to each face is determined, the target value is also displayed on the photo.
  • prompt information may also be displayed in the user interface 101; for example, a prompt box 106 is displayed above the target face, and the content of the prompt box 106 includes the value the user just input. The user can select the target face again at any time to adjust the input value.
  • a user attribute value transfer device is provided. As shown in FIG. 17, the apparatus of this embodiment includes: an obtaining module 21, a face recognition module 22, an interaction module 23, a request generation module 24, and a request sending module 25.
  • the acquisition module 21 is for acquiring a photo.
  • An initiating user (the user who initiates the attribute value transfer operation; in this embodiment, user A) starts the application installed in the first user terminal 21 (which may be a stand-alone application or a functional module within an application).
  • after the application is started, an interface of the application is displayed, in which the user can select a photo stored in the first user terminal 21 or activate the shooting function of the first user terminal 21 to take a photo in real time.
  • the selected photo or the photo taken in real time must include at least the counterparty of the attribute value transfer operation.
  • the face recognition module 22 is configured to perform face recognition processing on the photo to recognize the face.
  • the algorithm used for face recognition is not limited at all, and all algorithms capable of accurately and effectively recognizing a face can be applied to the present embodiment.
  • the interaction module 23 is configured to determine a target face selected from the recognized face and a corresponding target attribute value according to the input instruction.
  • the interaction mechanism determines the counterparties (target faces) with which the current user wants to perform attribute value transfer and the target attribute value of each counterparty.
  • the request generation module 24 is configured to generate an attribute value transfer request according to the target value and the corresponding face recognition information.
  • the application generates an attribute value transfer request based on the value entered by the user.
  • the attribute value transfer request may include, for example, the following information: identification information corresponding to each recognized face and the value corresponding thereto; the attribute value transfer request may further include the identification information of the current user.
  • the request can also be encrypted.
  • the request sending module 25 is configured to send an attribute value transfer request to the server, so that the server performs an attribute value transfer operation according to the attribute value transfer request.
  • after the attribute value transfer request is generated, it may be sent to the server 11 using a predetermined network protocol such as the Hypertext Transfer Protocol (HTTP).
  • the server 11 performs a value transfer operation according to the attribute value transfer request after receiving the attribute value transfer request.
  • in this embodiment, the user can initiate the attribute value transfer operation through a photo and only needs to use the faces on the photo to input the values to be transferred, without inputting the other users' accounts, which improves convenience when the user initiates an attribute value transfer operation. Moreover, the face recognition method can also prevent input errors that occur when inputting a user account, thereby improving the security of the attribute value transfer operation.
  • the interaction module 23 in the user attribute value transfer apparatus specifically includes: a first display unit 231 and a first determining unit 232.
  • the first display unit 231 is configured to display an input interface for receiving user input in response to the input face selection instruction.
  • detecting the user's selection instruction for a face may be performed in the following manner.
  • through face recognition, the areas in which the faces in the photo 102 are located are identified.
  • a corresponding marker object 104 is displayed in the user interface 101 at the location of each face; the marker object 104 can be, for example, a border or a transparent floating layer.
  • the application also detects click or touch events on the marker objects. After a click or touch event of a marker object 104 is triggered, it can be determined that the user has selected the corresponding face. Compared with the previous method, this method makes full use of the system's own click or touch event triggering mechanism, without having to compare the contact coordinates against the area where each face is located on every touch.
  • the shape of the marker object 104 may be matched to the face; however, it can be understood that the marker object 104 may also have a regular shape, such as a rectangle, a circle, or a square. In that case, detecting a click or touch event of the marker object 104 makes the coordinate comparison process much simpler and more efficient.
  • the first determining unit 232 is configured to determine a target face selected from the recognized face corresponding to the face selection instruction, and determine that the target attribute value corresponding to the target face is a value input in the input interface.
  • a text input box 105 may be displayed, prompting the user to input a value corresponding to the selected face; this value represents the attribute value to be transferred between the current user and the user corresponding to the target face.
  • a positive number indicates that the current user needs to transfer the attribute value to the user corresponding to the selected target face;
  • a negative number indicates that the user corresponding to the target face needs to transfer the attribute data to the current user.
  • the target attribute value can be input by clicking on the face, and the user's account number is not required to be input, thereby improving the convenience when the user initiates the attribute value transfer operation.
  • the face recognition method can also prevent input errors when inputting a user account, thereby improving the security of the attribute value transfer operation.
  • the interaction module 23 in the user attribute value transfer apparatus specifically includes: an obtaining unit 233 and a second determining unit 234.
  • the obtaining unit 233 is configured to determine a target face selected from the recognized faces according to the input instruction, and acquire the total number of inputs.
  • the second determining unit 234 is configured to determine that the target attribute value of the selected target face is the total number divided by the total number of the target faces.
  • the user can first select a target face from the photo.
  • a corresponding indication object 108 can also be displayed in the user interface in the area where the face is located.
  • both the indication object 108 and the marker object 104 of FIG. 3 can respond to a user's click or touch, but the indication object 108 responds differently from the marker object 104.
  • clicking or touching the indication object 108 toggles the selected or unselected state of the corresponding target face. That is, the user can select a target face by clicking on the indication object 108. In the default state, all of the indication objects 108 can be unselected, or all selected. After the user completes the selection of target faces, the user can click the button 109 to confirm the selection.
  • the popup window 110 includes an input box 111 and a button 112 for the user to input a total. After the input is completed, it can be confirmed with the button 112. After the button 112 is clicked, the application determines all target faces selected by the user according to the selected or unselected state of the indication objects 108, and the target attribute value of each target face is the total divided by the number of all target faces selected by the user. Then, similar to what happens after the button 107 is pressed, the application transmits an attribute value transfer request to the server 11.
  • in this embodiment, it is only necessary to input the total once, and it is not necessary to input a target attribute value separately for each target face, thereby further improving the convenience of the user input operation.
  • the interaction module 23 in the user attribute value transfer apparatus further includes: a second display unit 235 and a first detection unit 236.
  • the second display unit 235 is configured to output the photo in the user interface, the user interface being displayed on a touch screen.
  • the first detection unit 236 is configured to, after detecting contact between an operation object and the touch screen, determine whether the coordinates of the contact point are within the region corresponding to a face in the photo; if so, a face selection instruction is detected.
  • During face recognition, the regions where all faces in the photo 102 are located are identified.
  • click or touch events of the parent container of the photo 102 can be monitored, and a process of determining whether a face has been selected is performed when these events are triggered.
  • the parent container described above refers to the interface element (or control) that holds the photo 102. Take the Android system as an example:
  • the parent container can be, for example, an ImageView control. Other operating systems differ from Android, but all provide controls with similar functions.
  • the above process of determining whether a face has been selected is as follows: obtain the coordinates of the contact point; determine whether the coordinates are within a region where a face is located; if the coordinates of the contact point are within the region where a face is located, determine that the user has selected the corresponding face.
  • detection of the face selection instruction can thus be realized by monitoring click events of the parent container of the displayed photo, which makes for a simple technical solution.
  • the interaction module 23 in the user attribute value transfer apparatus further includes: a third display unit 237, a fourth display unit 238, and a second detection unit 240.
  • the third display unit 237 is for outputting a photo in the user interface.
  • a photo 102 is displayed in the user interface 101, and three faces 103 are present in the photo 102.
  • the fourth display unit 238 is configured to generate, in the user interface, marker objects in one-to-one correspondence with the faces in the photo.
  • a corresponding marker object 104 is displayed in the user interface 101 at the location of each face, and the marker object 104 can be, for example, a border or a transparent floating layer.
  • the second detection unit 240 is configured to detect a face selection instruction if a registered click event of a marker object is triggered.
  • In this way, the system's built-in click/touch event triggering mechanism can be fully utilized, without comparing the contact coordinates against the face regions on every touch.
  • the interaction module 23 further includes an event registration unit 239 configured to register the click events of the marker objects.
  • the interaction module 23 in the user attribute value transfer apparatus further includes a prompting unit 241.
  • the prompting unit 241 is configured to display the target attribute value on the photo after the target attribute value corresponding to each face is determined.
  • prompt information may also be displayed in the user interface 101; for example, a prompt box 106 is displayed above the target face, and the content of the prompt box 106 includes the value the user has just entered. The user can select the target face again at any time to adjust the entered value.
  • a user attribute value transfer method which specifically includes the following steps:
  • step S2302 the first user terminal acquires a photo.
  • step S2304 the first user terminal performs face recognition processing on the photo to identify the face.
  • Step S2306: the first user terminal determines, according to the input instruction, the target face selected from the recognized faces and the corresponding target attribute value.
  • Step S2308 The first user terminal generates an attribute value transfer request according to the target attribute value and the corresponding face recognition information.
  • step S2310 the first user terminal sends an attribute value transfer request to the server.
  • step S2312 the server performs an attribute value transfer operation according to the attribute value transfer request.
  • step S2306 includes: the first user terminal determines, according to the input instruction, the target faces selected from the recognized faces, and acquires an input total; and determines the target attribute value of each selected target face as the total divided by the number of selected target faces.
  • step S2306 includes: the first user terminal displays, in response to an input face selection instruction, an input interface for receiving user input; and determines the target face selected from the recognized faces corresponding to the face selection instruction, and determines the target attribute value corresponding to the target face as the value entered in the input interface.
  • the user attribute value transfer method further includes: the first user terminal outputs the photo in a user interface, the user interface being displayed on a touch screen; and after detecting contact between an operation object and the touch screen, the first user terminal determines whether the coordinates of the contact point are within the region corresponding to a face in the photo, and if so, a face selection instruction is detected.
  • the user attribute value transfer method further includes: the first user terminal outputs the photo in a user interface; generates, in the user interface, marker objects in one-to-one correspondence with the faces in the photo; and detects a face selection instruction if a registered click event of a marker object is triggered.
  • the user attribute value transfer method further includes: the first user terminal further displaying the target attribute value on the photo after determining the target attribute value corresponding to each face.
  • Fig. 24 is a block diagram showing the configuration of an embodiment of the first user terminal 21 described above. It can be understood that other user terminals can have a hardware architecture similar to that of the first user terminal. As shown in FIG. 24, the first user terminal 21 includes a memory 212, a memory controller 214, one or more processors 216 (only one is shown), a peripheral interface 218, a network module 220, and a display 222. These components communicate with one another via one or more communication buses/signal lines.
  • FIG. 24 is merely illustrative, and the first user terminal 21 described above may further include more or less components than those shown in FIG. 24 or have a different configuration from that shown in FIG.
  • the components shown in Figure 24 can be implemented in hardware, software, or a combination thereof.
  • the memory 212 can be used to store software programs and modules, such as the program instructions/modules corresponding to the methods and apparatuses in the embodiments of the present invention.
  • the processor 216 performs various functional applications and data processing by running the software programs and modules stored in the memory 212, thereby implementing the above methods.
  • Memory 212 can include high-speed random access memory and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 212 can further include memory remotely located relative to processor 216, which can be connected to the server via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof. Access to the memory 212 by the processor 216 and other possible components can be performed under the control of the memory controller 214.
  • Peripheral interface 218 couples various input/output devices to processor 216.
  • the processor 216 runs the software and instructions stored in the memory 212 to perform various functions and process data.
  • peripheral interface 218, processor 216, and memory controller 214 can be implemented in a single chip. In other instances, they can be implemented by separate chips.
  • the network module 220 is configured to receive and transmit network signals.
  • the above network signal may include a wireless signal or a wired signal.
  • In one example, the network signal is a wired network signal.
  • the network module 220 may include components such as a processor, a random access memory, a converter, a crystal oscillator, and the like.
  • the network signal described above is a wireless signal (eg, a radio frequency signal).
  • the network module 220 is substantially a radio frequency module, and receives and transmits electromagnetic waves to realize mutual conversion between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices.
  • the radio frequency module can include various existing circuit components for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, a memory, and the like.
  • the RF module can communicate with various networks such as the Internet, intranets, wireless networks or with other devices over a wireless network.
  • the wireless network described above may include a cellular telephone network, a wireless local area network, or a metropolitan area network.
  • the above wireless network can use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-Max), other protocols for email, instant messaging, and short messages, and any other suitable communication protocol.
  • Display module 222 is used to display information entered by the user, information provided to the user, and various graphical user interfaces, which may be comprised of graphics, text, icons, video, and any combination thereof.
  • display module 222 includes a display panel.
  • the display panel can be, for example, a liquid crystal display (LCD) panel, an organic light-emitting diode (OLED) display panel, an electrophoretic display (EPD) panel, or the like.
  • Further, a touch surface may be disposed on the display panel so as to form an integral whole with the display panel.
  • display module 222 may also include other types of display devices, including, for example, a projection display device.
  • Compared with an ordinary display panel, the projection display device further includes some components for projection, such as a lens group.
  • Operating system 224 may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.) and can communicate with various hardware or software components, thereby providing a running environment for other software components.
  • the application program 226 runs on the basis of the operating system 224 and is used to implement the various methods in the above embodiments.
  • an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions; the storage medium is, for example, a non-volatile memory such as an optical disc, a hard disk, or a flash memory.
  • the computer-executable instructions described above are used to cause a computer or similar computing device to perform the methods of the above embodiments.


Abstract

A user attribute value transfer method, comprising: acquiring a photo; performing face recognition processing on the photo to recognize faces; determining, according to an input instruction, a target face selected from the recognized faces and a corresponding target attribute value; generating an attribute value transfer request according to the target attribute value and the corresponding face recognition information; and sending the attribute value transfer request to a server, so that the server performs an attribute value transfer operation according to the attribute value transfer request.

Description

User attribute value transfer method and terminal
This application claims priority to Chinese Patent Application No. 201410567731.4, filed with the Chinese Patent Office on October 22, 2014 and entitled "Method, apparatus, and system for transferring attribute values of users in a network system", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to network system technologies, and in particular, to a user attribute value transfer method and terminal.
Background
With the development of the Internet, people have built a wide variety of network systems to solve problems encountered in daily life. In a network system, each user is essentially mapped to one or more data objects, and each data object has different attributes. Attributes can be broadly divided into numeric and non-numeric types according to the type of their values. A typical example of a numeric type is a value that stores the quantity of resources owned by a user. The resources owned by a user may be quantities of physical goods owned by the user (for example, merchandise), virtual resources (for example, tokens issued by various online service providers, such as game coins and reward points), or the user's account balance in a banking system, and so on.
For an online trading system to complete a transaction between two parties, a value transfer operation between the two parties is involved. In the prior art, such value transfer operations are generally initiated by a user. For example, if a user wants to deliver game coins to another user in an online game system, this must be done through the in-game trading system or the in-game mail system. In the trading system, the user needs to find the other party's virtual character before carrying out a face-to-face transaction (that is, in the same virtual scene); in the mail system, the user needs to enter the recipient's user name, mail address, and the like to uniquely identify the recipient, and then enter the value to be transferred to each recipient. In an online payment system, such as online banking, third-party payment, or other financial accounts, to transfer money to others (the account balance being mapped to the user's attribute value), the recipient's account number and the transfer amount likewise have to be entered each time.
That is, in various network systems, when a user initiates a transfer of attribute values, entering or selecting the target users and entering the transfer values are indispensable steps. However, in existing technical solutions, when one user needs to initiate value transfer operations to multiple users at the same time, the users have to be selected or entered one by one. The process is rather tedious, and the operations of entering or selecting the target users and the values cost the initiating user considerable time.
Summary
A user attribute value transfer method, the method comprising:
acquiring a photo;
performing face recognition processing on the photo to recognize faces;
determining, according to an input instruction, a target face selected from the recognized faces and a corresponding target attribute value;
generating an attribute value transfer request according to the target attribute value and the corresponding face recognition information; and
sending the attribute value transfer request to a server, so that the server performs an attribute value transfer operation according to the attribute value transfer request.
A terminal, comprising a storage medium and a processor, the storage medium storing instructions that, when executed by the processor, cause the processor to perform the following steps:
acquiring a photo;
performing face recognition processing on the photo to recognize faces;
determining, according to an input instruction, a target face selected from the recognized faces and a corresponding target attribute value;
generating an attribute value transfer request according to the target attribute value and the corresponding face recognition information; and
sending the attribute value transfer request to a server, so that the server performs an attribute value transfer operation according to the attribute value transfer request.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic architectural diagram of a user attribute value transfer system according to an embodiment;
FIG. 2 is an interaction sequence diagram of the user attribute value transfer system performing an attribute value transfer operation according to an embodiment;
FIG. 3 is the first schematic diagram of the interface on the first user terminal when the user attribute value transfer system performs an attribute value transfer operation according to an embodiment;
FIG. 4 is the second schematic diagram of the interface on the first user terminal when the user attribute value transfer system performs an attribute value transfer operation according to an embodiment;
FIG. 5 is the third schematic diagram of the interface on the first user terminal when the user attribute value transfer system performs an attribute value transfer operation according to an embodiment;
FIG. 6 is an interaction sequence diagram of the user attribute value transfer system performing an attribute value transfer operation according to an embodiment;
FIG. 7 is the first schematic diagram of the interface on the second user terminal when the user attribute value transfer system performs an attribute value transfer operation according to an embodiment;
FIG. 8 is the second schematic diagram of the interface on the second user terminal when the user attribute value transfer system performs an attribute value transfer operation according to an embodiment;
FIG. 9 is the third schematic diagram of the interface on the second user terminal when the user attribute value transfer system performs an attribute value transfer operation according to an embodiment;
FIG. 10 is the fourth schematic diagram of the interface on the second user terminal when the user attribute value transfer system performs an attribute value transfer operation according to an embodiment;
FIG. 11 is a flowchart of a user attribute value transfer method according to an embodiment;
FIG. 12 is a flowchart of the step in which the first user terminal determines, according to an input instruction, the target face selected from the recognized faces and the corresponding target attribute value, according to an embodiment;
FIG. 13 is a flowchart of the step in which the first user terminal determines, according to an input instruction, the target face selected from the recognized faces and the corresponding target attribute value, according to another embodiment;
FIG. 14 is a flowchart of a user attribute value transfer method according to another embodiment;
FIG. 15 is a flowchart of the step of detecting a face selection instruction according to an embodiment;
FIG. 16 is a flowchart of a user attribute value transfer method according to an embodiment;
FIG. 17 is a block diagram of the modules of a user attribute value transfer apparatus according to an embodiment;
FIG. 18 is a block diagram of the modules of an interaction module according to an embodiment;
FIG. 19 is a block diagram of the modules of an interaction module according to another embodiment;
FIG. 20 is a block diagram of the modules of an interaction module according to yet another embodiment;
FIG. 21 is a block diagram of the modules of an interaction module according to an embodiment;
FIG. 22 is a block diagram of the modules of a user attribute value transfer apparatus according to another embodiment;
FIG. 23 is a schematic flowchart of a user attribute value transfer method according to an embodiment;
FIG. 24 is a hardware structural block diagram of a user terminal according to an embodiment.
Detailed Description
To further explain the technical means adopted by the present invention to achieve the intended objectives and their effects, specific implementations, structures, features, and effects of the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
FIG. 1 is a schematic architectural diagram of a user attribute value transfer system according to a first embodiment of the present invention; the user attribute value transfer system can be applied to a network system. As shown in FIG. 1, the user attribute value transfer system of this embodiment includes a server system 10 and multiple user terminals 20. The server system 10 may include one or more servers 11 and a database 12. The server 11 can read or modify data stored in the database 12. In this embodiment, the database 12 may store data objects in one-to-one correspondence with users. These data objects can serve different purposes in different application scenarios. For example, in an online game system, a data object may describe a user's virtual character, and the virtual character may have many numeric attributes, such as the number of game coins the character owns and the number of virtual items it owns; such data or quantities can all be stored as attributes of the data object. For another example, in an online trading system, a data object may describe the quantity of resources a user owns (whether virtual resources or physical assets), such as the user's bank account balance or the quantity of a certain commodity.
In a relational database, a data object may correspond to one or more records, and an attribute may correspond to, for example, one or more fields. In a database stored as files, a data object may correspond to one or more files, or to a fragment of a file.
It can be understood that, whatever storage model is adopted in the database, the essence is that users are mapped to one or more data objects. By operating on these data objects, the server 11 can implement value transfer operations between users, thereby realizing functions such as online trading, payment, and gifting of virtual items or physical goods.
In this embodiment, the multiple user terminals 20 include a first user terminal 21, a second user terminal 22, a third user terminal 23, and a fourth user terminal 24. Specific examples of these user terminals 20 may include, but are not limited to, smartphones, tablet computers, desktop computers, notebook computers, and wearable devices. The user terminals 20 can access the Internet in different ways and exchange network data with the server system 10.
For example, the first user terminal 21 is a smartphone that accesses the Internet through a mobile base station 31; the second user terminal 22 and the third user terminal 23 access the Internet through the same mobile base station 32; and the fourth user terminal 24 accesses the Internet through a WiFi hotspot 33.
Users use the various network services provided by the server system 10 by operating the user terminals 20 to exchange network data with the server system 10. In the architecture shown in FIG. 1, the first user terminal 21, the second user terminal 22, the third user terminal 23, and the fourth user terminal 24 correspond one-to-one to users A, B, C, and D, respectively.
The user attribute value transfer system of this embodiment is used for performing attribute value transfer operations between different users. The specific workflow of the user attribute value transfer system of this embodiment is described below with reference to specific application scenarios.
Referring to FIG. 2, which is an interaction sequence diagram of the user attribute value transfer system of this embodiment performing an attribute value transfer operation. First, an initiating user (the user who initiates the attribute value transfer operation; in this embodiment, user A) starts an application installed in the first user terminal 21 (which may be a standalone application or a functional module within an application). Accordingly, after being started, the application displays its interface, in which the user can select a photo stored in the first user terminal 21 or start the shooting function of the first user terminal 21 to take a photo in real time. The selected photo or the photo taken in real time needs to include at least the transaction parties with whom the attribute value transfer operation is to be performed. For example, in this embodiment, the transaction parties may be users B, C, and D.
After acquiring the photo selected by the user, the first user terminal 21 implements, through the application and based on the selected photo, an interaction mechanism with the user to determine the transaction parties with whom the current user is to perform the attribute value transfer, as well as the target attribute value for each transaction party.
Specifically, the first user terminal 21, through the application, may first perform face recognition processing to recognize all faces in the photo, and output the photo selected by the user in the user interface. The purpose of outputting the photo here is to allow the user to select faces in the photo. Referring to FIG. 3, which is a schematic diagram of the interface in which the photo is output in the first user terminal 21. As shown in FIG. 3, a photo 102 is displayed in the user interface 101, and the photo 102 contains three faces 103. When the first user terminal 21 includes a touch screen, in order to detect the user's face selection instruction on the photo 102, the first user terminal 21 may track events of the user clicking or touching the first user terminal 21. That is, the face selection instruction may include user input operations such as clicks or touches.
In a specific implementation, the user's face selection instruction is detected in the following manner. During face recognition, the regions where all faces in the photo 102 are located are identified. In the user interface 101, click or touch events of the parent container of the photo 102 can be monitored, and when these events are triggered, a process of determining whether a face has been selected is performed. The parent container refers to the interface element (or control) that holds the photo 102. Taking the Android system as an example, the parent container may be, for example, an ImageView control; other operating systems differ from Android, but they all provide controls with similar functions.
For a touch screen, the process of determining whether a face has been selected is specifically as follows: acquire the coordinates of the contact point; determine whether the coordinates of the contact point are within a region where a face is located; and if the coordinates of the contact point are within the region where a face is located, determine that the user has selected the corresponding face.
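By way of illustration only, the sketch below shows this hit test in plain Java; the FaceRegion type and its fields are hypothetical stand-ins for the face regions produced by face recognition, not part of the embodiment.

```java
import java.util.List;
import java.util.Optional;

public class FaceHitTest {
    // Hypothetical holder for one recognized face region; (left, top) is the
    // upper-left corner of the region inside the displayed photo.
    record FaceRegion(String faceId, int left, int top, int width, int height) {
        boolean contains(int x, int y) {
            return x >= left && x < left + width && y >= top && y < top + height;
        }
    }

    // Returns the face whose region contains the contact point, if any.
    static Optional<FaceRegion> hitTest(List<FaceRegion> faces, int x, int y) {
        return faces.stream().filter(f -> f.contains(x, y)).findFirst();
    }

    public static void main(String[] args) {
        List<FaceRegion> faces = List.of(
                new FaceRegion("faceB", 40, 60, 80, 100),
                new FaceRegion("faceC", 160, 55, 80, 100));
        // A contact at (70, 90) falls inside faceB's region.
        hitTest(faces, 70, 90).ifPresentOrElse(
                f -> System.out.println("face selection instruction: " + f.faceId()),
                () -> System.out.println("no face selected"));
    }
}
```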
In another specific implementation, the user's face selection instruction is detected in the following manner. During face recognition, the regions where the faces in the photo 102 are located are identified. Then, a corresponding marker object 104 is displayed in the user interface 101 at the position of each face; the marker object 104 may be, for example, a border or a transparent floating layer. The application also listens for click or touch events on the marker objects. Once a click or touch event on a marker object 104 is triggered, it can be determined that the user has selected the corresponding face. Compared with the previous manner, this manner can make full use of the system's built-in click/touch event triggering mechanism, without comparing the contact coordinates against the face regions on every touch. In the example shown in FIG. 3, the shape of the marker object 104 may match the face; however, it can be understood that the marker object 104 may also have a regular shape, such as a rectangle, circle, or square. In that case, detecting click or touch events of the marker object 104 makes the coordinate comparison much simpler and more efficient.
After detecting a user instruction selecting a face — for example, when a marker object 104 is clicked or touched — the first user terminal 21 may display a text input box 105, prompting the user to enter a value corresponding to the selected face; this value represents the attribute value to be transferred between the current user and the user corresponding to the target face. In a specific implementation, a positive number indicates that the current user is to transfer the attribute value to the user corresponding to the selected target face, and a negative number indicates that the user corresponding to the target face is to transfer the attribute value to the current user. Of course, the opposite convention may also be agreed in advance: a positive number indicates that the user corresponding to the target face is to transfer the attribute value to the current user, and a negative number indicates that the current user is to transfer the attribute value to the user corresponding to the selected target face.
After the user completes the selection of a target face and the input of the target attribute value, the value entered by the user can be acquired and saved. Referring to FIG. 4, further, to keep the user informed of the entered values, prompt information may also be displayed in the user interface 101; for example, a prompt box 106 is displayed above the target face, and its content includes the value the user has just entered. The user can select the target face again at any time to adjust the entered value.
Referring to FIG. 4, the user interface 101 may further include a submit button 107. After the submit button 107 is pressed, the application generates an attribute value transfer request according to the values entered by the user. The attribute value transfer request may include, for example, the recognition information corresponding to each recognized face and the value corresponding to it; the attribute value transfer request may also include the identification information of the current user. After the attribute value transfer request is generated, it may further be encrypted. It can then be sent to the server 11 using a predetermined network protocol (such as the Hypertext Transfer Protocol).
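As a rough sketch of assembling such a request, the following plain-Java example builds the request body by hand; the JSON field names and the user/face identifiers are assumptions for illustration — the embodiment only specifies that the request carries each recognized face's recognition information, the corresponding value, and the current user's identification, and may be encrypted before transmission.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class TransferRequestBuilder {
    // Builds a JSON body mapping each face's recognition info to its value.
    // Field names ("initiator", "entries", ...) are illustrative assumptions.
    static String buildRequestBody(String initiatorId, Map<String, Double> valueByFaceInfo) {
        StringJoiner entries = new StringJoiner(",", "[", "]");
        valueByFaceInfo.forEach((faceInfo, value) ->
                entries.add("{\"faceInfo\":\"" + faceInfo + "\",\"value\":" + value + "}"));
        return "{\"initiator\":\"" + initiatorId + "\",\"entries\":" + entries + "}";
    }

    public static void main(String[] args) {
        Map<String, Double> values = new LinkedHashMap<>();
        values.put("faceInfoB", 20.0);
        values.put("faceInfoC", 10.0);
        values.put("faceInfoD", 15.0);
        // In the embodiment, this body would be encrypted and sent to the server.
        System.out.println(buildRequestBody("userA", values));
    }
}
```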
In the above manner, the user must enter a target attribute value once for each target face. However, the input of target attribute values is not limited to this manner; for example, the user may first select the target faces from the photo. Referring to FIG. 5, in a manner similar to FIG. 3, corresponding marker objects 108 may be displayed in the user interface over the regions where the faces are located. Both the marker object 108 and the marker object 104 of FIG. 3 can respond to the user's clicks or touches, but their specific responses differ. In the manner shown in FIG. 5, clicking or touching a marker object 108 toggles the selected/unselected state of the corresponding target face. That is, the user can select target faces by clicking on the marker objects 108. In the default state, all of the marker objects 108 may be unselected, or all may be selected. After completing the selection of target faces, the user can click the button 109 to confirm the selection.
Referring to FIG. 5, after the button 109 is clicked, the application shows a popup window 110, which includes an input box 111 and a button 112; the input box 111 is for the user to enter a total. After the input is completed, the user can confirm it with the button 112. After the button 112 is clicked, the application determines all target faces selected by the user according to the selected/unselected states of the marker objects 108, and the target attribute value of each target face is the total divided by the number of target faces selected by the user. Then, similarly to what happens after the button 107 is pressed, the application sends an attribute value transfer request to the server 11.
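The per-face value in this mode is simply the entered total divided by the number of selected faces. A minimal sketch follows, working in integer cents; the handling of an indivisible remainder is an assumption added for completeness, since the embodiment only states value = total ÷ count.

```java
public class EqualSplit {
    // Splits a total (in cents) evenly across the selected target faces.
    // The embodiment specifies value = total / count; spreading any
    // indivisible remainder is an assumption made here for completeness.
    static long[] split(long totalCents, int selectedFaceCount) {
        long share = totalCents / selectedFaceCount;
        long remainder = totalCents % selectedFaceCount;
        long[] shares = new long[selectedFaceCount];
        for (int i = 0; i < selectedFaceCount; i++) {
            // Distribute the remainder one cent at a time over the first faces.
            shares[i] = share + (i < remainder ? 1 : 0);
        }
        return shares;
    }

    public static void main(String[] args) {
        // A total of 45.00 split across 3 selected faces -> 15.00 each.
        for (long s : split(4500, 3)) System.out.println(s + " cents");
    }
}
```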
In short, any manner of selecting the target users for the attribute value transfer by clicking on faces in a photo can be applied to this embodiment, and the target attribute value of each target user may be determined by averaging or entered individually.
After receiving the attribute value transfer request sent by the first user terminal 21, the server 11 performs the corresponding value transfer operation. First, the server 11 parses out from the attribute value transfer request the users involved in the attribute value transfer and the corresponding target attribute values. For example, the server 11 obtains the corresponding user identifier (ID) according to the face recognition information. It can be understood that, to obtain the corresponding user ID from face recognition information, each user's face recognition information needs to be stored in the database 12 in advance; then, after the value transfer request sent by the first user terminal 21 is received, the face recognition information in the attribute value transfer request can be acquired and searched for in the database to obtain the corresponding user ID.
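The embodiment does not specify how the face recognition information is searched in the database; the sketch below assumes it is a numeric feature vector and matches it against pre-stored per-user vectors by cosine similarity with a threshold, which is merely one plausible realization.

```java
import java.util.Map;

public class FaceInfoLookup {
    // Cosine similarity between two feature vectors of equal length.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Returns the user ID whose stored vector best matches the query, or null
    // if no candidate clears the threshold (the threshold is an assumption).
    static String lookupUserId(double[] query, Map<String, double[]> storedByUserId) {
        String best = null;
        double bestScore = 0.8; // minimum acceptable similarity, assumed
        for (Map.Entry<String, double[]> e : storedByUserId.entrySet()) {
            double score = cosine(query, e.getValue());
            if (score > bestScore) {
                bestScore = score;
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, double[]> stored = Map.of(
                "userB", new double[]{0.9, 0.1, 0.3},
                "userC", new double[]{0.2, 0.8, 0.5});
        System.out.println(lookupUserId(new double[]{0.88, 0.12, 0.31}, stored));
    }
}
```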
In a specific application scenario, parsing the attribute value transfer request yields the following data: user A: −45; user B: 20; user C: 10; user D: 15. After these data are parsed out, the database 12 can be modified by adding the parsed value to each involved user's attribute value. It can be understood that, with the values in this scenario, user A's attribute value decreases by 45, user B's increases by 20, user C's increases by 10, and user D's increases by 15. Therefore, in this scenario, user A transfers a certain amount of attribute value to each of user B, user C, and user D. It can be understood that the attribute value changes of all users involved in one value transfer operation should sum to zero.
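A compact sketch of this server-side update follows, with an in-memory map standing in for the database 12 and an explicit check of the zero-sum invariant noted above:

```java
import java.util.HashMap;
import java.util.Map;

public class TransferApplier {
    // Applies the parsed deltas to each user's attribute value, after
    // checking the zero-sum invariant described in the scenario.
    static void apply(Map<String, Long> balances, Map<String, Long> deltas) {
        long sum = deltas.values().stream().mapToLong(Long::longValue).sum();
        if (sum != 0) {
            throw new IllegalArgumentException("deltas must sum to zero, got " + sum);
        }
        deltas.forEach((userId, delta) -> balances.merge(userId, delta, Long::sum));
    }

    public static void main(String[] args) {
        Map<String, Long> balances = new HashMap<>(
                Map.of("userA", 100L, "userB", 0L, "userC", 5L, "userD", 0L));
        // The first scenario: A −45, B +20, C +10, D +15.
        apply(balances, Map.of("userA", -45L, "userB", 20L, "userC", 10L, "userD", 15L));
        System.out.println(balances); // userA=55, userB=20, userC=15, userD=15
    }
}
```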
In another specific application scenario, parsing the attribute value transfer request yields the following data: user A: −25; user B: 20; user C: −10; user D: 15. After these data are parsed out, the database 12 can be modified by adding the parsed value to each involved user's attribute value. With the values in this scenario, user A's attribute value decreases by 25, user B's increases by 20, user C's decreases by 10, and user D's increases by 15. Therefore, in this scenario, user A and user C are the transferors of attribute values, while user B and user D are the transferees.
In another specific application scenario, parsing the attribute value transfer request yields the following data: user A: 30; user B: −10; user C: −10; user D: −10. After these data are parsed out, the database 12 can be modified by adding the parsed value to each involved user's attribute value. With the values in this scenario, user A's attribute value increases by 30, while the attribute values of user B, user C, and user D each decrease by 10. Therefore, in this scenario, user A collects attribute values from each of the other users.
In another specific application scenario, parsing the attribute value transfer request yields the following data: user A: 30; user B: −10; user C: −10; user D: −10. After these data are parsed out, the database 12 can be modified, but unlike the above manners, the parsed value is not directly added to each involved user's attribute value; instead, a certain proportion (for example, 1%) of the parsed value is extracted and transferred to a designated third-party user. Therefore, in this case user A's attribute value increases only by 30 × (1 − 1%) = 29.7.
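The arithmetic of this commission variant can be illustrated in a few lines; the class and method names here are assumed:

```java
public class Commission {
    // Splits a credited amount into the recipient's net share and the
    // commission routed to the designated third-party user.
    static double[] splitWithCommission(double credit, double rate) {
        double fee = credit * rate;
        return new double[]{credit - fee, fee};
    }

    public static void main(String[] args) {
        double[] parts = splitWithCommission(30.0, 0.01);
        System.out.println("recipient: " + parts[0] + ", third party: " + parts[1]);
        // recipient: 29.7, third party: 0.3
    }
}
```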
In the above manners, value transfer operations between different users can be implemented. It can be understood that, in these manners, the value transfer operation is performed directly as soon as the initiating user issues the attribute value transfer request. This causes no data security problem in scenarios where the initiating user transfers attribute values out to other users; in scenarios where the initiating user collects attribute values from other users, however, it may cause data security problems.
Therefore, in the process of executing the attribute value transfer request, referring to FIG. 6, an authorization confirmation process by the other users may also be performed before the value modification operation is formally executed. For example, for an attribute value transfer request, the server 11 may push a confirmation request to each involved user (other than the initiating user), and the other users can receive the confirmation request after starting the corresponding application on their user terminals. For example, user B may receive the confirmation request pushed by the server 11 through the application in the second user terminal 22, or actively query the server 11 for confirmation requests addressed to him. The same applies to the other users, and the description is not repeated.
After receiving the confirmation request sent by the server 11, the application can output the confirmation request in the user interface and allow the user to confirm it. Referring to FIG. 7, which is a schematic diagram of the interface after the application in the second user terminal 22 receives the confirmation request sent by the server 11. The user interface 201 includes a confirmation request 202, a button 203, and a button 204. The button 203 is for user B to confirm collecting the attribute value transferred by user A, while the button 204 is for user B to reject the attribute value transferred by user A. After the button 203 or the button 204 is pressed, the application generates the corresponding confirmation information and sends it to the server 11. Accordingly, only after the confirmation request is confirmed by user B — that is, after receiving the confirmation information indicating acceptance sent by the second user terminal 22 — does the server 11 perform the value transfer operation corresponding to user B, namely transferring the corresponding value from user A's attribute value to user B's attribute value. Performing this procedure for every user completes the entire attribute value transfer operation.
In addition, to further improve security, after the button 203 is pressed, the confirmation may not be sent directly; instead, as shown in FIG. 8, the user is asked to perform an additional identity verification, for example by entering a confirmation password reserved in the network system, a dynamic password, an SMS verification code, and so on. After the button 205 is pressed, the second user terminal 22 verifies the verification information entered by the user, and only after the verification succeeds does it send to the server 11 the confirmation information indicating that the user accepts the attribute value transfer request.
It can be understood that the manner of letting the user confirm the confirmation request is not limited to that shown in the interface 201. Referring to FIG. 9, in another example, the user interface 201 includes a confirmation request 202 and a button 205. After the button 205 is pressed, the application starts the camera function to take a photo of the current user, and sends the photo, or the face recognition information of the photo, to the server 11 for verification. According to this implementation, face recognition replaces a user-entered password or other identity verification measures, which can improve user convenience and security.
If a photo is received, the server 11 first performs face recognition analysis. Taking user B as an example, the attribute value transfer request includes user B's face recognition information; if the face recognition information of user B included in the attribute value transfer request matches the face recognition information uploaded by the second user terminal 22, or matches the face recognition analysis result of the uploaded photo, the user's authorization is confirmed, and the server 11 can perform the value transfer operation, that is, modify the data in the database 12.
Referring to FIG. 10, it can be understood that at a given moment the same user may receive multiple confirmation requests. In that case, all confirmation requests can be confirmed in one operation — for example, by sending confirmation information once for each confirmation request, or by including the confirmations of all requests in one piece of confirmation information. In this way, after receiving the confirmation information sent by the second user terminal 22, the server 11 performs the attribute value transfer operation according to the confirmation information, that is, modifies the attribute values in the database.
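A minimal sketch of this authorization step on the server side follows — pending credits keyed by recipient and applied only upon confirmation. Debiting the initiator at enqueue time and refunding on rejection are design assumptions; the embodiment specifies only that the transfer to a given user is performed after that user's confirmation is received.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfirmationQueue {
    private final Map<String, Long> balances;
    // Pending per-recipient credits awaiting each recipient's confirmation.
    private final Map<String, Long> pendingByUserId = new HashMap<>();

    ConfirmationQueue(Map<String, Long> balances) {
        this.balances = balances;
    }

    // Accepts the transfer request: the initiator is debited at once (an
    // assumption); recipients are credited only when they confirm.
    void enqueue(String initiatorId, Map<String, Long> creditByUserId) {
        long total = creditByUserId.values().stream().mapToLong(Long::longValue).sum();
        balances.merge(initiatorId, -total, Long::sum);
        creditByUserId.forEach((u, v) -> pendingByUserId.merge(u, v, Long::sum));
    }

    // Recipient confirmed (e.g., after identity verification): apply credit.
    void confirm(String userId) {
        Long credit = pendingByUserId.remove(userId);
        if (credit != null) balances.merge(userId, credit, Long::sum);
    }

    // Recipient rejected: refund the initiator instead (an assumption).
    void reject(String userId, String initiatorId) {
        Long credit = pendingByUserId.remove(userId);
        if (credit != null) balances.merge(initiatorId, credit, Long::sum);
    }

    public static void main(String[] args) {
        Map<String, Long> balances = new HashMap<>(Map.of("userA", 100L, "userB", 0L));
        ConfirmationQueue q = new ConfirmationQueue(balances);
        q.enqueue("userA", Map.of("userB", 20L));
        q.confirm("userB");
        System.out.println(balances); // userA=80, userB=20
    }
}
```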
According to the user attribute value transfer system of this embodiment, a user can initiate an attribute value transfer operation through a single photo, simply by clicking faces on the photo and entering the values to be transferred, without entering the users' account numbers, which makes it more convenient for a user to initiate an attribute value transfer operation. Moreover, the face recognition approach can prevent input errors that occur when entering user accounts, thereby improving the security of the attribute value transfer operation.
FIG. 11 is a flowchart of a user attribute value transfer method according to an embodiment of the present invention; the method can be applied to a network system. As shown in FIG. 11, the method of this embodiment includes the following steps:
Step S101: acquire a photo.
Specifically, the first user terminal 21 acquires the photo selected by the initiating user. The initiating user is the user who initiates the attribute value transfer operation; in this embodiment, the initiating user is user A. The initiating user starts an application installed in the first user terminal 21 (which may be a standalone application or a functional module within an application). Accordingly, after being started, the application displays its interface, in which the user can select a photo stored in the first user terminal 21 or start the shooting function of the first user terminal 21 to take a photo in real time. The selected photo or the photo taken in real time needs to include at least the transaction parties with whom the attribute value transfer operation is to be performed.
Step S102: perform face recognition processing on the photo to recognize faces.
Specifically, the first user terminal 21 performs face recognition processing on the photo to recognize all the faces. The algorithm used for face recognition is not limited in any way; any algorithm that can recognize faces accurately and effectively can be applied to this embodiment.
In an embodiment, face recognition may use any one of a template-matching-based face recognition algorithm, a subspace-analysis face recognition algorithm, Locality Preserving Projections (LPP) face recognition, a Principal Component Analysis (PCA) algorithm, the eigenface method (based on the Karhunen–Loève transform), an artificial neural network face recognition algorithm, and a support vector machine face recognition algorithm. The core idea of the template-matching-based face recognition algorithm is to use the regularities of human facial features to build a three-dimensional, adjustable model framework; after the position of the face is located, the model framework is used to locate and adjust the facial feature parts, coping with influencing factors such as viewing angle, occlusion, and expression changes in the face recognition process.
Step S103: determine, according to an input instruction, the target face selected from the recognized faces and the corresponding target attribute value.
Specifically, the first user terminal 21 determines, according to the instruction entered by the user, the target faces selected from all the recognized faces and the corresponding target attribute values. After acquiring the photo selected by the user, the first user terminal 21 implements an interaction mechanism with the user based on the selected photo to determine the transaction parties (target faces) with whom the current user is to perform the attribute value transfer, as well as the target attribute value of each transaction party.
Step S104: generate an attribute value transfer request according to the target values and the corresponding face recognition information.
Specifically, the first user terminal 21 generates the attribute value transfer request according to the target attribute values and the corresponding face recognition information. Through the application, the first user terminal 21 generates the attribute value transfer request according to the values entered by the user. The attribute value transfer request may include, for example, the recognition information corresponding to each recognized face and the value corresponding to it; the attribute value transfer request may also include the identification information of the current user. After the attribute value transfer request is generated, it may further be encrypted.
Step S105: send the attribute value transfer request to the server, so that the server performs the attribute value transfer operation according to the attribute value transfer request.
Specifically, after generating the attribute value transfer request, the first user terminal 21 can send it to the server 11 using a predetermined network protocol (such as the Hypertext Transfer Protocol). After receiving the attribute value transfer request, the server 11 performs the value transfer operation according to the attribute value transfer request.
According to the user attribute value transfer method of this embodiment, a user can initiate an attribute value transfer operation through a photo, simply by clicking faces on the photo and entering the values to be transferred, without entering the users' account numbers, which makes it more convenient for a user to initiate an attribute value transfer operation. Moreover, the face recognition approach can prevent input errors that occur when entering user accounts, thereby improving the security of the attribute value transfer operation. Furthermore, automated, highly accurate face recognition technology is used to automatically determine the objects of the attribute value transfer operation, which avoids human misoperation as far as possible, improves the efficiency and accuracy of determining those objects, and further improves the security of the attribute value transfer operation.
In an embodiment, before step S105, the method further includes: pulling the user information corresponding to the target face from the server and displaying it; and acquiring a confirmation instruction for the displayed user information, step S105 being performed according to the confirmation instruction. In this embodiment, by pulling the user information corresponding to the target face from the server, the objects of the attribute value transfer operation can be confirmed, which prevents the target attribute value from being transferred to a similar-looking non-target user.
In an embodiment, step S105 includes: sending the attribute value transfer request to the server, so that the server verifies the social relationship according to the attribute value transfer request in combination with the user relationship chain, and performs the attribute value transfer operation according to the attribute value transfer request after the verification succeeds.
In this embodiment, the server may verify the social relationship between the two users according to the initiating user's relationship chain and the face recognition information in the attribute value transfer request. If a social relationship exists, the attribute value transfer operation can be performed directly according to the attribute value transfer request. If no social relationship exists, the attribute value transfer request may be rejected, or the first user terminal may be asked to confirm, and the attribute value transfer operation is performed according to the attribute value transfer request after the confirmation instruction from the first user terminal is received.
In this embodiment, the social relationship is verified in combination with the user relationship chain, so that the attribute value transfer operation can be performed quickly when a social relationship exists between the two parties, and the security of the attribute value transfer operation is safeguarded by the social relationship. It also prevents the target attribute value from being transferred to a similar-looking non-target user with whom no social relationship exists.
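A minimal sketch of this check, with the relationship chain modeled as a per-user friend set (an assumed representation; the embodiment does not specify how the chain is stored):

```java
import java.util.Map;
import java.util.Set;

public class RelationChainCheck {
    // Verifies whether the user resolved from the face recognition info is
    // in the initiating user's relationship chain.
    static boolean hasSocialRelation(Map<String, Set<String>> chains,
                                     String initiatorId, String targetId) {
        return chains.getOrDefault(initiatorId, Set.of()).contains(targetId);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> chains = Map.of("userA", Set.of("userB", "userD"));
        System.out.println(hasSocialRelation(chains, "userA", "userB")); // true -> transfer directly
        System.out.println(hasSocialRelation(chains, "userA", "userC")); // false -> reject or ask to confirm
    }
}
```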
Referring to FIG. 12, in an embodiment, step S103 specifically includes the following steps:
Step S106: display, in response to an input face selection instruction, an input interface for receiving user input.
Specifically, the first user terminal 21 displays, in response to the face selection instruction entered by the user, an input interface for receiving user input.
In a specific implementation, the user's face selection instruction is detected in the following manner. During face recognition, the regions where the faces in the photo 102 are located are identified. Then, a corresponding marker object 104 is displayed in the user interface 101 at the position of each face; the marker object 104 may be, for example, a border or a transparent floating layer. The application also listens for click or touch events on the marker objects. Once a click or touch event on a marker object 104 is triggered, it can be determined that the user has selected the corresponding face. This manner can make full use of the system's built-in click/touch event triggering mechanism, without comparing the contact coordinates against the face regions on every touch. In the example shown in FIG. 3, the shape of the marker object 104 matches the face; however, it can be understood that the marker object 104 may also have a regular shape, such as a rectangle, circle, or square. In that case, detecting click or touch events of the marker object 104 makes the coordinate comparison much simpler and more efficient.
Step S107: determine the target face selected from the recognized faces corresponding to the face selection instruction, and determine the target attribute value corresponding to the target face as the value entered in the input interface.
Specifically, the first user terminal 21 determines the target face selected from the recognized faces corresponding to the face selection instruction, and determines the target attribute value corresponding to the target face as the value entered by the user in the input interface.
After a user instruction selecting a face is detected — for example, when a marker object 104 is clicked or touched — a text input box 105 can be displayed, prompting the user to enter a value corresponding to the selected face; this value represents the attribute value to be transferred between the current user and the user corresponding to the target face. In a specific implementation, a positive number indicates that the current user is to transfer the attribute value to the user corresponding to the selected target face, and a negative number indicates that the user corresponding to the target face is to transfer the attribute value to the current user. Of course, the opposite convention may also be agreed in advance.
According to this embodiment, the input interface can be triggered by clicking a face, and the corresponding target attribute value is then entered in the input interface, without entering the user's account number, which makes it more convenient for a user to initiate an attribute value transfer operation. Moreover, the face recognition approach can prevent input errors that occur when entering user accounts, thereby improving the security of the attribute value transfer operation. Furthermore, face recognition can recognize all faces in a photo efficiently and accurately, so that the corresponding target attribute values can be quickly entered in the input interfaces corresponding to the individual faces, improving the efficiency of the attribute value transfer operation. Moreover, when many target faces need to be determined, multiple target faces and the corresponding target attribute values can be determined quickly in a single photo, further improving the efficiency of the attribute value transfer operation.
Referring to FIG. 13, in an embodiment, step S103 specifically includes the following steps:
Step S108: determine, according to the input instruction, the target faces selected from the recognized faces, and acquire an input total.
Specifically, the first user terminal 21 determines, according to the instruction entered by the user, the target faces selected from the recognized faces, and acquires the total entered by the user.
Step S109: determine the target attribute value of each selected target face as the total divided by the number of selected target faces.
For example, the user may first select the target faces from the photo. Referring to FIG. 5, in a manner similar to FIG. 3, corresponding marker objects 108 may be displayed in the user interface over the regions where the faces are located. Both the marker object 108 and the marker object 104 of FIG. 3 can respond to the user's click or touch events, but their specific responses differ. In the manner shown in FIG. 5, clicking or touching a marker object 108 toggles the selected/unselected state of the corresponding target face. That is, the user can select target faces by clicking on the marker objects 108. In the default state, all of the marker objects 108 may be unselected, or all may be selected. After completing the selection of target faces, the user can click the button 109 to confirm the selection.
After the button 109 is clicked, the application shows a popup window 110, which includes an input box 111 and a button 112; the input box 111 is for the user to enter a total. After the input is completed, it can be confirmed with the button 112. After the button 112 is clicked, the application determines all target faces selected by the user according to the selected/unselected states of the marker objects 108, and the target attribute value of each target face is the total divided by the number of target faces selected by the user. Then, similarly to what happens after the button 107 is pressed, the application sends an attribute value transfer request to the server 11.
According to this embodiment, the total needs to be entered only once; there is no need to enter a target attribute value for each target face, which improves the efficiency of the attribute value transfer operation and further improves the convenience of the user's input operations.
Referring to FIG. 14, in an embodiment, the following steps are further included between step S102 and step S103:
Step S110: output the photo in a user interface, the user interface being displayed on a touch screen.
Step S111: after detecting contact between an operation object and the touch screen, determine whether the coordinates of the contact point are within the region corresponding to a face in the photo; if so, a face selection instruction is detected.
During face recognition, the regions where all faces in the photo 102 are located are identified. In the user interface 101, click or touch events of the parent container of the photo 102 can be monitored, and when these events are triggered, a process of determining whether a face has been selected is performed. The parent container refers to the interface element (or control) that holds the photo 102. Taking the Android system as an example, the parent container may be, for example, an ImageView control; other operating systems differ from Android, but they all provide controls with similar functions.
For a touch screen, the process of determining whether a face has been selected is specifically as follows: acquire the coordinates of the contact point; determine whether the coordinates of the contact point are within a region where a face is located; and if so, determine that the user has selected the corresponding face.
According to this embodiment, the user interface is displayed on a touch screen, so that when an object touches a face in the photo displayed on the touch screen, the detection of the face selection instruction is triggered. The user can thus determine the target face directly and efficiently through the object, further improving operational convenience.
Referring to FIG. 15, in an embodiment, the following steps are further included between step S102 and step S103:
Step S112: output the photo in a user interface.
As shown in FIG. 3, a photo 102 is displayed in the user interface 101, and the photo 102 contains three faces 103.
Step S113: generate, in the user interface, marker objects in one-to-one correspondence with the faces in the photo.
A corresponding marker object 104 is displayed in the user interface 101 at the position of each face; the marker object 104 may be, for example, a border or a transparent floating layer.
Step S115: detect a face selection instruction if a registered click event of a marker object is triggered.
Specifically, the first user terminal 21 may register the click events of the marker objects before step S115, and then, when performing step S115, monitor the triggering of the click events of the marker objects. If a click event of a marker object is triggered, a face selection instruction is detected.
According to this embodiment, the faces are marked by the marker objects 104, which makes full use of the system's built-in click/touch event triggering mechanism; there is no need to compare the contact coordinates against the face regions on every touch, which reduces the computational load of repeated coordinate comparison and improves the efficiency of detecting face selection instructions.
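As a rough illustration of this registration pattern, the pure-Java sketch below stands in for a UI toolkit's click-listener mechanism (such as Android's, which the embodiment alludes to); all class and method names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class MarkerObjects {
    // A marker object overlaying one face; clicking it reports the face ID.
    static class Marker {
        final String faceId;
        private Consumer<String> clickHandler;

        Marker(String faceId) { this.faceId = faceId; }

        // Step S113/S115 analogue: register the click event once...
        void setOnClick(Consumer<String> handler) { this.clickHandler = handler; }

        // ...and let the toolkit fire it; no coordinate comparison is needed.
        void click() { if (clickHandler != null) clickHandler.accept(faceId); }
    }

    public static void main(String[] args) {
        List<Marker> markers = new ArrayList<>();
        for (String id : new String[]{"faceB", "faceC", "faceD"}) {
            Marker m = new Marker(id);
            m.setOnClick(faceId -> System.out.println("face selection instruction: " + faceId));
            markers.add(m);
        }
        markers.get(1).click(); // simulates the user touching faceC's marker
    }
}
```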
Referring to FIG. 16, in an embodiment, the following step is further included after step S103:
Step S116: after the target attribute value corresponding to each face is determined, display the target value on the photo.
Referring to FIG. 4, further, to keep the user informed of the entered values, prompt information may also be displayed in the user interface 101; for example, a prompt box 106 is displayed above the target face, and its content includes the value the user has just entered. The user can select the target face again at any time to adjust the entered value.
Referring to FIG. 17, in an embodiment, a user attribute value transfer apparatus is provided. As shown in FIG. 17, the apparatus of this embodiment includes: an acquisition module 21, a face recognition module 22, an interaction module 23, a request generation module 24, and a request sending module 25.
The acquisition module 21 is configured to acquire a photo.
An initiating user (the user who initiates the attribute value transfer operation; in this embodiment, user A) starts an application installed in the first user terminal 21 (which may be a standalone application or a functional module within an application). Accordingly, after being started, the application displays its interface, in which the user can select a photo stored in the first user terminal 21 or start the shooting function of the first user terminal 21 to take a photo in real time. The selected photo or the photo taken in real time needs to include at least the transaction parties with whom the attribute value transfer operation is to be performed.
The face recognition module 22 is configured to perform face recognition processing on the photo to recognize faces.
The algorithm used for face recognition is not limited in any way; any algorithm that can recognize faces accurately and effectively can be applied to this embodiment.
The interaction module 23 is configured to determine, according to an input instruction, the target faces selected from the recognized faces and the corresponding target attribute values.
After the photo selected by the user is acquired, an interaction mechanism with the user needs to be implemented based on the selected photo to determine the transaction parties (target faces) with whom the current user is to perform the attribute value transfer, as well as the target attribute value of each transaction party.
The request generation module 24 is configured to generate an attribute value transfer request according to the target values and the corresponding face recognition information.
The application generates the attribute value transfer request according to the values entered by the user. The attribute value transfer request may include, for example, the following information: the recognition information corresponding to each recognized face and the value corresponding to it; the attribute value transfer request may also include the identification information of the current user. After the attribute value transfer request is generated, it may further be encrypted.
The request sending module 25 is configured to send the attribute value transfer request to the server, so that the server performs the attribute value transfer operation according to the attribute value transfer request.
After the attribute value transfer request is generated, it can be sent to the server 11 using a predetermined network protocol (such as the Hypertext Transfer Protocol). After receiving the attribute value transfer request, the server 11 performs the value transfer operation according to the attribute value transfer request.
According to the user attribute value transfer apparatus of this embodiment, a user can initiate an attribute value transfer operation through a single photo, simply by clicking faces on the photo and entering the values to be transferred, without entering the users' account numbers, which makes it more convenient for a user to initiate an attribute value transfer operation. Moreover, the face recognition approach can prevent input errors that occur when entering user accounts, thereby improving the security of the attribute value transfer operation.
Referring to FIG. 18, in an embodiment, the interaction module 23 in the user attribute value transfer apparatus specifically includes: a first display unit 231 and a first determining unit 232.
The first display unit 231 is configured to display, in response to an input face selection instruction, an input interface for receiving user input.
In a specific implementation, the user's face selection instruction is detected in the following manner. During face recognition, the regions where the faces in the photo 102 are located are identified. Then, a corresponding marker object 104 is displayed in the user interface 101 at the position of each face; the marker object 104 may be, for example, a border or a transparent floating layer. The application also listens for click or touch events on the marker objects. Once a click or touch event on a marker object 104 is triggered, it can be determined that the user has selected the corresponding face. This manner can make full use of the system's built-in click/touch event triggering mechanism, without comparing the contact coordinates against the face regions on every touch. In the example shown in FIG. 3, the shape of the marker object 104 matches the face; however, it can be understood that the marker object 104 may also have a regular shape, such as a rectangle, circle, or square. In that case, detecting click or touch events of the marker object 104 makes the coordinate comparison much simpler and more efficient.
The first determining unit 232 is configured to determine the target face selected from the recognized faces corresponding to the face selection instruction, and to determine the target attribute value corresponding to the target face as the value entered in the input interface.
After a user instruction selecting a face is detected — for example, when a marker object 104 is clicked or touched — a text input box 105 can be displayed, prompting the user to enter a value corresponding to the selected face; this value represents the attribute value to be transferred between the current user and the user corresponding to the target face. In a specific implementation, a positive number indicates that the current user is to transfer the attribute value to the user corresponding to the selected target face, and a negative number indicates that the user corresponding to the target face is to transfer the attribute value to the current user. Of course, the opposite convention may also be agreed in advance.
According to this embodiment, the target attribute values can be entered face by face by clicking on the faces, without entering the users' account numbers, which makes it more convenient for a user to initiate an attribute value transfer operation. Moreover, the face recognition approach can prevent input errors that occur when entering user accounts, thereby improving the security of the attribute value transfer operation.
Referring to FIG. 19, in an embodiment, the interaction module 23 in the user attribute value transfer apparatus specifically includes: an obtaining unit 233 and a second determining unit 234.
The obtaining unit 233 is configured to determine, according to the input instruction, the target faces selected from the recognized faces, and to acquire an input total.
The second determining unit 234 is configured to determine the target attribute value of each selected target face as the total divided by the number of selected target faces.
For example, the user may first select the target faces from the photo. Referring to FIG. 5, in a manner similar to FIG. 3, corresponding marker objects 108 may be displayed in the user interface over the regions where the faces are located. Both the marker object 108 and the marker object 104 of FIG. 3 can respond to the user's clicks or touches, but their specific responses differ. In the manner shown in FIG. 5, clicking or touching a marker object 108 toggles the selected/unselected state of the corresponding target face. That is, the user can select target faces by clicking on the marker objects 108. In the default state, all of the marker objects 108 may be unselected, or all may be selected. After completing the selection of target faces, the user can click the button 109 to confirm the selection.
After the button 109 is clicked, the application shows a popup window 110, which includes an input box 111 and a button 112; the input box 111 is for the user to enter a total. After the input is completed, it can be confirmed with the button 112. After the button 112 is clicked, the application determines all target faces selected by the user according to the selected/unselected states of the marker objects 108, and the target attribute value of each target face is the total divided by the number of target faces selected by the user. Then, similarly to what happens after the button 107 is pressed, the application sends an attribute value transfer request to the server 11.
According to this embodiment, the total needs to be entered only once; there is no need to enter a target attribute value for each target face, which further improves the convenience of the user's input operations.
Referring to FIG. 20, in an embodiment, the interaction module 23 in the user attribute value transfer apparatus further includes: a second display unit 235 and a first detection unit 236.
The second display unit 235 is configured to output the photo in a user interface, the user interface being displayed on a touch screen.
The first detection unit 236 is configured to, after detecting contact between an operation object and the touch screen, determine whether the coordinates of the contact point are within the region corresponding to a face in the photo; if so, a face selection instruction is detected.
During face recognition, the regions where all faces in the photo 102 are located are identified. In the user interface 101, click or touch events of the parent container of the photo 102 can be monitored, and when these events are triggered, a process of determining whether a face has been selected is performed. The parent container refers to the interface element (or control) that holds the photo 102. Taking the Android system as an example, the parent container may be, for example, an ImageView control; other operating systems differ from Android, but they all provide controls with similar functions.
For a touch screen, the process of determining whether a face has been selected is specifically as follows: acquire the coordinates of the contact point; determine whether the coordinates of the contact point are within a region where a face is located; and if so, determine that the user has selected the corresponding face.
According to this embodiment, the detection of face selection instructions can be realized by monitoring click events of the parent container of the displayed photo, which makes for a simple technical solution.
Referring to FIG. 21, in an embodiment, the interaction module 23 in the user attribute value transfer apparatus further includes: a third display unit 237, a fourth display unit 238, and a second detection unit 240.
The third display unit 237 is configured to output the photo in a user interface.
As shown in FIG. 3, a photo 102 is displayed in the user interface 101, and the photo 102 contains three faces 103.
The fourth display unit 238 is configured to generate, in the user interface, marker objects in one-to-one correspondence with the faces in the photo.
A corresponding marker object 104 is displayed in the user interface 101 at the position of each face; the marker object 104 may be, for example, a border or a transparent floating layer.
The second detection unit 240 is configured to detect a face selection instruction if a registered click event of a marker object is triggered.
According to this embodiment, the system's built-in click/touch event triggering mechanism can be fully utilized, without comparing the contact coordinates against the face regions on every touch.
Referring to FIG. 21, in an embodiment, the interaction module 23 further includes an event registration unit 239 configured to register the click events of the marker objects.
Referring to FIG. 22, in an embodiment, the interaction module 23 in the user attribute value transfer apparatus further includes: a prompting unit 241.
The prompting unit 241 is configured to display the target value on the photo after the target attribute value corresponding to each face is determined.
Referring to FIG. 4, further, to keep the user informed of the entered values, prompt information may also be displayed in the user interface 101; for example, a prompt box 106 is displayed above the target face, and its content includes the value the user has just entered. The user can select the target face again at any time to adjust the entered value.
As shown in FIG. 23, in an embodiment, a user attribute value transfer method is provided, which specifically includes the following steps:
Step S2302: the first user terminal acquires a photo.
Step S2304: the first user terminal performs face recognition processing on the photo to recognize faces.
Step S2306: the first user terminal determines, according to an input instruction, the target face selected from the recognized faces and the corresponding target attribute value.
Step S2308: the first user terminal generates an attribute value transfer request according to the target attribute value and the corresponding face recognition information.
Step S2310: the first user terminal sends the attribute value transfer request to the server.
Step S2312: the server performs an attribute value transfer operation according to the attribute value transfer request.
In an embodiment, step S2306 includes: the first user terminal determines, according to the input instruction, the target faces selected from the recognized faces, and acquires an input total; and determines the target attribute value of each selected target face as the total divided by the number of selected target faces.
In an embodiment, step S2306 includes: the first user terminal displays, in response to an input face selection instruction, an input interface for receiving user input; and determines the target face selected from the recognized faces corresponding to the face selection instruction, and determines the target attribute value corresponding to the target face as the value entered in the input interface.
In an embodiment, the user attribute value transfer method further includes: the first user terminal outputs the photo in a user interface, the user interface being displayed on a touch screen; and after detecting contact between an operation object and the touch screen, the first user terminal determines whether the coordinates of the contact point are within the region corresponding to a face in the photo, and if so, a face selection instruction is detected.
In an embodiment, the user attribute value transfer method further includes: the first user terminal outputs the photo in a user interface; generates, in the user interface, marker objects in one-to-one correspondence with the faces in the photo; and detects a face selection instruction if a click event of a marker object is triggered.
In an embodiment, the user attribute value transfer method further includes: after determining the target attribute value corresponding to each face, the first user terminal further displays the target attribute value on the photo.
FIG. 24 is a structural block diagram of an embodiment of the first user terminal 21 described above. It can be understood that other user terminals may have a hardware architecture similar to that of the first user terminal. As shown in FIG. 24, the first user terminal 21 includes a memory 212, a memory controller 214, one or more processors 216 (only one is shown), a peripheral interface 218, a network module 220, and a display 222. These components communicate with one another via one or more communication buses/signal lines.
It can be understood that the structure shown in FIG. 24 is merely illustrative; the first user terminal 21 may include more or fewer components than shown in FIG. 24, or have a configuration different from that shown in FIG. 24. The components shown in FIG. 24 can be implemented in hardware, software, or a combination thereof.
The memory 212 can be used to store software programs and modules, such as the program instructions/modules corresponding to the methods and apparatuses in the embodiments of the present invention. The processor 216 performs various functional applications and data processing — that is, implements the above methods — by running the software programs and modules stored in the memory 212.
The memory 212 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 212 may further include memory remotely located relative to the processor 216, which may be connected to the above server via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof. Access to the memory 212 by the processor 216 and other possible components may be performed under the control of the memory controller 214.
The peripheral interface 218 couples various input/output devices to the processor 216. The processor 216 runs the software and instructions stored in the memory 212 to perform various functions and process data. In some embodiments, the peripheral interface 218, the processor 216, and the memory controller 214 may be implemented in a single chip. In other examples, they may each be implemented by a separate chip.
The network module 220 is configured to receive and send network signals. The network signals may include wireless signals or wired signals. In one example, the network signal is a wired network signal, in which case the network module 220 may include components such as a processor, random access memory, a converter, and a crystal oscillator. In one embodiment, the network signal is a wireless signal (for example, a radio frequency signal). In that case, the network module 220 is essentially a radio frequency module that receives and sends electromagnetic waves, performing mutual conversion between electromagnetic waves and electrical signals so as to communicate with a communication network or other devices. The radio frequency module may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, and memory. The radio frequency module can communicate with various networks such as the Internet, intranets, and wireless networks, or communicate with other devices over a wireless network. The wireless network may include a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi) (such as the IEEE standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-Max), other protocols for email, instant messaging, and short messages, and any other suitable communication protocol, even including protocols that have not yet been developed.
The display module 222 is configured to display information entered by the user, information provided to the user, and various graphical user interfaces, which may be composed of graphics, text, icons, video, and any combination thereof. In one example, the display module 222 includes a display panel, which may be, for example, a liquid crystal display (LCD) panel, an organic light-emitting diode (OLED) display panel, or an electrophoretic display (EPD) panel. Further, a touch surface may be disposed on the display panel so as to form an integral whole with it. In other embodiments, the display module 222 may also include other types of display devices, for example a projection display device. Compared with an ordinary display panel, a projection display device further includes some components for projection, such as a lens group.
The above software programs and modules include an operating system 224 and an application program 226. The operating system 224 may include various software components and/or drivers for managing system tasks (for example, memory management, storage device control, power management, etc.), and can communicate with various hardware or software components, thereby providing a running environment for other software components. The application program 226 runs on the basis of the operating system 224 and is used to implement the various methods in the above embodiments.
In addition, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions; the computer-readable storage medium is, for example, a non-volatile memory such as an optical disc, a hard disk, or a flash memory. The computer-executable instructions are used to cause a computer or similar computing device to perform the methods of the above embodiments.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention in any form. Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solutions of the present invention, use the technical content disclosed above to make slight changes or modifications into equivalent embodiments. Any simple amendment, equivalent change, or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solutions of the present invention, still falls within the scope of the technical solutions of the present invention.

Claims (12)

  1. A user attribute value transfer method, comprising:
    acquiring a photo;
    performing face recognition processing on the photo to recognize faces;
    determining, according to an input instruction, a target face selected from the recognized faces and a corresponding target attribute value;
    generating an attribute value transfer request according to the target attribute value and the corresponding face recognition information; and
    sending the attribute value transfer request to a server, so that the server performs an attribute value transfer operation according to the attribute value transfer request.
  2. The method according to claim 1, wherein the determining, according to an input instruction, a target face selected from the recognized faces and a corresponding target attribute value comprises:
    determining, according to the input instruction, the target faces selected from the recognized faces, and acquiring an input total; and
    determining the target attribute value of each selected target face as the total divided by the number of the target faces.
  3. The method according to claim 1, wherein the determining, according to an input instruction, a target face selected from the recognized faces and a corresponding target attribute value comprises:
    displaying, in response to an input face selection instruction, an input interface for receiving user input; and
    determining the target face selected from the recognized faces corresponding to the face selection instruction, and determining the target attribute value corresponding to the target face as the value entered in the input interface.
  4. The method according to claim 3, further comprising:
    outputting the photo in a user interface, the user interface being displayed on a touch screen; and
    after detecting contact between an operation object and the touch screen, determining whether the coordinates of the contact point are within the region corresponding to a face in the photo, and if so, detecting the face selection instruction.
  5. The method according to claim 3, further comprising:
    outputting the photo in a user interface;
    generating, in the user interface, marker objects in one-to-one correspondence with the faces in the photo; and
    detecting the face selection instruction if a registered click event of a marker object is triggered.
  6. The method according to claim 1, further comprising:
    after determining the target attribute value corresponding to each face, displaying the target attribute value on the photo.
  7. A terminal, comprising a storage medium and a processor, the storage medium storing instructions that, when executed by the processor, cause the processor to perform the following steps:
    acquiring a photo;
    performing face recognition processing on the photo to recognize faces;
    determining, according to an input instruction, a target face selected from the recognized faces and a corresponding target attribute value;
    generating an attribute value transfer request according to the target attribute value and the corresponding face recognition information; and
    sending the attribute value transfer request to a server, so that the server performs an attribute value transfer operation according to the attribute value transfer request.
  8. The terminal according to claim 7, wherein the determining, according to an input instruction, a target face selected from the recognized faces and a corresponding target attribute value comprises:
    determining, according to the input instruction, the target faces selected from the recognized faces, and acquiring an input total; and
    determining the target attribute value of each selected target face as the total divided by the number of the target faces.
  9. The terminal according to claim 7, wherein the determining, according to an input instruction, a target face selected from the recognized faces and a corresponding target attribute value comprises:
    displaying, in response to an input face selection instruction, an input interface for receiving user input; and
    determining the target face selected from the recognized faces corresponding to the face selection instruction, and determining the target attribute value corresponding to the target face as the value entered in the input interface.
  10. The terminal according to claim 9, wherein the processor is further configured to perform the following steps:
    outputting the photo in a user interface, the user interface being displayed on a touch screen; and
    after detecting contact between an operation object and the touch screen, determining whether the coordinates of the contact point are within the region corresponding to a face in the photo, and if so, detecting the face selection instruction.
  11. The terminal according to claim 9, wherein the processor is further configured to perform the following steps:
    outputting the photo in a user interface;
    generating, in the user interface, marker objects in one-to-one correspondence with the faces in the photo; and
    detecting the face selection instruction if a registered click event of a marker object is triggered.
  12. The terminal according to claim 7, wherein the processor is further configured to perform the following steps:
    after determining the target attribute value corresponding to each face, displaying the target attribute value on the photo.
PCT/CN2015/089417 2014-10-22 2015-09-11 User attribute value transfer method and terminal WO2016062173A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2017503518A JP6359173B2 (ja) 2014-10-22 2015-09-11 ユーザ属性値転送方法および端末
US15/110,316 US10127529B2 (en) 2014-10-22 2015-09-11 User attribute value transfer method and terminal
US16/157,980 US10417620B2 (en) 2014-10-22 2018-10-11 User attribute value transfer method and terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410567731.4A CN104901994B (zh) 2014-10-22 2014-10-22 网络系统中用户的属性数值转移方法、装置及系统
CN201410567731.4 2014-10-22

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/110,316 A-371-Of-International US10127529B2 (en) 2014-10-22 2015-09-11 User attribute value transfer method and terminal
US16/157,980 Continuation US10417620B2 (en) 2014-10-22 2018-10-11 User attribute value transfer method and terminal

Publications (1)

Publication Number Publication Date
WO2016062173A1 true WO2016062173A1 (zh) 2016-04-28

Family

ID=54034391

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/089417 WO2016062173A1 (zh) 2014-10-22 2015-09-11 用户属性数值转移方法及终端

Country Status (4)

Country Link
US (2) US10127529B2 (zh)
JP (1) JP6359173B2 (zh)
CN (1) CN104901994B (zh)
WO (1) WO2016062173A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104901994B (zh) 2014-10-22 2018-05-25 腾讯科技(深圳)有限公司 网络系统中用户的属性数值转移方法、装置及系统
US20160184710A1 (en) * 2014-12-31 2016-06-30 Wrafl, Inc. Secure Computing for Virtual Environment and Interactive Experiences
CN108229937A (zh) * 2017-12-20 2018-06-29 阿里巴巴集团控股有限公司 基于增强现实的虚拟对象分配方法及装置
US10733676B2 (en) * 2018-05-17 2020-08-04 Coupa Software Incorporated Automatic generation of expense data using facial recognition in digitally captured photographic images
CN108992926A (zh) * 2018-06-15 2018-12-14 广州市点格网络科技有限公司 基于人脸识别游戏登录方法、装置与计算机可读存储介质
CN109493073B (zh) * 2018-10-25 2021-07-16 创新先进技术有限公司 一种基于人脸的身份识别方法、装置及电子设备
CN111353357B (zh) * 2019-01-31 2023-06-30 杭州海康威视数字技术股份有限公司 一种人脸建模系统、方法和装置
JP2020136899A (ja) * 2019-02-19 2020-08-31 ソニーセミコンダクタソリューションズ株式会社 撮像装置、電子機器、および撮像方法
CN110442290A (zh) * 2019-07-29 2019-11-12 吴新建 一种城市安全系统的数据获取方法、装置及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101520924A (zh) * 2008-12-31 2009-09-02 上海序参量科技发展有限公司 利用二维条码实现的金融交易终端、系统及其实现方法
US20120084200A1 (en) * 2010-10-01 2012-04-05 Michel Triana Systems and methods for completing a financial transaction
CN103076879A (zh) * 2012-12-28 2013-05-01 中兴通讯股份有限公司 基于人脸信息的多媒体交互方法及装置及终端
CN103824068A (zh) * 2014-03-19 2014-05-28 上海看看智能科技有限公司 人脸支付认证系统及方法
CN104901994A (zh) * 2014-10-22 2015-09-09 腾讯科技(深圳)有限公司 网络系统中用户的属性数值转移方法、装置及系统

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4552632B2 (ja) * 2004-12-03 2010-09-29 株式会社ニコン 携帯機器
JP5283884B2 (ja) * 2007-10-23 2013-09-04 楽天株式会社 ポイント管理システム
US9626722B2 (en) * 2009-10-20 2017-04-18 Trading Technologies International, Inc. Systems and methods of an interface for use in electronic trading
CN102890604B (zh) * 2011-07-21 2015-12-16 腾讯科技(深圳)有限公司 人机交互中在机器侧标识目标对象的方法及装置
US10223710B2 (en) * 2013-01-04 2019-03-05 Visa International Service Association Wearable intelligent vision device apparatuses, methods and systems
US20150012426A1 (en) * 2013-01-04 2015-01-08 Visa International Service Association Multi disparate gesture actions and transactions apparatuses, methods and systems
WO2013103912A1 (en) * 2012-01-05 2013-07-11 Visa International Service Association Transaction visual capturing apparatuses, methods and systems
US20130218757A1 (en) * 2012-02-16 2013-08-22 Dharani Ramanathan Payments using a recipient photograph
JP5791557B2 (ja) * 2012-03-29 2015-10-07 Kddi株式会社 連絡操作支援システム、連絡操作支援装置および連絡操作方法
JP5395205B2 (ja) * 2012-04-09 2014-01-22 株式会社コナミデジタルエンタテインメント ゲーム制御装置、ゲーム制御方法、プログラム、ゲームシステム
US20150046320A1 (en) * 2013-08-07 2015-02-12 Tiply, Inc. Service productivity and guest management system
CN103457943B (zh) * 2013-08-27 2016-10-26 小米科技有限责任公司 数值转移方法、终端、服务器及系统
US10037082B2 (en) * 2013-09-17 2018-07-31 Paypal, Inc. Physical interaction dependent transactions
US20150095228A1 (en) * 2013-10-01 2015-04-02 Libo Su Capturing images for financial transactions
CN103699988A (zh) * 2013-11-26 2014-04-02 小米科技有限责任公司 数值转移方法、终端、服务器及系统
CN103824180A (zh) * 2014-02-18 2014-05-28 小米科技有限责任公司 数值扣除方法、终端、服务器及系统
US10043184B2 (en) * 2014-05-30 2018-08-07 Paypal, Inc. Systems and methods for implementing transactions based on facial recognition
US9864982B2 (en) * 2014-10-31 2018-01-09 The Toronto-Dominion Bank Image recognition-based payment requests

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101520924A (zh) * 2008-12-31 2009-09-02 上海序参量科技发展有限公司 利用二维条码实现的金融交易终端、系统及其实现方法
US20120084200A1 (en) * 2010-10-01 2012-04-05 Michel Triana Systems and methods for completing a financial transaction
CN103076879A (zh) * 2012-12-28 2013-05-01 中兴通讯股份有限公司 基于人脸信息的多媒体交互方法及装置及终端
CN103824068A (zh) * 2014-03-19 2014-05-28 上海看看智能科技有限公司 人脸支付认证系统及方法
CN104901994A (zh) * 2014-10-22 2015-09-09 腾讯科技(深圳)有限公司 网络系统中用户的属性数值转移方法、装置及系统

Also Published As

Publication number Publication date
US20190043030A1 (en) 2019-02-07
JP6359173B2 (ja) 2018-07-18
US10417620B2 (en) 2019-09-17
US10127529B2 (en) 2018-11-13
US20160335611A1 (en) 2016-11-17
CN104901994A (zh) 2015-09-09
JP2017527017A (ja) 2017-09-14
CN104901994B (zh) 2018-05-25

Similar Documents

Publication Publication Date Title
WO2016062173A1 (zh) 用户属性数值转移方法及终端
US20210150506A1 (en) Peer-to-peer payment systems and methods
US20160275486A1 (en) Device, system, and method for creating virtual credit card
US11379819B2 (en) Method and apparatus for information exchange
US20160132866A1 (en) Device, system, and method for creating virtual credit card
US20180322506A1 (en) Service processing method, apparatus, and system
US11113684B2 (en) Device, system, and method for creating virtual credit card
WO2015062412A1 (en) Method, device and system for online payment
US20160275488A1 (en) Device, system, and method for creating virtual credit card
US11803859B2 (en) Method for provisioning merchant-specific payment apparatus
US20200043067A1 (en) Resource transmission methods and apparatus
WO2019178817A1 (zh) 一种产品销量提报方法、支付方法和终端设备
US9049211B1 (en) User challenge using geography of previous login
US11381660B2 (en) Selective information sharing between users of a social network
EP3062272A1 (en) Method and apparatus for accumulating membership points
EP3543938B1 (en) Authentication of a transaction card using a multimedia file
WO2015101057A1 (en) Data processing method and related device and system
WO2018210271A1 (zh) 卡片写入方法、装置、终端、服务器及存储介质
CN112288556A (zh) 用于资源转移的方法和装置、发起资源转移的方法和装置
WO2018166097A1 (zh) 一种支付方法、终端和服务器
WO2018133231A1 (zh) 智能处理应用事件的方法与装置
US20160275487A1 (en) Device, system, and method for creating virtual credit card
US20240232901A9 (en) Method for provisioning merchant-specific payment apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15851717

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15110316

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2017503518

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 07/09/2017)

122 Ep: pct application non-entry in european phase

Ref document number: 15851717

Country of ref document: EP

Kind code of ref document: A1