CN108416595A - Information processing method and device - Google Patents
- Publication number
- CN108416595A (application number CN201810259528.9A)
- Authority
- CN
- China
- Prior art keywords
- user
- facial image
- mentioned
- information
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4014—Identity check for transactions
- G06Q20/40145—Biometric identity checks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Business, Economics & Management (AREA)
- Accounting & Taxation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Social Psychology (AREA)
- Computer Security & Cryptography (AREA)
- Psychiatry (AREA)
- Finance (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Collating Specific Patterns (AREA)
Abstract
Embodiments of the present application disclose an information processing method and device. One specific implementation of the method includes: in response to receiving a modification request for account information sent by a user terminal, performing liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user; in response to determining that the user is a live user, acquiring a facial image of the user; searching a preset facial image set for a target facial image that matches the facial image; in response to finding the target facial image, determining whether the account information satisfies a preset information modification condition; and in response to determining that the account information satisfies the information modification condition, modifying the account information accordingly based on the modification request. The liveness detection performed after receiving the modification request for the account information, together with the search for the target facial image, is thereby used effectively to detect whether the user is a legitimate user and to improve the security of the account information.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to an information processing method and device.
Background technology
At present, when a user collects funds, the user's identity card and/or account password are typically verified, and after the verification succeeds, funds of a certain amount are transferred to the user's account. However, because the user collecting the funds is not verified in person, events such as fraudulent or false claims may occur.
Summary of the invention
Embodiments of the present application propose an information processing method and device.
In a first aspect, an embodiment of the present application provides an information processing method, including: in response to receiving a modification request for account information sent by a user terminal, performing liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user; in response to determining that the user is a live user, acquiring a facial image of the user; searching a preset facial image set for a target facial image that matches the facial image; in response to finding the target facial image, determining whether the account information satisfies a preset information modification condition; and in response to determining that the account information satisfies the information modification condition, modifying the account information accordingly based on the modification request.
In some embodiments, performing liveness detection on the user to whom the user terminal belongs includes: outputting a randomly generated character string to the user terminal; receiving a lip motion video sent by the user terminal, where the lip motion video is recorded by the user terminal in response to the user reading out the characters in the character string; analyzing the lip motion video to determine whether the lip motions made while the user reads out the characters in the character string are consistent with the lip motions that should be made when reading out those characters; and, if they are consistent, determining that the user is a live user.
In some embodiments, performing liveness detection on the user to whom the user terminal belongs further includes: sending an image acquisition instruction to the user terminal; receiving a first image, sent by the user terminal, showing the user's face; and determining, based on the first image, whether the user is a live user.
In some embodiments, acquiring the facial image of the user includes: extracting a face region from the first image, and generating the facial image from the extracted face region.
In some embodiments, the facial images in the facial image set are associated in advance with region information; and searching the preset facial image set for a target facial image that matches the facial image includes: acquiring the user's address information; selecting, based on the address information, facial images from the facial image set to form a facial image group, where the region indicated by the region information associated with each facial image in the group is the same region as the one containing the location indicated by the address information; and searching the facial image group for the target facial image.
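As a concrete sketch of the region-based pre-filtering described above, the candidate group can be built by comparing the region associated with each stored facial image against the region derived from the user's address. All names and data structures here are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of region-based pre-filtering: each stored facial
# image is paired in advance with region information, and only images
# from the same region as the user's address are kept for comparison.

def region_of_address(address: str) -> str:
    """Toy region lookup: the region is simply taken to be the first
    comma-separated component of the address (an assumption)."""
    return address.split(",")[0].strip()

def select_image_group(face_image_set, address: str):
    """Return the subgroup of facial images whose associated region
    matches the region of the user's address."""
    region = region_of_address(address)
    return [img for img in face_image_set if img["region"] == region]

face_image_set = [
    {"id": "f1", "region": "Haidian"},
    {"id": "f2", "region": "Chaoyang"},
    {"id": "f3", "region": "Haidian"},
]

group = select_image_group(face_image_set, "Haidian, Beijing")
print([img["id"] for img in group])  # ['f1', 'f3']
```

Pre-filtering by region shrinks the search space before the more expensive face comparison, which is presumably the motivation for associating region information with the stored images.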
In some embodiments, searching the facial image group for the target facial image includes: comparing, using a pre-trained face recognition model, the user's facial image with the facial images in the facial image group to obtain comparison results, where each comparison result includes the probability that the face shown in the user's facial image and the face shown in a facial image in the group belong to the same person; determining whether the comparison results contain a target probability that is not lower than a probability threshold; and, if the target probability exists, determining the facial image in the group corresponding to the target probability as the target facial image.
In a second aspect, an embodiment of the present application provides an information processing device, including: a detection unit configured to, in response to receiving a modification request for account information sent by a user terminal, perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user; an acquiring unit configured to acquire a facial image of the user in response to determining that the user is a live user; a searching unit configured to search a preset facial image set for a target facial image that matches the facial image; a determining unit configured to determine, in response to finding the target facial image, whether the account information satisfies a preset information modification condition; and a modifying unit configured to modify the account information accordingly, based on the modification request, in response to determining that the account information satisfies the information modification condition.
In some embodiments, the detection unit is further configured to: output a randomly generated character string to the user terminal; receive a lip motion video sent by the user terminal, where the lip motion video is recorded by the user terminal in response to the user reading out the characters in the character string; and analyze the lip motion video to determine whether the lip motions made while the user reads out the characters in the character string are consistent with the lip motions that should be made when reading out those characters.
In some embodiments, the detection unit is further configured to: send an image acquisition instruction to the user terminal; receive a first image, sent by the user terminal, showing the user's face; and determine, based on the first image, whether the user is a live user.
In some embodiments, the acquiring unit is further configured to: extract a face region from the first image, and generate the facial image from the extracted face region.
In some embodiments, the facial images in the facial image set are associated in advance with region information; and the searching unit includes: an acquiring subunit configured to acquire the user's address information; a selecting subunit configured to select, based on the address information, facial images from the facial image set to form a facial image group, where the region indicated by the region information associated with each facial image in the group is the same region as the one containing the location indicated by the address information; and a searching subunit configured to search the facial image group for the target facial image.
In some embodiments, the searching subunit is further configured to: compare, using a pre-trained face recognition model, the user's facial image with the facial images in the facial image group to obtain comparison results, where each comparison result includes the probability that the face shown in the user's facial image and the face shown in a facial image in the group belong to the same person; determine whether the comparison results contain a target probability that is not lower than a probability threshold; and, if the target probability exists, determine the facial image in the group corresponding to the target probability as the target facial image.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium storing a computer program that, when executed by a processor, implements the method described in any implementation of the first aspect.
According to the information processing method and device provided by the embodiments of the present application, after a modification request for account information sent by a user terminal is received, liveness detection is performed on the user to whom the user terminal belongs, so that it can be determined whether the user is a live user, preventing an illegitimate user from cheating with a photo, a video, or the like. After the user is determined to be a live user, the user's facial image is acquired, so that a target facial image matching that facial image can be searched for in a preset facial image set. After the target facial image is found, it is determined whether the account information satisfies a preset information modification condition, so that when the account information satisfies the condition, the account information is modified accordingly based on the modification request. The liveness detection performed after receiving the modification request, together with the search for the target facial image, is thereby used effectively to detect whether the user is a legitimate user and to improve the security of the account information.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flowchart of one embodiment of the information processing method according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the information processing method according to the present application;
Fig. 4 is a flowchart of another embodiment of the information processing method according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the information processing device according to the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted for implementing an electronic device of the embodiments of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described here are used only to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments can be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the information processing method or information processing device of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like. Various applications, such as image acquisition applications, wealth management applications, and fund management applications, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices, including but not limited to smartphones, tablet computers, laptop portable computers, desktop computers, and the like. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services. For example, the server 105 may receive a modification request for account information from the terminal devices 101, 102, 103, and process the modification request accordingly.
It should be noted that the information processing method provided by the embodiments of the present application is generally executed by the server 105. Accordingly, the information processing device is generally disposed in the server 105.
In addition, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the information processing method according to the present application is shown. The flow 200 of the information processing method includes the following steps:
Step 201: in response to receiving a modification request for account information sent by a user terminal, perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user.
In this embodiment, the execution body of the information processing method (for example, the server 105 shown in Fig. 1) may receive the modification request for the account information from a connected user terminal (for example, the terminal devices 101, 102, 103 shown in Fig. 1). After receiving the modification request, the execution body may perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user. In this way, an illegitimate user can be effectively prevented from cheating with a photo, a video, or the like.
The account information may include, for example, an account number, a user name, an account balance, and at least one of the following: gender, age, address information, and income/expenditure records. The modification request may be, for example, a request to increase or decrease the account balance.
It should be noted that the execution body may perform the following liveness detection steps to determine whether the user is a live user:

First, the execution body may output a randomly generated character string to the user terminal, so that the user terminal prompts the user to read out the characters in the character string. While the user reads out the characters, the user terminal may record a video of the user using a connected image acquisition device to obtain a lip motion video, and may send the lip motion video to the execution body. The characters in the character string may be Chinese characters, letters, digits, and the like.

Then, the execution body may receive the lip motion video and analyze it to determine whether the lip motions made while the user reads out the characters in the character string are consistent with the lip motions that should be made when reading out those characters. If they are consistent, the execution body may determine that the user is a live user. Here, the execution body may, for example, locate the mouth region in the frames of the lip motion video, and perform lip detection on the mouth region to determine whether the user's lip motions are consistent with the lip motions that should be made when reading out the characters in the character string.
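The consistency check described above can be sketched as follows. A real system would run a lip-reading model over the mouth region of each frame; in this toy version (all names and the shape labels are illustrative assumptions), each character is mapped to a coarse lip-shape label, and the sequence recognized from the video is compared against the sequence expected for the prompted string:

```python
# Toy sketch of the lip-motion consistency check: the sequence of lip
# shapes recognized from the video must match the sequence of shapes
# expected for the randomly generated character string.

EXPECTED_LIP_SHAPE = {  # hypothetical coarse lip-shape label per character
    "1": "open", "2": "round", "3": "spread", "4": "round", "5": "open",
}

def expected_sequence(char_string: str) -> list:
    """Lip shapes that should be made when reading out the string."""
    return [EXPECTED_LIP_SHAPE[c] for c in char_string]

def is_live_user(char_string: str, recognized_shapes: list) -> bool:
    """The user passes liveness detection only if the lip shapes
    recognized from the video match the expected sequence exactly."""
    return recognized_shapes == expected_sequence(char_string)

# recognized_shapes would come from analyzing the lip motion video
print(is_live_user("123", ["open", "round", "spread"]))   # True
print(is_live_user("123", ["round", "round", "spread"]))  # False
```

Because the string is generated at random per request, a pre-recorded video of the user is very unlikely to show the matching lip motions, which is what makes this a liveness check rather than a simple face check.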
In some optional implementations of this embodiment, the execution body may also send an image acquisition instruction to the user terminal, so that the user terminal acquires an image of the user and sends a collected first image showing the user's face to the execution body. The execution body may determine, based on the first image, whether the user is a live user.

It should be noted that the user terminal may acquire the image using a connected image acquisition device. In addition, before photographing the user with the connected image acquisition device, the user terminal may display preset information (for example, Chinese characters, letters, digits, geometric figures, or a combination thereof) in a specified display area, and photograph the user while the preset information is displayed. The execution body may then determine whether the user is a live user by detecting whether the first image shows the preset information. When an illegitimate user impersonates the user with a photo or video shown on another terminal device, the preset information displayed by the user terminal is generally reflected on the display screen of that terminal device; therefore, a first image obtained by photographing the photo or video will usually contain a reflective region showing the preset information.
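The reflection-based check described above reduces, at its core, to asking whether the displayed preset information is visible in the captured image. The sketch below makes that decision explicit; the recognition step (which would in practice involve locating and reading a reflective region) is replaced by a stand-in list of strings assumed to have been found in the image:

```python
# Toy sketch of the screen-reflection spoof check: the terminal displays
# preset information while photographing; if the first image was taken
# of a screen replaying a photo or video, that information reflects off
# the screen and shows up in the image. The recognition of on-image text
# is stubbed out here (an assumption for illustration).

def spoof_suspected(preset_info: str, strings_found_in_image: list) -> bool:
    """Return True if the preset information the terminal displayed is
    visible (i.e. reflected) in the first image, suggesting a replay."""
    return preset_info in strings_found_in_image

print(spoof_suspected("A7F3", ["A7F3", "EXIT"]))  # True  -> likely replay attack
print(spoof_suspected("A7F3", []))                # False -> no reflection found
```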
Optionally, the execution body may determine whether the face in the first image comes from a live user or from a photo, a video, or the like, by detecting whether a bright pupil effect exists in the eye region of the first image.
It should be noted that the execution body may also use other liveness detection methods; this embodiment does not place any limitation on the liveness detection method.
Step 202: in response to determining that the user is a live user, acquire a facial image of the user.
In this embodiment, after determining that the user to whom the user terminal belongs is a live user, the execution body may acquire a facial image of the user. Here, the execution body may send a facial image acquisition instruction to the user terminal, so that the user terminal photographs the user's face using a connected image acquisition device, obtains the facial image of the user, and sends it to the execution body.
Optionally, if the execution body obtained, during liveness detection, a first image showing the face of the user to whom the user terminal belongs, the execution body may use the first image directly as the facial image, or extract a face region from the first image and generate the facial image from that face region.
Step 203: search a preset facial image set for a target facial image that matches the user's facial image.
In this embodiment, the execution body may search the preset facial image set for a target facial image that matches the facial image of the user to whom the user terminal belongs. As an example, the execution body may locally pre-store a trained convolutional neural network for image feature extraction, as well as image feature information corresponding to the facial images in the facial image set. The execution body may first input the user's facial image into the convolutional neural network to obtain image feature information corresponding to that facial image, then match the obtained image feature information against the image feature information corresponding to the facial images in the facial image set, to search for target image feature information that matches it. If the target image feature information is found, the execution body may determine the facial image corresponding to it as the target facial image.
Here, image feature information may be information characterizing the features of an image, and the features of an image may be its various fundamental elements (for example, color, lines, texture, and the like). In practice, a convolutional neural network (CNN) is a feedforward neural network whose artificial neurons can respond to surrounding units within part of the coverage area, and it performs outstandingly in image processing. Therefore, a convolutional neural network can be used for image feature extraction.
It should be noted that the convolutional neural network used for image feature extraction may be obtained by supervised training of an existing deep convolutional neural network (for example, DenseBox, VGGNet, ResNet, SegNet, and the like) using machine learning methods and training samples.
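The feature-matching search described above can be sketched as follows. The trained CNN is replaced here by precomputed feature vectors, and matching uses cosine similarity with a threshold; the patent does not specify the similarity measure, so that choice, along with all names and values, is an assumption for illustration:

```python
# Minimal sketch of matching a query feature vector (as the CNN would
# produce for the user's facial image) against pre-stored feature
# vectors of the facial image set, using cosine similarity.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_target(query_feature, stored_features, threshold=0.9):
    """Return the id of the stored facial image whose feature vector
    best matches the query, or None if nothing clears the threshold."""
    best_id, best_sim = None, threshold
    for image_id, feature in stored_features.items():
        sim = cosine_similarity(query_feature, feature)
        if sim >= best_sim:
            best_id, best_sim = image_id, sim
    return best_id

stored = {
    "user_a": [0.9, 0.1, 0.2],
    "user_b": [0.1, 0.9, 0.3],
}
print(find_target([0.88, 0.12, 0.21], stored))  # user_a
print(find_target([0.5, 0.5, 0.5], stored))     # None
```

Precomputing the stored feature vectors means the expensive CNN forward pass runs once per enrolled image rather than once per search, which is the usual reason for storing features alongside the images.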
In some optional implementations of this embodiment, the execution body may locally pre-store a trained face recognition model. The execution body may compare, using the face recognition model, the facial image of the user to whom the user terminal belongs with the facial images in the facial image set, to obtain comparison results. Each comparison result may include the probability that the face shown in the user's facial image and the face shown in a facial image in the facial image set belong to the same person. The face recognition model may be used to characterize the correspondence between facial images and comparison results. The execution body may search the obtained comparison results for a target probability that is not lower than a probability threshold; if found, the execution body may determine the facial image in the facial image set corresponding to the target probability as the target facial image.
It should be noted that the face recognition model may be a correspondence table formulated in advance by those skilled in the art based on extensive statistical calculation, characterizing the correspondence between facial images and comparison results; it may also be obtained by training a model usable for classification, such as a Naive Bayesian Model (NBM), a Support Vector Machine (SVM), XGBoost (eXtreme Gradient Boosting), or a convolutional neural network (CNN).
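The thresholding step described above is small but worth making concrete. The face recognition model is assumed here to have already produced, for each stored facial image, a same-person probability; the function names and the threshold value are illustrative:

```python
# Sketch of scanning comparison results for a target probability that is
# not lower than the probability threshold, and returning the facial
# image associated with it (or None if no probability qualifies).

def pick_target_face(comparison_results, probability_threshold=0.95):
    """comparison_results: list of (image_id, same_person_probability)
    pairs produced by the face recognition model."""
    for image_id, probability in comparison_results:
        if probability >= probability_threshold:
            return image_id
    return None

results = [("face_017", 0.41), ("face_102", 0.97), ("face_233", 0.12)]
print(pick_target_face(results))                                  # face_102
print(pick_target_face(results, probability_threshold=0.99))      # None
```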
Step 204: in response to finding the target facial image, determine whether the account information satisfies a preset information modification condition.
In this embodiment, in response to finding the target facial image, the execution body may determine that the user to whom the user terminal belongs is a legitimate user, and may then determine whether the account information satisfies the preset information modification condition. Here, the account information may correspond to an account category, such as old-age pension, retirement pay, housing provident fund, social security, or personal savings. Each account category may correspond to an information modification condition.
As an example, suppose the account class corresponding to the account information is old-age pension or retirement pay, the modification request is a request to increase the account balance, and the user to whom the user terminal belongs may submit only one balance-increase request per month. The information modification condition may then include, for example, a first condition and a second condition. The first condition may be that the gender is a first gender and the age is not less than a first value, or that the gender is a second gender and the age is not less than a second value. The second condition may be that the revenue and expenditure record does not include a balance-increase record generated within a first designated time period (for example, the current month).
It should be noted that the first gender and the second gender are each one of the following, and differ from each other: female, male. In addition, the first value and the second value may be statutory retirement ages. As an example, if the first gender is male, the first value may be the statutory retirement age for men (for example, 60); if the second gender is female, the second value may be the statutory retirement age for women (for example, 55).
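The two conditions above can be sketched as simple predicates. This is a minimal illustration, assuming the ages 60/55 from the example and an invented record format; it is not the patent's actual implementation.

```python
# First condition: the user has reached the statutory retirement age for
# their gender (60 for men, 55 for women, following the example above).
RETIREMENT_AGE = {"male": 60, "female": 55}

def meets_first_condition(gender, age):
    threshold = RETIREMENT_AGE.get(gender)
    return threshold is not None and age >= threshold

# Second condition: no balance-increase record already exists within the
# first designated time period (here: the current month).
def meets_second_condition(revenue_expenditure_records, current_month):
    return not any(r["type"] == "increase" and r["month"] == current_month
                   for r in revenue_expenditure_records)

records = [{"type": "increase", "month": "2018-02"}]
print(meets_first_condition("male", 61))           # True
print(meets_second_condition(records, "2018-03"))  # True: no increase this month
```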
As another example, suppose the account class corresponding to the account information is individual savings, the modification request is a request to reduce the account balance, and the modification request includes an amount. The information modification condition may then include, for example: the value obtained by subtracting the amount included in the modification request from the account balance is not less than 0.
It should be understood that the information modification condition may be adjusted according to actual needs; the present embodiment does not impose any restriction on the content of the information modification condition.
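For the individual-savings case, the condition reduces to a single arithmetic check. A minimal sketch, with the function name invented for illustration:

```python
# The balance minus the amount carried in the modification request must
# not drop below 0 for the reduction to be allowed.
def can_reduce_balance(balance, amount):
    return balance - amount >= 0

print(can_reduce_balance(100.0, 30.0))   # True
print(can_reduce_balance(100.0, 130.0))  # False
```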
Step 205: in response to determining that the account information meets the information modification condition, modify the account information accordingly based on the modification request.
In the present embodiment, in response to determining that the account information meets the information modification condition, the executive agent may modify the account information accordingly based on the modification request, for example by increasing the account balance in the account information by a fixed amount or a designated amount (such as the amount included in the modification request), or by reducing it by a fixed amount or a designated amount.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the information processing method according to the present embodiment. In the application scenario of Fig. 3, a person qualified to receive an old-age pension may draw a pension of a fixed amount every month. When user A wants to apply to receive the pension, as shown by reference numeral 301, user A may use the user terminal he or she owns to send the server a modification request for account information. Here, the account information may be the account information of the bank account with which user A receives the pension; it may include gender, age, account balance, and revenue and expenditure record; and the modification request may be a request to increase the account balance.
Then, as shown by reference numeral 302, after receiving the modification request, the server may perform liveness detection on user A to determine whether user A is a live user.
Afterwards, as shown by reference numeral 303, in response to determining that user A is a live user, the server may send a facial image acquisition instruction to the user terminal.
Then, as shown by reference numeral 304, after receiving the facial image acquisition instruction, the user terminal may use the camera connected to it to capture a facial image of user A and send the facial image to the server.
Next, as shown by reference numeral 305, the server may compare the facial image of user A with the facial images in a preset facial image set, so as to search the facial image set for a target facial image matching the facial image of user A.
Then, as shown by reference numeral 306, in response to finding the target facial image, the server may compare the account information with a preset information modification condition to determine whether the account information meets the information modification condition. The information modification condition may include, for example, a first condition and a second condition. The first condition may be that the gender is a first gender and the age is not less than a first value, or that the gender is a second gender and the age is not less than a second value. The second condition may be that the revenue and expenditure record does not include a balance-increase record generated within the current month.
Finally, as shown by reference numeral 307, in response to determining that the account information meets the information modification condition, the server may increase the account balance by the corresponding fixed amount.
The method provided by the above embodiment of the application effectively utilizes the liveness detection performed after the modification request is received and the lookup of the target facial image, thereby detecting whether the user is a valid user and improving the security of the account information.
With further reference to Fig. 4, it illustrates a flow 400 of another embodiment of the information processing method. The flow 400 of the information processing method includes the following steps:
Step 401: in response to receiving a modification request for account information sent by a user terminal, perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user.
In the present embodiment, the executive agent of the information processing method (such as the server 105 shown in Fig. 1) may receive the modification request for the account information from a connected user terminal (such as the terminal devices 101, 102, and 103 shown in Fig. 1). Moreover, after receiving the modification request, the executive agent may perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user. In this way, an illegal user can be effectively prevented from cheating with a photo, a video, or the like.
It should be noted that, for an explanation of step 401, reference may be made to the related description of step 201 in the embodiment shown in Fig. 2, and details are not repeated here.
Step 402: in response to determining that the user is a live user, acquire a facial image of the user.
In the present embodiment, after determining that the user to whom the user terminal belongs is a live user, the executive agent may acquire a facial image of that user. Here, the executive agent may send a facial image acquisition instruction to the user terminal, so that the user terminal photographs the user's face with the image acquisition device connected to it, obtains the facial image of the user, and sends the facial image to the executive agent. It should be noted that the present embodiment does not impose any restriction on the method of acquiring the facial image of the user.
Step 403: acquire address information of the user.
In the present embodiment, the executive agent may also acquire address information of the user. The address indicated by the address information may be, for example, the user's home address or company address. Moreover, the address information may be included in the modification request, or may be included in the account information.
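Since the address information may be carried either in the modification request or in the account information, the executive agent may look in both. A minimal sketch with illustrative field names:

```python
# Resolve the user's address information: prefer the modification request,
# fall back to the account information. Field names are invented.
def get_address_info(modification_request, account_info):
    if "address" in modification_request:
        return modification_request["address"]
    return account_info.get("address")

req = {"type": "increase_balance"}
acct = {"balance": 500, "address": "No. 1, Example Road, Haidian District"}
print(get_address_info(req, acct))  # falls back to the account information
```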
Step 404: based on the address information, select facial images from a preset facial image set to form a facial image group.
In the present embodiment, the executive agent may, based on the acquired address information, select facial images from the preset facial image set to form a facial image group. The facial images in the facial image set may be associated with region information in advance. A region indicated by region information may be, for example, a district under city administration, a township, or a town. In addition, the region indicated by the region information associated with the facial images in the facial image group may be the same region as the region where the position indicated by the address information is located.
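Step 404 amounts to filtering the facial image set down to the images whose associated region matches the region of the user's address. The sketch below assumes a simple dict representation and illustrative region strings; the `region_of` helper is a hypothetical stand-in for resolving a region from an address:

```python
# Filter the preset facial image set into a facial image group whose
# associated region matches the region of the user's address.
face_image_set = [
    {"image_id": "f1", "region": "Haidian District"},
    {"image_id": "f2", "region": "Chaoyang District"},
    {"image_id": "f3", "region": "Haidian District"},
]

def region_of(address_info):
    # Assumed helper: in practice the region would be resolved from the
    # user's home or company address; here the address carries it directly.
    return address_info["region"]

def build_face_image_group(address_info, image_set):
    target = region_of(address_info)
    return [img for img in image_set if img["region"] == target]

group = build_face_image_group({"region": "Haidian District"}, face_image_set)
print([img["image_id"] for img in group])  # ['f1', 'f3']
```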
Step 405: search the facial image group for a target facial image matching the facial image of the user.
In the present embodiment, after forming the facial image group, the executive agent may search the facial image group for a target facial image matching the facial image of the user. This avoids the executive agent searching for the target facial image directly in the entire facial image set, and can effectively improve the lookup efficiency of the executive agent.
As an example, the executive agent may first use a face recognition model trained in advance to compare the facial image of the user with the facial images in the facial image group to obtain comparison results. A comparison result may include the probability that the face shown in the facial image of the user and the face shown in a facial image in the facial image group derive from the same person. The face recognition model may be used to characterize the correspondence between facial images and comparison results. The executive agent may then compare the probabilities in the comparison results with a probability threshold (for example, 0.96) to determine whether the comparison results contain a target probability not less than the probability threshold. If so, the executive agent may determine the facial image in the facial image group that corresponds to the target probability as the target facial image.
It should be noted that the face recognition model may be a mapping table formulated in advance by those skilled in the art based on extensive statistical calculation for characterizing the correspondence between facial images and comparison results; it may also be obtained by training a model usable for classification, such as a Naive Bayesian model, a support vector machine, XGBoost, or a convolutional neural network.
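The threshold-based lookup in step 405 can be sketched as below. The comparison probabilities are assumed to have already been produced by the face recognition model; the 0.96 threshold follows the example in the text.

```python
# Return the first image in the facial image group whose same-person
# probability is not less than the probability threshold, or None if no
# target facial image is found.
def find_target_face(comparison_results, threshold=0.96):
    # comparison_results: list of (image_id, same-person probability) pairs.
    for image_id, prob in comparison_results:
        if prob >= threshold:  # "not less than the probability threshold"
            return image_id
    return None

results = [("f1", 0.42), ("f2", 0.97), ("f3", 0.10)]
print(find_target_face(results))        # f2
print(find_target_face(results, 0.99))  # None
```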
Step 406: in response to finding the target facial image, determine whether the account information meets a preset information modification condition.
In the present embodiment, in response to finding the target facial image, the executive agent may determine whether the account information meets the preset information modification condition. Here, for an explanation of step 406, reference may be made to the related description of step 204 in the embodiment shown in Fig. 2, and details are not repeated here.
Step 407: in response to determining that the account information meets the information modification condition, modify the account information accordingly based on the modification request.
In the present embodiment, in response to determining that the account information meets the information modification condition, the executive agent may modify the account information accordingly based on the modification request, for example by increasing the account balance in the account information by a fixed amount or a designated amount (such as the amount included in the modification request), or by reducing it by a fixed amount or a designated amount.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the information processing method in the present embodiment highlights the step of forming the facial image group and the step of searching the facial image group for the target facial image. As a result, while detecting whether the user is a valid user and improving the security of the account information, the solution described in the present embodiment can also improve the lookup efficiency for the target facial image and shorten the information processing time.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the application provides an embodiment of an information processing apparatus. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 5, the information processing apparatus 500 of the present embodiment includes: a detection unit 501, an acquisition unit 502, a lookup unit 503, a determination unit 504, and a modification unit 505. The detection unit 501 is configured to, in response to receiving a modification request for account information sent by a user terminal, perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user; the acquisition unit 502 is configured to, in response to determining that the user is a live user, acquire a facial image of the user; the lookup unit 503 is configured to search a preset facial image set for a target facial image matching the facial image of the user; the determination unit 504 is configured to, in response to finding the target facial image, determine whether the account information meets a preset information modification condition; and the modification unit 505 is configured to, in response to determining that the account information meets the information modification condition, modify the account information accordingly based on the modification request.
In the present embodiment, for the specific processing of the detection unit 501, the acquisition unit 502, the lookup unit 503, the determination unit 504, and the modification unit 505 of the information processing apparatus 500, and the technical effects brought about thereby, reference may be made to the related descriptions of steps 201, 202, 203, 204, and 205 in the embodiment corresponding to Fig. 2, and details are not repeated here.
In some optional implementations of the present embodiment, the detection unit 501 may be further configured to: output a randomly generated character string to the user terminal; receive a lip motion video sent by the user terminal, where the lip motion video may be a video recorded by the user terminal in response to the user reading the characters in the character string; and analyze the lip motion video to determine whether the lip motions made when the user reads the characters in the character string are consistent with the lip motions that should be made when reading the characters in the character string. If they are consistent, it is determined that the user is a live user.
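The challenge-and-verify structure of this lip-motion liveness check can be sketched as follows. The recognized lip-read sequence is assumed to come from a video-analysis component not shown here; the challenge format (a short digit string) is illustrative.

```python
import random
import string

# Issue a randomly generated character string as the liveness challenge.
def generate_challenge(length=4):
    return "".join(random.choice(string.digits) for _ in range(length))

# The user is judged a live user only if the lip movements recognized from
# the returned video are consistent with the challenge characters.
def is_live_user(challenge, recognized_sequence):
    return recognized_sequence == list(challenge)

print(is_live_user("1234", ["1", "2", "3", "4"]))  # True: movements consistent
print(is_live_user("1234", ["1", "2", "3"]))       # False: sequence incomplete
```

Because the string is generated at random for each request, a replayed photo or pre-recorded video of the user cannot match the challenge, which is the point of this check.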
In some optional implementations of the present embodiment, the detection unit 501 may also be further configured to: send an image acquisition instruction to the user terminal; receive a first image, sent by the user terminal, showing the face of the user; and determine, based on the first image, whether the user is a live user.
In some optional implementations of the present embodiment, the acquisition unit 502 may be further configured to: extract a human face region from the first image, and generate the facial image from the extracted human face region.
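Extracting the human face region amounts to cropping a detected bounding box out of the first image. In the sketch below, `detect_face` is a hypothetical stand-in for a real face detector, and the image is represented as a nested list of pixel values:

```python
# Crop the detected face bounding box out of the first image to generate
# the facial image.
def detect_face(image):
    # Assumed detector output: bounding box as (top, left, height, width).
    return (1, 1, 2, 2)

def extract_face_region(image):
    top, left, h, w = detect_face(image)
    return [row[left:left + w] for row in image[top:top + h]]

image = [[0, 0, 0, 0],
         [0, 5, 6, 0],
         [0, 7, 8, 0],
         [0, 0, 0, 0]]
print(extract_face_region(image))  # [[5, 6], [7, 8]]
```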
In some optional implementations of the present embodiment, the facial images in the facial image set may be associated with region information in advance; and the lookup unit 503 may include: an acquisition subunit (not shown) configured to acquire the address information of the user; a selection subunit (not shown) configured to select facial images from the facial image set to form a facial image group, where the region indicated by the region information associated with the facial images in the facial image group may be the same region as the region where the position indicated by the address information is located; and a lookup subunit (not shown) configured to search the facial image group for the target facial image.
In some optional implementations of the present embodiment, the lookup subunit may be further configured to: use a face recognition model trained in advance to compare the facial image of the user with the facial images in the facial image group to obtain comparison results, where a comparison result may include the probability that the face shown in the facial image of the user and the face shown in a facial image in the facial image group derive from the same person; determine whether the comparison results contain a target probability not less than a probability threshold; and, if the target probability exists, determine the facial image in the facial image group that corresponds to the target probability as the target facial image.
The apparatus provided by the above embodiment of the application effectively utilizes the liveness detection performed after the modification request is received and the lookup of the target facial image, thereby detecting whether the user is a valid user and improving the security of the account information.
Referring now to Fig. 6, it illustrates a structural schematic diagram of a computer system 600 suitable for implementing the electronic device of the embodiments of the present application. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded into a random access memory (RAM) 603 from a storage portion 608. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker, and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the system of the present application are executed.
It should be noted that the computer-readable medium shown in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more conductors, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, the computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. The propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and such a medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained in the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, electric wire, optical cable, RF, or any appropriate combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may also occur in an order different from that marked in the drawings. For example, two successively represented boxes may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each box in a block diagram or flowchart, and a combination of boxes in a block diagram or flowchart, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be arranged in a processor; for example, a processor may be described as comprising a detection unit, an acquisition unit, a lookup unit, a determination unit, and a modification unit. The names of these units do not in some cases constitute limitations on the units themselves; for example, the detection unit may also be described as "a unit that performs liveness detection on the user to whom the user terminal belongs".
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to receiving a modification request for account information sent by a user terminal, perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user; in response to determining that the user is a live user, acquire a facial image of the user; search a preset facial image set for a target facial image matching the facial image of the user; in response to finding the target facial image, determine whether the account information meets a preset information modification condition; and, in response to determining that the account information meets the information modification condition, modify the account information accordingly based on the modification request.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to the technical solutions formed by the specific combination of the above technical features, and, without departing from the above inventive concept, should also cover other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (14)
1. An information processing method, comprising:
in response to receiving a modification request for account information sent by a user terminal, performing liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user;
in response to determining that the user is a live user, acquiring a facial image of the user;
searching a preset facial image set for a target facial image matching the facial image;
in response to finding the target facial image, determining whether the account information meets a preset information modification condition;
in response to determining that the account information meets the information modification condition, modifying the account information accordingly based on the modification request.
2. The method according to claim 1, wherein the performing liveness detection on the user to whom the user terminal belongs comprises:
outputting a randomly generated character string to the user terminal;
receiving a lip motion video sent by the user terminal, wherein the lip motion video is a video recorded by the user terminal in response to the user reading the characters in the character string;
analyzing the lip motion video to determine whether the lip motions made when the user reads the characters in the character string are consistent with the lip motions that should be made when reading the characters in the character string;
if consistent, determining that the user is a live user.
3. The method according to claim 1, wherein the performing liveness detection on the user to whom the user terminal belongs further comprises:
sending an image acquisition instruction to the user terminal;
receiving a first image, sent by the user terminal, showing the face of the user;
determining, based on the first image, whether the user is a live user.
4. The method according to claim 3, wherein the acquiring a facial image of the user comprises:
extracting a human face region from the first image, and generating the facial image from the extracted human face region.
5. The method according to claim 1, wherein the facial images in the facial image set are associated with region information in advance; and
the searching a preset facial image set for a target facial image matching the facial image comprises:
acquiring address information of the user;
based on the address information, selecting facial images from the facial image set to form a facial image group, wherein the region indicated by the region information associated with the facial images in the facial image group is the same region as the region where the position indicated by the address information is located;
searching the facial image group for the target facial image.
6. The method according to claim 5, wherein the searching the facial image group for the target facial image comprises:
using a face recognition model trained in advance to compare the facial image of the user with the facial images in the facial image group to obtain comparison results, wherein a comparison result comprises the probability that the face shown in the facial image of the user and the face shown in a facial image in the facial image group derive from the same person;
determining whether the comparison results contain a target probability not less than a probability threshold;
if the target probability exists, determining the facial image in the facial image group that corresponds to the target probability as the target facial image.
7. An information processing apparatus, comprising:
a detection unit configured to, in response to receiving a modification request for account information sent by a user terminal, perform liveness detection on the user to whom the user terminal belongs, to determine whether the user is a live user;
an acquisition unit configured to, in response to determining that the user is a live user, acquire a facial image of the user;
a lookup unit configured to search a preset facial image set for a target facial image matching the facial image;
a determination unit configured to, in response to finding the target facial image, determine whether the account information meets a preset information modification condition;
a modification unit configured to, in response to determining that the account information meets the information modification condition, modify the account information accordingly based on the modification request.
8. The apparatus according to claim 7, wherein the detection unit is further configured to:
output a randomly generated character string to the user terminal;
receive a lip motion video sent by the user terminal, wherein the lip motion video is a video recorded by the user terminal in response to the user reading the characters in the character string;
analyze the lip motion video to determine whether the lip motions made when the user reads the characters in the character string are consistent with the lip motions that should be made when reading the characters in the character string;
if consistent, determine that the user is a live user.
9. The apparatus according to claim 7, wherein the detection unit is further configured to:
send an image acquisition instruction to the user terminal;
receive a first image, sent by the user terminal, showing the face of the user;
determine, based on the first image, whether the user is a live user.
10. The apparatus according to claim 9, wherein the acquisition unit is further configured to:
extract a human face region from the first image, and generate the facial image from the extracted human face region.
11. The device according to claim 7, wherein each facial image in the facial image set is associated in advance with region information; and
the searching unit comprises:
an obtaining subunit configured to obtain address information of the user;
a selecting subunit configured to, based on the address information, select facial images from the facial image set to form a facial image group, wherein the region indicated by the region information associated with each facial image in the facial image group is the same region as the one in which the location indicated by the address information lies; and
a searching subunit configured to search the facial image group for the target facial image.
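Claim 11 narrows the search space by region before matching. The sketch below illustrates that filtering step only; `FaceRecord` and `region_of` are hypothetical names, and the address-to-region mapping is a trivial stand-in for a real geocoding lookup.

```python
from dataclasses import dataclass

@dataclass
class FaceRecord:
    image_id: str
    region: str  # region information associated with the facial image in advance

def region_of(address: str) -> str:
    """Hypothetical geocoding stand-in: treat the first token of the
    address string as the region name."""
    return address.split()[0]

def select_face_image_group(face_set, address):
    """Selecting subunit: form the facial image group from records whose
    associated region is the same region the user's address lies in."""
    target_region = region_of(address)
    return [rec for rec in face_set if rec.region == target_region]
```

Restricting the comparison to one region's images keeps the subsequent model-based matching of claim 12 proportional to the regional subset rather than the full set.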
12. The device according to claim 11, wherein the searching subunit is further configured to:
compare the facial image of the user with the facial images in the facial image group using a pre-trained face recognition model to obtain comparison results, wherein each comparison result includes a probability that the face shown in the facial image of the user and the face shown in a facial image in the facial image group originate from the same person;
determine whether a target probability not less than a probability threshold exists among the comparison results; and
if the target probability exists, determine the facial image in the facial image group corresponding to the target probability as the target facial image.
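The thresholded matching of claim 12 can be sketched as follows. This is not the claimed pre-trained face recognition model: cosine similarity over hypothetical face embeddings merely stands in for the model's same-person probability, and the 0.8 threshold is an arbitrary illustrative value.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, standing in for the model's
    same-person probability."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_target_face(user_embedding, group_embeddings, threshold=0.8):
    """Compare the user's facial image against every image in the group;
    if the best comparison result is not less than the threshold, return
    the corresponding image id as the target facial image, else None."""
    results = {image_id: cosine_similarity(user_embedding, emb)
               for image_id, emb in group_embeddings.items()}
    best_id, best_score = max(results.items(), key=lambda kv: kv[1])
    return best_id if best_score >= threshold else None
```

Returning `None` when no score clears the threshold corresponds to the "target facial image not found" branch, in which case the device of claim 7 does not proceed to the modification condition check.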
13. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810259528.9A CN108416595A (en) | 2018-03-27 | 2018-03-27 | Information processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108416595A true CN108416595A (en) | 2018-08-17 |
Family
ID=63132600
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810259528.9A Pending CN108416595A (en) | 2018-03-27 | 2018-03-27 | Information processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108416595A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009093371A (en) * | 2007-10-05 | 2009-04-30 | Toshiba Corp | Monitoring device of biological collation device, monitoring device of passage control device, biological collation system, passage control system, monitoring method of biological collation device, and monitoring device of passage control device |
CN102789572A (en) * | 2012-06-26 | 2012-11-21 | 五邑大学 | Living body face safety certification device and living body face safety certification method |
CN105005597A (en) * | 2015-06-30 | 2015-10-28 | 广东欧珀移动通信有限公司 | Photograph sharing method and mobile terminal |
CN105117695A (en) * | 2015-08-18 | 2015-12-02 | 北京旷视科技有限公司 | Living body detecting device and method |
CN105631430A (en) * | 2015-12-30 | 2016-06-01 | 浙江宇视科技有限公司 | Matching method and apparatus for face image |
CN106302330A (en) * | 2015-05-21 | 2017-01-04 | 腾讯科技(深圳)有限公司 | Auth method, device and system |
CN106778496A (en) * | 2016-11-22 | 2017-05-31 | 重庆中科云丛科技有限公司 | Biopsy method and device |
CN107273794A (en) * | 2017-04-28 | 2017-10-20 | 北京建筑大学 | Live body discrimination method and device in a kind of face recognition process |
CN107818313A (en) * | 2017-11-20 | 2018-03-20 | 腾讯科技(深圳)有限公司 | Vivo identification method, device, storage medium and computer equipment |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109345253A (en) * | 2018-09-04 | 2019-02-15 | 阿里巴巴集团控股有限公司 | Resource transfers method, apparatus and system |
CN109934191A (en) * | 2019-03-20 | 2019-06-25 | 北京字节跳动网络技术有限公司 | Information processing method and device |
CN112491840A (en) * | 2020-11-17 | 2021-03-12 | 平安养老保险股份有限公司 | Information modification method and device, computer equipment and storage medium |
CN112491840B (en) * | 2020-11-17 | 2023-07-07 | 平安养老保险股份有限公司 | Information modification method, device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108494778A (en) | Identity identifying method and device | |
CN107644209A (en) | Method for detecting human face and device | |
CN108898186A (en) | Method and apparatus for extracting image | |
CN109522483A (en) | Method and apparatus for pushed information | |
CN108446387A (en) | Method and apparatus for updating face registration library | |
CN108154196A (en) | For exporting the method and apparatus of image | |
CN108491805A (en) | Identity identifying method and device | |
CN108388878A (en) | The method and apparatus of face for identification | |
CN108416326A (en) | Face identification method and device | |
CN108429816A (en) | Method and apparatus for generating information | |
CN108615159A (en) | Access control method and device based on blinkpunkt detection | |
CN108229485A (en) | For testing the method and apparatus of user interface | |
CN109241934A (en) | Method and apparatus for generating information | |
CN109389169A (en) | Method and apparatus for handling image | |
CN109887077A (en) | Method and apparatus for generating threedimensional model | |
CN108062544A (en) | For the method and apparatus of face In vivo detection | |
CN107910060A (en) | Method and apparatus for generating information | |
CN108509921A (en) | Method and apparatus for generating information | |
CN108416595A (en) | Information processing method and device | |
CN108509904A (en) | Method and apparatus for generating information | |
CN112464803A (en) | Image comparison method and device | |
CN108133197A (en) | For generating the method and apparatus of information | |
CN109285181A (en) | The method and apparatus of image for identification | |
CN108171208A (en) | Information acquisition method and device | |
CN108446659A (en) | Method and apparatus for detecting facial image |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180817 |