CN107729928A - Information acquisition method and device - Google Patents
Information acquisition method and device
- Publication number
- CN107729928A CN107729928A CN201710918840.XA CN201710918840A CN107729928A CN 107729928 A CN107729928 A CN 107729928A CN 201710918840 A CN201710918840 A CN 201710918840A CN 107729928 A CN107729928 A CN 107729928A
- Authority
- CN
- China
- Prior art keywords
- image
- user
- cluster
- result
- cluster result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
This application discloses an information acquisition method and device. One embodiment of the method includes: obtaining the feature of the face object in the image of each of multiple users, and clustering the images of the multiple users based on the features of the face objects to obtain multiple cluster results; determining the user to which each cluster result belongs, and using the annotation information obtained by annotating each cluster result as the annotation information of the images of the user in that cluster result. In this way, annotating a single cluster result completes the annotation of all the images of the corresponding user, saving the overhead of the annotation process.
Description
Technical field
This application relates to the field of computers, in particular to the field of face recognition, and more particularly to an information acquisition method and device.
Background technology
Testing the recognition accuracy of a face recognition system is a key step before the system is put into operation. When testing the recognition performance of a face recognition system, a massive number of user images must be annotated. At present, the common approach is to annotate these images one by one manually, which makes the annotation process expensive.
Summary of the invention
This application provides an information acquisition method and device to solve the technical problems described in the Background section above.
In a first aspect, this application provides an information acquisition method. The method includes: obtaining the feature of the face object in the image of each of multiple users, and clustering the images of the multiple users based on the features of the face objects to obtain multiple cluster results; determining the user to which each cluster result belongs, and using the annotation information obtained by annotating each cluster result as the annotation information of the images of the user in that cluster result.
In a second aspect, this application provides an information acquisition device. The device includes: a processing unit configured to obtain the feature of the face object in the image of each of multiple users, and to cluster the images of the multiple users based on the features of the face objects to obtain multiple cluster results; and an annotation unit configured to determine the user to which each cluster result belongs, and to use the annotation information obtained by annotating each cluster result as the annotation information of the images of the user in that cluster result.
With the information acquisition method and device provided by this application, the feature of the face object in the image of each of multiple users is obtained, the images of the multiple users are clustered based on the features of the face objects to obtain multiple cluster results, the user to which each cluster result belongs is determined, and the annotation information obtained by annotating each cluster result is used as the annotation information of the images of the user in that cluster result. Annotating a single cluster result thus completes the annotation of all the images of the corresponding user, saving the overhead of the annotation process.
Brief description of the drawings
Other features, objects and advantages of this application will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 shows an exemplary system architecture to which the information acquisition method of this application may be applied;
Fig. 2 shows a flow chart of one embodiment of the information acquisition method according to this application;
Fig. 3 shows a structural schematic diagram of one embodiment of the information acquisition device according to this application;
Fig. 4 shows a structural schematic diagram of a computer system suitable for implementing the terminal of the embodiments of this application.
Detailed description
This application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that, unless they conflict, the embodiments in this application and the features in the embodiments may be combined with each other. This application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Referring to Fig. 1, it shows an exemplary system architecture to which the information acquisition method of this application may be applied.
As shown in Fig. 1, the system architecture may include a server 101, a network 102, and a terminal 103. The network 102 may be a wired network. The server 101 may obtain images of multiple users, where the image of each user may contain only the face object corresponding to that user's face. For example, the images of multiple users obtained by the server 101 may be images uploaded by the users themselves, such as self-portraits or ID photos. The server 101 may send the images of the multiple users to the terminal 103, and the terminal 103 annotates them to obtain the annotation information of the images. The images of the multiple users and their annotation information can then be used to test the recognition accuracy of a face recognition system.
Referring to Fig. 2, it shows the flow of one embodiment of the information acquisition method according to this application. It should be noted that the information acquisition method provided by the embodiments of this application may be performed by a terminal (such as the terminal 103 in Fig. 1). The method includes the following steps:
Step 201: cluster the images of multiple users based on the feature of the face object in the image of each user.
In the present embodiment, any person who can be authenticated by the face recognition system may be referred to as a user. The face of a user appearing in an image may be referred to as the face object corresponding to that user's face. The image of each of the multiple users may contain only the face object corresponding to the face of one user. For example, the images of the multiple users may be the ID photos of the multiple users, each containing the face object corresponding to that user's face.
In the present embodiment, when an image contains only the face object corresponding to the face of one user, the image may be referred to as the image of that user, and correspondingly, the image of the user belongs to that user.
In the present embodiment, the images of multiple users may first be obtained, and the obtained images may belong to different users. To test the recognition performance of a face recognition system, the images of the multiple users need to be annotated to generate annotation information, and the images together with their annotation information are used to test the recognition performance of the system. The annotation information of a user's image includes an identifier of the user to whom the image belongs, which may be the user name of that user.
In the present embodiment, to annotate the obtained images of the multiple users and generate their annotation information, the feature of the face object in the image of each user may first be obtained. Then, based on those features, the images of the multiple users may be clustered to obtain multiple cluster results. The images within one cluster result are images belonging to the same user.
In some optional implementations of the present embodiment, the feature of the face object in the image of each of the multiple users may be obtained by a convolutional neural network. The convolutional neural network may be trained in advance on a massive number of user images containing face objects, so that the trained network can determine multiple discriminative features of face objects. When obtaining the features through the convolutional neural network, the images of the multiple users may be separately input into the network, and the feature vector output by its fully connected layer, representing the feature of the face object in the user's image, is obtained, thereby obtaining the features of the face objects in the images of the multiple users.
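The fully-connected-layer feature extraction described above can be illustrated with a toy sketch. The patent specifies no network architecture or framework, so everything below — the single convolution kernel, the global pooling step, and the `extract_face_feature` name — is a hypothetical stand-in for a trained convolutional neural network whose fully connected layer emits the feature vector.

```python
import numpy as np

def extract_face_feature(image, conv_kernel, fc_weights):
    """Toy stand-in for a trained CNN: one valid convolution, global
    average pooling, then a 'fully connected layer' whose output is
    taken as the face-object feature vector."""
    h, w = image.shape
    kh, kw = conv_kernel.shape
    conv = np.array([[np.sum(image[i:i + kh, j:j + kw] * conv_kernel)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])
    pooled = conv.mean()                     # global average pooling
    feature = fc_weights * pooled            # fully-connected-layer output
    return feature / (np.linalg.norm(feature) + 1e-8)  # L2-normalize

rng = np.random.default_rng(0)
image = rng.random((8, 8))      # stand-in for a user's face image
kernel = rng.random((3, 3))
fc = rng.random(4)              # stand-in for trained FC weights
vec = extract_face_feature(image, kernel, fc)
print(vec.shape)  # (4,)
```

In practice the features would come from a network trained on a large face dataset, as the description states; the point of the sketch is only the pipeline shape — image in, fixed-length normalized feature vector out.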
In some optional implementations of the present embodiment, when clustering the images of the multiple users based on the features of the face objects, a preset clustering algorithm may first be used to cluster the images according to those features, yielding cluster sub-results. The preset clustering algorithm may be the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm. The images within each cluster sub-result belong to the same user, and multiple cluster sub-results may belong to the same cluster result.
In other words, the images in multiple cluster sub-results that belong to the same cluster result may all be images of the same user. The cluster sub-results belonging to the same cluster result, i.e. those whose images belong to the same user, may be determined according to the similarity between the face objects in the images of the users in the respective cluster sub-results.
For example, the similarity between the face objects in the images of two cluster sub-results may be computed pairwise; when the similarity between the face objects in images from two different cluster sub-results exceeds a similarity threshold, the images in the two cluster sub-results may be determined to be images of the same user, and the two cluster sub-results belong to the same cluster result.
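The merging of cluster sub-results described above can be sketched as a union-find pass over pairwise similarities. This is an illustrative sketch, not the patent's implementation: the cosine similarity measure, the 0.8 threshold, and the `merge_subclusters` name are assumptions, and the initial sub-results would come from DBSCAN in the described variant.

```python
import numpy as np

def merge_subclusters(subclusters, threshold=0.8):
    """Merge cluster sub-results whose face-object features are similar
    enough to belong to the same user (cosine similarity > threshold).
    Each sub-result is a list of feature vectors; returns groups of
    sub-result indices forming the final cluster results."""
    n = len(subclusters)
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    for i in range(n):
        for j in range(i + 1, n):
            # compare every pair of images across the two sub-results
            if any(cos(a, b) > threshold
                   for a in subclusters[i] for b in subclusters[j]):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# two sub-results of the same user (similar features) and one of another
a1 = [np.array([1.0, 0.0, 0.1])]
a2 = [np.array([0.9, 0.1, 0.1])]
b = [np.array([0.0, 1.0, 0.0])]
print(merge_subclusters([a1, a2, b]))  # [[0, 1], [2]]
```

Union-find keeps the merge transitive: if sub-result A matches B and B matches C, all three end up in one cluster result even if A and C never directly exceed the threshold.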
Step 202: determine the user to which each cluster result belongs, and annotate the images of the user in each cluster result.
In the present embodiment, after the images are clustered according to the features of the face objects in the users' images and multiple cluster results are obtained, the user to which each cluster result belongs may be determined. Determining the user to which a cluster result belongs is equivalent to determining which user the images in that cluster result belong to.
In some optional implementations of the present embodiment, when determining the user to which each cluster result belongs, the cluster results containing images of users whose face objects have a similarity with the face object in a registered image greater than a similarity threshold may be found among the multiple cluster results; the user to whom such a registered image belongs is then taken as the user to whom the found cluster result belongs.
In the present embodiment, a registered image may refer to the image of a user registered in advance in the face recognition system. Registering a user's image in the face recognition system may amount to the system storing the correspondence between the user's image and the user to whom the image belongs.
In the present embodiment, the user to whom a registered image belongs may be the user to whom an image obtained in step 201 belongs.
For example, before the obtained images of the multiple users are clustered in step 201, the terminals of those users may each send a user image, such as the user's ID photo, to a server, and the server may then forward the images to a gate. The face recognition system running on the gate receives the ID photos sent by the terminals of the different users and may store the correspondence between each received ID photo and the user to whom it belongs.
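The registration step described above amounts to storing a correspondence between a user's image (in practice, its extracted feature) and the user. A minimal sketch; the `FaceRegistry` class and its method names are hypothetical, not part of the patent:

```python
import numpy as np

class FaceRegistry:
    """Minimal stand-in for the face recognition system's registration
    store: maps each registered user to that user's image feature."""

    def __init__(self):
        self._entries = {}  # user name -> registered-image feature vector

    def register(self, user, feature):
        self._entries[user] = np.asarray(feature, dtype=float)

    def lookup(self, user):
        return self._entries[user]

    def all_entries(self):
        return dict(self._entries)

registry = FaceRegistry()
registry.register("alice", [0.9, 0.1])  # hypothetical user and feature
print(registry.lookup("alice").tolist())  # [0.9, 0.1]
```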
In the present embodiment, when determining the user to which each cluster result belongs, i.e. when searching the multiple cluster results for those containing images whose face objects have a similarity with the face object in a registered image greater than the similarity threshold, the similarity between each registered image (the images of all the users registered in advance in the face recognition system) and the image of each user in each cluster result may be calculated. When the similarity between a registered image and the image of any one user in a cluster result exceeds the similarity threshold, that cluster result is taken as a cluster result containing an image whose face object is sufficiently similar to the face object in the registered image, and the user to whom the registered image belongs may be determined to be the user to whom the images in that cluster result belong.
Because the number of user images in a cluster result is much larger than the number of registered images, as soon as the similarity between one registered image and the face object in the image of any one user in a cluster result exceeds the similarity threshold, the cluster result is hit: the user to whom the cluster result belongs is determined to be the user of that registered image. This increases the probability of hitting a cluster result and reduces the time needed to hit one, i.e. the time needed to determine the user to which a cluster result belongs.
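The cluster-labeling procedure above — a cluster is "hit" as soon as any one of its images exceeds the similarity threshold against a registered image — can be sketched as follows. The cosine similarity, the 0.8 threshold, and the names and feature values are all illustrative assumptions:

```python
import numpy as np

def label_clusters(clusters, registered, threshold=0.8):
    """Assign each cluster result the name of the registered user whose
    registered-image feature matches any image in the cluster.

    clusters:   dict cluster id -> list of image feature vectors
    registered: dict user name -> registered-image feature vector
    Returns dict cluster id -> user name for every hit cluster."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    labels = {}
    for cid, images in clusters.items():
        for name, reg_feat in registered.items():
            # stop at the first image exceeding the threshold (a "hit")
            if any(cos(img, reg_feat) > threshold for img in images):
                labels[cid] = name
                break
    return labels

clusters = {
    0: [np.array([1.0, 0.0]), np.array([0.95, 0.05])],
    1: [np.array([0.0, 1.0])],
}
registered = {"alice": np.array([0.9, 0.1]), "bob": np.array([0.1, 0.9])}
print(label_clusters(clusters, registered))  # {0: 'alice', 1: 'bob'}
```

The early `break` reflects the efficiency argument in the description: with many more cluster images than registered images, the first match suffices to decide the whole cluster's user.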
In some optional implementations of the present embodiment, after the user to whom a cluster result belongs is determined, the cluster result may be annotated to obtain its annotation information. The annotation information of a cluster result includes an identifier of the user to whom the cluster result belongs, which may be that user's user name. The annotation information of the cluster result may then be used as the annotation information of the images of the user in the cluster result, thereby completing the annotation of the image of each user in the cluster result and generating the annotation information of those images; that is, the annotation of all the images belonging to the same user in the cluster result is completed, and the annotation information of all of those images is obtained.
In the present embodiment, after the multiple cluster results are annotated and the annotation information of the images of the multiple users is obtained, the images and their annotation information can be used to verify the recognition accuracy of the face recognition system. The image of each user may be separately input into the face recognition system, and for each input image it is judged whether the user identified by the system is consistent with the user indicated by the annotation information of that image. By counting the consistent and inconsistent cases, the recognition accuracy of the face recognition system can be determined.
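The accuracy check described above can be sketched as a simple consistency count over per-image predictions versus annotations. The image identifiers and user names below are hypothetical:

```python
def recognition_accuracy(predictions, annotations):
    """Compare the system's predicted user for each input image with the
    annotation produced by cluster labeling; return the fraction of
    consistent cases."""
    consistent = sum(1 for img_id, pred in predictions.items()
                     if annotations.get(img_id) == pred)
    return consistent / len(predictions)

# hypothetical image ids mapped to user names
annotations = {"img1": "alice", "img2": "alice", "img3": "bob", "img4": "bob"}
predictions = {"img1": "alice", "img2": "bob", "img3": "bob", "img4": "bob"}
print(recognition_accuracy(predictions, annotations))  # 0.75
```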
Referring to Fig. 3, as an implementation of the methods shown in the figures above, this application provides an embodiment of an information acquisition device, which corresponds to the method embodiment shown in Fig. 2.
As shown in Fig. 3, the information acquisition device includes a processing unit 301 and an annotation unit 302. The processing unit 301 is configured to obtain the feature of the face object in the image of each of multiple users, and to cluster the images of the multiple users based on the features of the face objects to obtain multiple cluster results. The annotation unit 302 is configured to determine the user to which each cluster result belongs, and to use the annotation information obtained by annotating each cluster result as the annotation information of the images of the user in that cluster result.
In some optional implementations of the present embodiment, the processing unit 301 includes: a feature obtaining subunit configured to obtain, through a convolutional neural network, the feature of the face object in the image of each of the multiple users.
In some optional implementations of the present embodiment, the processing unit 301 includes: a clustering subunit configured to cluster the images of the multiple users using a preset clustering algorithm to obtain cluster sub-results; to determine, based on the similarity of the face objects in the images of the users in the cluster sub-results, the cluster sub-results belonging to the same cluster result; and to aggregate the cluster sub-results belonging to the same cluster result to obtain each cluster result.
In some optional implementations of the present embodiment, the annotation unit 302 includes: a determination subunit configured to find, among the multiple cluster results, the cluster results containing images of users whose face objects have a similarity with the face object in a registered image greater than a similarity threshold, and to take the user to whom such a registered image belongs as the user to whom the found cluster result belongs.
In some optional implementations of the present embodiment, the annotation unit 302 includes: an image annotation subunit configured to generate the annotation information of a cluster result, the annotation information including an identifier of the user to whom the cluster result belongs, and to use the annotation information of the cluster result as the annotation information of the images of the user in the cluster result.
Fig. 4 shows a structural schematic diagram of a computer system suitable for implementing the terminal of the embodiments of this application.
As shown in Fig. 4, the computer system includes a central processing unit (CPU) 401, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage portion 408 into a random access memory (RAM) 403. The RAM 403 also stores the various programs and data needed for the operation of the computer system. The CPU 401, the ROM 402 and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input portion 406; an output portion 407; a storage portion 408 including a hard disk and the like; and a communication portion 409 including a network interface card such as a LAN card or a modem. The communication portion 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read from it can be installed into the storage portion 408 as needed.
In particular, the processes described in the embodiments of this application may be implemented as a computer program. For example, an embodiment of this application includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing instructions for executing the method shown in the flow chart. The computer program may be downloaded and installed from a network through the communication portion 409, and/or installed from the removable medium 411. When the computer program is executed by the central processing unit (CPU) 401, the above-mentioned functions defined in the method of this application are performed.
This application also provides a terminal, which may include the information acquisition device described with respect to Fig. 3. The terminal may be configured with one or more processors and a memory for storing one or more programs, which may contain instructions for performing the operations described in steps 201-202 above. When the one or more programs are executed by the one or more processors, the one or more processors perform the operations described in steps 201-202 above.
This application also provides a computer-readable medium, which may be included in the terminal or may exist separately without being assembled into the terminal. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the terminal, the terminal: obtains the feature of the face object in the image of each of multiple users, and clusters the images of the multiple users based on the features of the face objects to obtain multiple cluster results; determines the user to which each cluster result belongs, and uses the annotation information of the cluster result obtained by annotating each cluster result as the annotation information of the images of the user in that cluster result.
It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
The flow charts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to the various embodiments of this application. In this regard, each block in a flow chart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flow charts, and combinations of blocks in the block diagrams and/or flow charts, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
The units described in the embodiments of this application may be realized by software or by hardware. The described units may also be provided in a processor; for example, it may be described as: a processor including a processing unit and an annotation unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the processing unit may also be described as "a unit for obtaining the feature of the face object in the image of each of multiple users, and clustering the images of the multiple users based on the features of the face objects to obtain multiple cluster results".
The above description is only a preferred embodiment of this application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in this application is not limited to the technical solutions formed by the specific combination of the above technical features, but should also cover, without departing from the inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in this application.
Claims (12)
1. An information acquisition method, characterized in that the method includes:
obtaining the feature of the face object in the image of each of multiple users, and clustering the images of the multiple users based on the features of the face objects to obtain multiple cluster results;
determining the user to which each cluster result belongs, and using the annotation information of the cluster result obtained by annotating each cluster result as the annotation information of the images of the user in the cluster result.
2. The method according to claim 1, characterized in that obtaining the feature of the face object in the image of each of multiple users includes:
obtaining the feature of the face object in the image of each of the multiple users through a convolutional neural network.
3. The method according to claim 2, characterized in that clustering the images of the multiple users based on the features of the face objects to obtain multiple cluster results includes:
clustering the images of the multiple users using a preset clustering algorithm to obtain cluster sub-results;
determining, based on the similarity of the face objects in the images of the users in the cluster sub-results, the cluster sub-results belonging to the same cluster result;
aggregating the cluster sub-results belonging to the same cluster result to obtain each cluster result.
4. The method according to claim 3, characterized in that determining the user to which each cluster result belongs includes:
finding, among the multiple cluster results, the cluster results containing images of users whose face objects have a similarity with the face object in a registered image greater than a similarity threshold;
taking the user to whom the registered image belongs as the user to whom the found cluster result belongs.
5. The method according to claim 4, characterized in that using the annotation information of the cluster result, obtained by annotating each cluster result, as the annotation information of the images of the user in the cluster result comprises:
generating annotation information of the cluster result, the annotation information comprising an identifier of the user to which the cluster result belongs;
using the annotation information of the cluster result as the annotation information of the images of the user in the cluster result.
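The annotation step of claim 5 reduces to generating one piece of annotation information per cluster and attaching it to every image in that cluster. A minimal sketch, assuming a dict-based annotation format and a `user_id` field name that are not specified in the claims:

```python
def annotate_cluster(cluster_images, user_id):
    """Generate the cluster result's annotation information, which carries the
    identifier of the user the cluster belongs to, then attach that same
    annotation to every image in the cluster result."""
    annotation = {"user_id": user_id}  # cluster-level annotation information
    return {image: dict(annotation) for image in cluster_images}

# Hypothetical image names and user identifier.
labels = annotate_cluster(["img_001.jpg", "img_007.jpg"], user_id="u42")
```

The effect is that a single identity decision per cluster labels all of that user's images at once, which is the point of clustering before annotating.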
6. An information acquisition device, characterized in that the device comprises:
a processing unit, configured to obtain, for each user among a plurality of users, a feature of a face object in the user's image, and to cluster the images of the plurality of users based on the features of the face objects to obtain a plurality of cluster results;
an annotation unit, configured to determine the user to which each cluster result belongs, and to use annotation information of the cluster result, obtained by annotating each cluster result, as annotation information of the images of the user in the cluster result.
7. The device according to claim 6, characterized in that the processing unit comprises:
a feature obtaining subunit, configured to obtain, through a convolutional neural network, the feature of the face object in the image of each user among the plurality of users.
8. The device according to claim 7, characterized in that the processing unit comprises:
a clustering subunit, configured to cluster the images of the plurality of users using a preset clustering algorithm to obtain cluster sub-results; determine, based on similarities between the face objects in the images of the users in the cluster sub-results, which cluster sub-results belong to the same cluster result; and aggregate the cluster sub-results belonging to the same cluster result to obtain each cluster result.
9. The device according to claim 8, characterized in that the annotation unit comprises:
a determination subunit, configured to find, from the plurality of cluster results, a cluster result containing an image of a user whose face object has a similarity with a face object in a registered image that exceeds a similarity threshold, and to take the user to which the registered image belongs as the user to which the found cluster result belongs.
10. The device according to claim 9, characterized in that the annotation unit comprises:
an image annotation subunit, configured to generate annotation information of a cluster result, the annotation information comprising an identifier of the user to which the cluster result belongs, and to use the annotation information of the cluster result as the annotation information of the images of the user in the cluster result.
11. A terminal, characterized by comprising:
one or more processors; and
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-5.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710918840.XA CN107729928B (en) | 2017-09-30 | 2017-09-30 | Information acquisition method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107729928A true CN107729928A (en) | 2018-02-23 |
CN107729928B CN107729928B (en) | 2021-10-22 |
Family
ID=61208528
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710918840.XA Active CN107729928B (en) | 2017-09-30 | 2017-09-30 | Information acquisition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107729928B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103530652A (en) * | 2013-10-23 | 2014-01-22 | 北京中视广信科技有限公司 | Face clustering based video categorization method and retrieval method as well as systems thereof |
CN104252628A (en) * | 2013-06-28 | 2014-12-31 | 广州华多网络科技有限公司 | Human face image marking method and system |
CN105468760A (en) * | 2015-12-01 | 2016-04-06 | 北京奇虎科技有限公司 | Method and apparatus for labeling face images |
CN106446797A (en) * | 2016-08-31 | 2017-02-22 | 腾讯科技(深圳)有限公司 | Image clustering method and device |
CN106503656A (en) * | 2016-10-24 | 2017-03-15 | 厦门美图之家科技有限公司 | A kind of image classification method, device and computing device |
US20170262695A1 (en) * | 2016-03-09 | 2017-09-14 | International Business Machines Corporation | Face detection, representation, and recognition |
Non-Patent Citations (1)
Title |
---|
Liang Jianying et al.: "Probabilistic and Statistical Models and Optimization" (《概率统计模型与优化》), 30 June 2015 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108492145A (en) * | 2018-03-30 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and device |
CN109002843A (en) * | 2018-06-28 | 2018-12-14 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
CN109815788A (en) * | 2018-12-11 | 2019-05-28 | 平安科技(深圳)有限公司 | A kind of picture clustering method, device, storage medium and terminal device |
CN109815788B (en) * | 2018-12-11 | 2024-05-31 | 平安科技(深圳)有限公司 | Picture clustering method and device, storage medium and terminal equipment |
CN109658572A (en) * | 2018-12-21 | 2019-04-19 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
US11410001B2 (en) | 2018-12-21 | 2022-08-09 | Shanghai Sensetime Intelligent Technology Co., Ltd | Method and apparatus for object authentication using images, electronic device, and storage medium |
CN110826616A (en) * | 2019-10-31 | 2020-02-21 | Oppo广东移动通信有限公司 | Information processing method and device, electronic equipment and storage medium |
CN110826616B (en) * | 2019-10-31 | 2023-06-30 | Oppo广东移动通信有限公司 | Information processing method and device, electronic equipment and storage medium |
CN112765388A (en) * | 2021-01-29 | 2021-05-07 | 云从科技集团股份有限公司 | Target data labeling method, system, equipment and medium |
EP3905126A3 (en) * | 2021-02-26 | 2022-03-09 | Beijing Baidu Netcom Science And Technology Co. Ltd. | Image clustering method and apparatus |
US11804069B2 (en) | 2021-02-26 | 2023-10-31 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Image clustering method and apparatus, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107729928A (en) | Information acquisition method and device | |
CN108427939B (en) | Model generation method and device | |
CN108898186B (en) | Method and device for extracting image | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN108491805B (en) | Identity authentication method and device | |
CN108038469A (en) | Method and apparatus for detecting human body | |
CN109447156B (en) | Method and apparatus for generating a model | |
CN107578017A (en) | Method and apparatus for generating image | |
CN107908789A (en) | Method and apparatus for generating information | |
WO2022247005A1 (en) | Method and apparatus for identifying target object in image, electronic device and storage medium | |
CN108197592B (en) | Information acquisition method and device | |
CN107832305A (en) | Method and apparatus for generating information | |
CN107958247A (en) | Method and apparatus for facial image identification | |
CN107273503A (en) | Method and apparatus for generating the parallel text of same language | |
CN108287857B (en) | Expression picture recommendation method and device | |
CN107590807A (en) | Method and apparatus for detection image quality | |
CN109034069B (en) | Method and apparatus for generating information | |
CN107919129A (en) | Method and apparatus for controlling the page | |
CN108460365B (en) | Identity authentication method and device | |
EP3893125A1 (en) | Method and apparatus for searching video segment, device, medium and computer program product | |
CN108494778A (en) | Identity identifying method and device | |
CN108960316A (en) | Method and apparatus for generating model | |
CN107910060A (en) | Method and apparatus for generating information | |
CN109214501B (en) | Method and apparatus for identifying information | |
CN107679493A (en) | Face identification method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||