WO2015053604A1 - A face retrieval method - Google Patents
A face retrieval method
- Publication number
- WO2015053604A1 (PCT/MY2013/000180)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- query
- feature
- target
- images
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
- G06F16/784—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/56—Information retrieval; Database structures therefor; File system structures therefor of still image data having vectorial format
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention relates to a method of retrieving a face in a digital image, characterized by the steps of: detecting a face in a scene of an image; extracting the features of the face; building a feature vector for a specific number of faces to be sent; sending the features to the server using SOAP/REST technology (SET request); verifying the process by extracting the features of each face; building the result feature vector for each face, which contains the different results ranked by least dissimilarity; and sending the result vector for each face back to the client (GET request); whereby the results of the matching face are sent as a REST request, whilst the client side uses the similarity values embedded in the vector to decide on the outcome value.
Description
A FACE RETRIEVAL METHOD
Background of the Invention
Field of the Invention
This invention relates to a method of searching for a matched face among datasets of faces, and more particularly to a Web Based Image Retrieval (WBIR) technique that incorporates facial information embedded in a feature vector.
Description of Related Art
Retrieval of images, especially facial images, from a relatively large collection of reference images remains a significant problem. It is generally considered impractical for users to simply browse a relatively large collection of images, for example thumbnail images, so as to select a desired image. Traditionally, images have been indexed by keyword(s) allowing users to search the images based on associated keywords, with the results being presented using some form of keyword-based relevancy test. Such an approach is fraught with difficulties since keyword selection and allocation generally requires human tagging, which is a time-consuming process, and many images can be described by multiple or different keywords.
There is a need for a web-based method that addresses or at least ameliorates one or more problems inherent in the prior art, including compensating for changes in a target image due to pose change. The reference in this specification to any prior publication (or information derived from the prior publication), or to any matter which is known, is not, and should not be taken as, an acknowledgment or admission or any form of suggestion that the prior publication (or information derived from the prior publication) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.
Summary of Invention
In a first broad form the present invention provides a form of web based image retrieval that embeds dynamic facial information in a vector which is sent along with a web-based request using REST and SOAP. In a second broad form the present invention seeks to provide for recognition, searching and/or retrieval of images based on facial information that has been sent.
In a particular example form, the WBIR provides a process for locating one or more matched faces in the images saved in the database, as well as attempting to verify an identity associated with each face based on the sent information.
The method extracts a set of features from one or more images. The method provides for face verification by sending the features as a server request to determine whether there are any faces in the selected image(s) and, if so, extracting any identification or personality information from metadata associated with the image(s). This can help narrow down the search required for face recognition. A dominance factor can be assigned to at least one face, and an attempt can be made to verify at least one of the faces in the selected image, returning a confidence score associated with the face. In a further particular form there is a method of image retrieval, including: defining a query image set from one or more selected images; dynamically determining a query feature set from the query image set; analysing any facial information; determining a dissimilarity measurement between at least one query feature of the query feature set and at least one target feature of a target set; and identifying one or more matching images based on the dissimilarity measurement.
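By way of illustration only, the per-face query and result vectors described above might be represented with data structures along the following lines; the field names, the per-face result layout and the ranking helper are assumptions made for this sketch and are not prescribed by the description.

```python
# Hypothetical data structures for the per-face query and result vectors.
# All names and fields are illustrative assumptions, not part of the original description.
from dataclasses import dataclass, field
from typing import List


@dataclass
class QueryFace:
    """One detected face in a selected image, with its extracted feature vector."""
    face_id: int
    features: List[float]      # feature vector produced by some face-feature extractor
    dominance: float = 1.0     # dominance factor assigned to this face


@dataclass
class MatchResult:
    """One candidate match returned by the server for a query face."""
    target_image_id: str
    dissimilarity: float       # smaller means more similar
    confidence: float          # confidence score for identity verification


@dataclass
class FaceResultVector:
    """Result vector for one query face, to be returned ranked by least dissimilarity."""
    face_id: int
    matches: List[MatchResult] = field(default_factory=list)

    def ranked(self) -> List[MatchResult]:
        return sorted(self.matches, key=lambda m: m.dissimilarity)
```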
Brief Description of the Drawings
The features of the invention will be more readily understood and appreciated from the following detailed description, accompanied by the drawings of the preferred embodiment of the present invention, in which:
Fig. 1 illustrates a flowchart showing a method of searching and retrieval of facial images based on the facial images' vectors using REST/SOAP technology.
Detailed Description of the Invention
As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting but merely as a basis for claims. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words "include," "including," and "includes" mean including, but not limited to. Further, the words "a" or "an" mean "at least one" and the word "plurality" means one or more, unless otherwise mentioned. Where the abbreviations or technical terms are used, these indicate the commonly accepted meanings as known in the technical field. For ease of reference, common reference numerals will be used throughout the figures when referring to the same or similar features common to the figures. The present invention will now be described with reference to Fig. 1.
In one form there is provided a method of identifying and/or extracting one or more images, preferably facial images, from a 'target image set' saved in a remote server, being one or more target images (i.e. reference images). The method includes constructing a web-based 'query feature set' by identifying, determining, calculating or extracting a 'set of features' from 'one or more selected images', which define a 'query image set'.
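A minimal sketch of constructing such a query feature set is given below; the feature extractor is a stand-in for any face-feature extraction routine, and the element-wise averaging used to form a single 'virtual query' vector is only one possible strategy, assumed here for illustration.

```python
# Sketch only: building a query feature set from one or more selected images.
# `extract_face_features` is a placeholder for any face-feature extractor;
# the averaging step ("virtual query") is an assumed strategy, not mandated by the text.
from typing import Callable, List, Sequence

FeatureVector = List[float]


def build_query_feature_set(images: Sequence[object],
                            extract_face_features: Callable[[object], FeatureVector]
                            ) -> List[FeatureVector]:
    """One query feature vector per selected image; together these form the query feature set."""
    return [extract_face_features(img) for img in images]


def virtual_query(query_feature_set: Sequence[FeatureVector]) -> FeatureVector:
    """Collapse the query feature set into a single vector by element-wise averaging."""
    n = len(query_feature_set)
    dim = len(query_feature_set[0])
    return [sum(vec[i] for vec in query_feature_set) / n for i in range(dim)]
```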
A 'distance' or 'dissimilarity measurement' is then determined, calculated or constructed between a 'query feature' from the query feature set and a 'target feature' from the target image set. For example, the dissimilarity measurement
may be obtained as a function of the weighted summation of differences or distances between the query features and the target features over all of the target image set. If there are suitable image matches, one or more 'identified images' are identified, obtained and/or extracted from the target image set and can be sent as a reply to the server request by applying SOAP technology. The identified images may be selected based on the dissimilarity measurement over all query features, for example, by selecting images having a minimum dissimilarity measurement. The weighted summation uses weights in the query feature set. The order of sending the images in the request is ranked, for example based on the dissimilarity measurement. The identified images can be queued in order from least dissimilar by increasing dissimilarity. The query feature set may be extracted from a query image set having two or more selected images (selected by the users).
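As an illustrative sketch of the weighted-summation dissimilarity and the least-dissimilar-first ranking described above: the Euclidean distance, the per-feature weights and the top-k cut-off used below are assumptions for this example.

```python
# Sketch of the dissimilarity measurement and ranking; the distance metric, weights and
# top-k cut-off are illustrative assumptions.
from typing import Dict, List, Sequence, Tuple

FeatureVector = List[float]


def euclidean(a: Sequence[float], b: Sequence[float]) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def dissimilarity(query_features: Sequence[FeatureVector],
                  target_feature: FeatureVector,
                  weights: Sequence[float]) -> float:
    """Weighted summation of distances between each query feature and a target feature."""
    return sum(w * euclidean(q, target_feature) for w, q in zip(weights, query_features))


def rank_targets(query_features: Sequence[FeatureVector],
                 target_set: Dict[str, FeatureVector],
                 weights: Sequence[float],
                 top_k: int = 20) -> List[Tuple[str, float]]:
    """Target image ids ordered from least dissimilar (most similar) first, cut off at top_k."""
    scored = [(image_id, dissimilarity(query_features, feat, weights))
              for image_id, feat in target_set.items()]
    scored.sort(key=lambda item: item[1])
    return scored[:top_k]
```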
The image(s) extracted from the target image set using the query image set can be conveniently displayed according to a relevancy ranking. There are several ways to rank the one or more identified images that are output or displayed. One possible and convenient way is to use the dissimilarity measurement described above. That is, the least dissimilar (most similar) identified images are displayed first, followed by more dissimilar images up to some number of images or dissimilarity limit. Typically, for example, the twenty least dissimilar identified images might be displayed.
Examples
An example software application implementation of the method, in which REST and SOAP technologies can be applied, is described in this section. When the query feature comes in as a SOAP request, it should first be checked to ensure that it conforms to the format specified by the users' WSDL document. In practice, most SOAP APIs use a framework that takes care of much of the grunt work when handling requests. SOAP APIs generally use a single endpoint for all requests (as a general rule; some large APIs separate disparate functions onto different endpoints), and as a result users will likely have either a large script at that endpoint or a set of calls that are executed depending on the particular request.
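Purely as an illustration of carrying the query feature vector in a SOAP request, the following sketch builds an envelope by hand and posts it with the `requests` library; the endpoint URL, namespace and element names are hypothetical and would in practice be dictated by the service's WSDL.

```python
# Hedged sketch: sending a query feature vector as a SOAP request.
# The endpoint, namespace and element names below are hypothetical placeholders.
import requests

SOAP_ENDPOINT = "http://example.com/face-retrieval/soap"   # hypothetical endpoint


def send_soap_query(face_id: int, features: list) -> str:
    values = ", ".join(f"{v:.6f}" for v in features)
    envelope = f"""<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <RetrieveFace xmlns="http://example.com/face-retrieval">
      <FaceId>{face_id}</FaceId>
      <Features>{values}</Features>
    </RetrieveFace>
  </soap:Body>
</soap:Envelope>"""
    response = requests.post(
        SOAP_ENDPOINT,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": "RetrieveFace"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text   # raw XML reply carrying the matching-face result vector
```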
The retrieval of the matching face will be a REST request, as the information will come in via GET. As such, all the information will need to be URL-encoded during transmission; users will likely want to decode it before subjecting it to any further processing. Different request types should be addressed to different endpoints (URLs); if users want to use a single script to handle all requests, they can either present it to developers in that manner (all requests going to a single endpoint) or configure their web server to map many endpoints to a single script. I would generally suggest the latter, as it is in line with the specification and allows users to make changes later without affecting the external interfaces that developers use. The query feature set is implied/determinable by the example images captured by the users (i.e. the one or more selected images forming the query image set). A query feature set formation module generates a 'virtual query image' as a query feature set that is derived from the users' selected image(s). The query feature set is comprised of query features, typically vectors.
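For the GET-based retrieval request described above, a minimal server-side sketch of the URL-decoding step might look as follows; the parameter name `features` and the JSON encoding of the vector are assumptions for this example.

```python
# Sketch of decoding the URL-encoded feature vector carried in a GET retrieval request.
# The "features" parameter name and its JSON encoding are illustrative assumptions.
import json
from urllib.parse import urlparse, unquote


def parse_retrieval_request(url: str) -> list:
    """Extract and decode the feature vector from the request URL's query string."""
    raw_query = urlparse(url).query                      # e.g. "features=%5B0.12%2C0.87%5D"
    params = dict(pair.split("=", 1) for pair in raw_query.split("&") if "=" in pair)
    return json.loads(unquote(params["features"]))       # URL-decode, then parse the vector

# Example:
# parse_retrieval_request("http://example.com/retrieve?features=%5B0.12%2C0.87%2C0.33%5D")
# -> [0.12, 0.87, 0.33]
```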
Claims
1. A method of face verification using at least one processing system, including: obtaining a set of features from a selected image; determining if there are any faces in the selected image and, if so, assigning a dominance factor to at least one face; and attempting to verify an identity of the at least one face in the selected image and returning a confidence score.
2. The method wherein attempting to verify the identity of the at least one face includes extracting any identity information from metadata associated with the selected image and sending it using SOAP and REST.
3. The method as claimed in claim 2, wherein the identity information is used to reduce a target image set to target images having similar identity information, and verification is performed using the reduced target image set.
4. The method as claimed in claim 1, wherein many faces are integrated in the same feature vector and sent using SOAP/REST technology to the server to retrieve the matching face.
5. The method as claimed in claim 1, wherein the matching face information for each face embedded in the vector is sent back to the client using the same technology.
6. A method of facial image retrieval, including: defining a query image set from one or more selected facial images; determining a query feature set from the query image set; determining a dissimilarity measurement between at least one query feature of the query feature set and at least one target feature of a target facial image set; and, identifying one or more
identified facial images from the target facial image set based on the dissimilarity measurement.
7. The method as claimed in claim 6, wherein the dissimilarity measurement uses a weighted summation of feature distances.
8. The method as claimed in claim 6, wherein SOAP technology is used to get/set data from the server.
9. The method as claimed in claim 6, wherein the one or more identified facial images are sent to users in a ranking order dependent on the dissimilarity measurement.
10. The method as claimed in claim 6, wherein the query image set is obtained from two or more selected facial images.
11. A computer program product for facial image retrieval, adapted to: define a query image set from one or more user-selected facial images; determine a query feature set from the query image set; determine a dissimilarity measurement between at least one query feature of the query feature set and at least one target feature of a target facial image set; and identify one or more identified facial images from the target facial image set based on the dissimilarity measurement.
12. The computer program product as claimed in claim 11, wherein users can select a plurality of thumbnail images to define the query image set.
13. The computer program product as claimed in claim 11, being a web based application.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MYPI2013701892 | 2013-10-08 | ||
MYPI2013701892 | 2013-10-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015053604A1 (en) | 2015-04-16 |
Family
ID=52813364
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/MY2013/000180 WO2015053604A1 (en) | 2013-10-08 | 2013-10-08 | A face retrieval method |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2015053604A1 (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003076717A (en) * | 2001-09-04 | 2003-03-14 | Nippon Telegr & Teleph Corp <Ntt> | System and method for information retrieval, information retrieval program and recording medium |
JP2005157763A (en) * | 2003-11-26 | 2005-06-16 | Canon Inc | Search device and method for controlling search |
JP2006260405A (en) * | 2005-03-18 | 2006-09-28 | Ricoh Co Ltd | Image information updating system, image inputting device, image processing device, image updating device, image information updating method, image information updating program, and recording medium recording the program |
JP2008152396A (en) * | 2006-12-14 | 2008-07-03 | Canon Inc | Image search system and control method therefor |
JP2008234377A (en) * | 2007-03-22 | 2008-10-02 | Fujifilm Corp | Image sorting device and method, and program |
JP2013501978A (en) * | 2009-08-07 | 2013-01-17 | グーグル インコーポレイテッド | Face recognition with social network support |
JP2011186733A (en) * | 2010-03-08 | 2011-09-22 | Hitachi Kokusai Electric Inc | Image search device |
JP2011203769A (en) * | 2010-03-24 | 2011-10-13 | Seiko Epson Corp | Image retrieval device and method, information terminal, information processing method, image retrieval system, and program |
JP2013101431A (en) * | 2011-11-07 | 2013-05-23 | Hitachi Kokusai Electric Inc | Similar image search system |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180232566A1 (en) * | 2017-02-15 | 2018-08-16 | Cisco Technology, Inc. | Enabling face recognition in a cognitive collaboration environment |
CN110942046A (en) * | 2019-12-05 | 2020-03-31 | 腾讯云计算(北京)有限责任公司 | Image retrieval method, device, equipment and storage medium |
CN110942046B (en) * | 2019-12-05 | 2023-04-07 | 腾讯云计算(北京)有限责任公司 | Image retrieval method, device, equipment and storage medium |
CN112258009A (en) * | 2020-06-12 | 2021-01-22 | 广元量知汇科技有限公司 | Intelligent government affair request processing method |
CN112258009B (en) * | 2020-06-12 | 2021-10-26 | 新疆新创高科企业管理有限公司 | Intelligent government affair request processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13895325; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A DATED 04-08-16 |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 13895325; Country of ref document: EP; Kind code of ref document: A1 |