WO2013022226A2 - 고객 인적정보 생성방법 및 생성장치, 그 기록매체 및 포스 시스템 - Google Patents
Customer personal information generating method and apparatus, recording medium therefor, and POS system (고객 인적정보 생성방법 및 생성장치, 그 기록매체 및 포스 시스템)
- Publication number
- WO2013022226A2 (PCT/KR2012/006177)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- customer
- personal information
- face
- age
- gender
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/446—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7747—Organisation of the process, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/30—Individual registration on entry or exit not involving the use of a pass
- G07C9/32—Individual registration on entry or exit not involving the use of a pass in combination with an identity check
- G07C9/37—Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/178—Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
Definitions
- The present invention relates to a method and apparatus for generating customer personal information, and to a recording medium and POS system therefor.
- Facial features of the customer's face are detected from an image extracted from the video input through image input means provided at one position on the POS terminal side, and personal information such as the gender and age of the customer is estimated using the facial features.
- More specifically, the present invention relates to a method and apparatus for generating customer personal information, a recording medium, and a POS system for generating various statistics based on the personal information of customers, such as purchase information by customer gender and age.
- A POS (point-of-sale) terminal is a store-dedicated terminal, or terminal for a store system, that collects, stores, and transmits data such as product name and price at the point of sale in retail stores, supermarkets, and large sales outlets.
- POS terminals not only settle sales amounts but also collect and process various information and data needed for retail management, so they are used in most large-scale sales stores such as E-Mart and Home Plus.
- The POS terminal described above is provided with a barcode reader, an automatic barcode reading device.
- The conventional POS terminal identifies products based on the barcode displayed on the product packaging, and can therefore be used to generate various product-based statistical information, but it cannot be used to generate statistical information based on the customer's personal information, such as the customer's age and gender.
- In other words, the conventional POS terminal cannot be used to generate information based on customers' personal information, such as information on age-specific preference for a specific product or gender-specific preference for a specific product.
- An object of the present invention, which solves the problems of the prior art, is to provide a customer personal information generating method and apparatus — together with a recording medium and a POS system — that detect facial features of the customer's face from an image extracted from the video input through image input means provided at one position on the POS terminal side, generate personal information such as the gender and age of the customer from those features, and thereby generate various statistics based on the customer's personal information, such as purchase information by customer gender and age.
- One embodiment according to one aspect of the present invention is a customer personal information generating method for a POS system comprising a POS terminal or a server connected thereto via a network, the method comprising: (a) detecting a face region of the customer from an image extracted from the video input through image input means provided at one position on the POS terminal side; (b) detecting facial feature points in the detected face region; and (c) generating personal information by estimating at least one of the gender and age of the customer using the detected face region and the detected facial feature points.
- a computer-readable recording medium having recorded thereon a program for executing each step of the customer personal information generating method.
- a POS system using a customer personal information generating method.
- One embodiment according to another aspect of the present invention is a customer personal information generating apparatus for a POS system comprising a POS terminal or a server connected thereto via a network, the apparatus comprising:
- a face region detection module for detecting a face region of the customer from an image extracted from the video input through image input means provided at one position on the POS terminal side;
- a facial feature point detection module for detecting facial feature points in the detected face region; and
- a personal information generation module for generating personal information by estimating at least one of the gender and age of the customer using the detected face region and the detected facial feature points.
- One embodiment according to another aspect of the present invention is a customer personal information generating method for a customer management system comprising a customer-facing terminal or a server connected thereto via a network,
- in which facial features of the customer are detected from an image extracted from the video input through image input means provided at one position of the customer-facing terminal, to generate personal information of the customer regarding at least one of gender and age.
- One embodiment according to another aspect of the present invention is a customer personal information generating apparatus for a customer management system comprising a customer-facing terminal or a server connected thereto via a network,
- in which facial features of the customer are detected from an image extracted from the video input through image input means provided at one position of the customer-facing terminal, to generate personal information of the customer regarding at least one of gender and age.
- The present invention as described above has the advantage that, using the POS terminals installed in retail stores, supermarkets, large sales outlets, and the like, it can generate not only product-based statistical information but also the personal information of customers, and statistical information based on that personal information.
- Since an asymmetric Haar-like feature is used to detect the non-frontal face region, the detection reliability for non-frontal faces is high, which in turn improves the tracking performance for the face region.
- FIG. 1 is a block diagram showing a schematic configuration of an apparatus for generating customer personal information according to an embodiment of the present invention.
- FIG. 2A is a block diagram showing a first embodiment of the POS system of the present invention.
- FIG. 2B is a block diagram showing a second embodiment of the POS system of the present invention.
- FIG. 2C is a block diagram showing a third embodiment of the POS system of the present invention.
- FIG. 2D is a block diagram showing a fourth embodiment of the POS system of the present invention.
- FIG. 3 is a photograph showing 28 feature points of a face in relation to the generation of customer personal information according to an embodiment of the present invention.
- Figure 4a is a first picture showing an example screen of the UI module in connection with the generation of customer personal information according to an embodiment of the present invention.
- Figure 4b is a second picture showing an example screen of the UI module in connection with the generation of customer personal information according to an embodiment of the present invention.
- FIG. 5 is a flowchart illustrating a process of a customer personal information generating method according to an embodiment of the present invention.
- FIG. 6 is a view showing the basic forms of the conventional Haar-like feature.
- FIG. 7 is an exemplary photograph of Haar-like features for frontal face region detection in relation to customer personal information generation according to an embodiment of the present invention.
- FIG. 8 is a photograph illustrating an example of Haar-like features for detecting a non-frontal face region in connection with generating customer personal information according to an embodiment of the present invention.
- FIG. 9 is a diagram illustrating newly added rectangular features in connection with generating customer personal information according to an embodiment of the present invention.
- FIG. 10 is an exemplary photograph of Haar-like features selected from FIG. 9 for detection of a non-frontal face region in relation to customer personal information generation according to an embodiment of the present invention.
- FIG. 11 shows feature probability curves in a training set for the conventional Haar-like feature and the Haar-like feature applied to the present invention.
- FIG. 13 is a photograph of the conventional ASM method applied to a low-resolution or poor-quality profile image.
- FIG. 15 is a flowchart illustrating the gender estimation process of a method for generating customer personal information according to an embodiment of the present invention.
- FIG. 16 is an exemplary photograph defining the gender estimation face region in the gender estimation process of a method for generating customer personal information according to an embodiment of the present invention.
- FIG. 17 is a flowchart illustrating the age estimation process of a method for generating customer personal information according to an embodiment of the present invention.
- FIG. 18 is an exemplary photograph defining the age estimation face region in the age estimation process of a method for generating customer personal information according to an embodiment of the present invention.
- the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
- FIG. 1 is a block diagram showing a schematic configuration of an apparatus for generating customer personal information for generating customer personal information according to an embodiment of the present invention.
- The customer personal information generating apparatus 1000 may implement the customer personal information generating method by installing and running a program for generating personal information such as the gender and age of a customer on a general computer system having computing elements such as a central processing unit, a system DB, a system memory, and interfaces.
- The POS system of the present embodiment generates personal information such as the gender and age of a customer from an image extracted from the video input through image input means provided at one position on the POS terminal side, and can generate various statistics based on that personal information, such as purchase information by customer gender and age.
- FIG. 2A is a block diagram showing a first embodiment of a POS system in which the customer personal information generating device of the present embodiment is implemented.
- In the first embodiment, a local operating server 10 implemented in embedded form within a single POS terminal 1 is configured, so the two are integrally implemented.
- Personal information obtained through the POS terminal 1 may be transmitted to the local operating server 10, where it is integrated and managed to generate statistical information.
- 2B is a block diagram showing a second embodiment of the POS system in which the customer personal information generating device of the present embodiment is implemented.
- The POS system of the second embodiment includes a plurality of POS terminals 1 and a local operating server 10 connected to each POS terminal 1.
- Personal information obtained through each POS terminal 1 may be transmitted to the local operating server 10, where it is integrated and managed to generate statistical information.
- 2C is a block diagram showing a third embodiment of the POS system in which the customer personal information generating device of the present embodiment is implemented.
- The POS system of the third embodiment comprises a plurality of POS terminals 1 provided in each of a plurality of chain stores, and a central operating server 20 connected via a network to each POS terminal 1 of each chain store.
- The personal information obtained through each POS terminal 1 of each chain store is transmitted to the central operating server 20 through a network such as the Internet.
- The central operating server 20 may generate statistical information by integrally managing the personal information.
- FIG. 2D is a block diagram showing a fourth embodiment of the POS system in which the customer personal information generating device of the present embodiment is implemented.
- The POS system of the fourth embodiment comprises POS terminals 1 and local operating servers 10 respectively provided in a plurality of chain stores, and a central operating server 20 connected via a network to the POS terminals 1 or the local operating servers 10.
- The personal information obtained through each POS terminal 1 of each chain store is transmitted to the local operating server 10 and managed there primarily.
- The personal information and related purchase information managed at the local operating server 10 are transmitted to the central operating server 20 through a network such as the Internet.
- The central operating server 20 may generate statistical information by collectively managing the personal information for all the chain stores.
- From a broader perspective, the POS terminal can be understood to extend to customer-facing terminals of various forms.
- For example, it may extend to an advertisement display terminal installed in a subway, at a bus stop, on a building's outer wall, and the like to display an advertisement screen, in addition to a POS terminal facing a customer purchasing a product in a retail store, supermarket, or large sales outlet.
- Such a terminal may correspond to a plurality of customers.
- In that case, the content of the advertised product that the customer (or potential customer) shows interest in may be statistically aggregated as potential purchase information instead of purchase information.
- the customer personal information generating device 1000 of the present embodiment includes a face region detection module 110.
- The face region detection module 110 detects the face region of the customer from an image captured from the video input through image input means 180, for example a camera, provided at one position of the POS terminal.
- Faces may be detected over a viewing-angle range of -80° to +80°.
- The image input means 180 may be installed, for example, on one side of the POS terminal 1, facing the face of the customer.
- The image input means 180 may be a camera capable of capturing the face of a customer located in front of it in real time, and is more preferably a digital camera with an image sensor attached.
- the face region detection module 110 creates a YCbCr color model from the RGB color information of the extracted image.
- the face region detection module 110 separates color information and brightness information from the created color model.
- the face region detection module 110 detects a face candidate region based on the brightness information.
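The brightness/color separation above relies on an RGB-to-YCbCr conversion. The following is a minimal sketch using the standard ITU-R BT.601 coefficients — an assumption, since the patent does not specify the exact conversion matrix:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel (components 0-255) to YCbCr using the
    ITU-R BT.601 coefficients.

    Y carries the brightness information used for face-candidate
    detection; Cb and Cr carry the color information, so the two kinds
    of information can be processed separately.
    """
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

Pure white maps to (255, 128, 128) — full brightness with neutral chroma — which is why a skin-tone or brightness test on Y/Cb/Cr is insensitive to overall illumination in a way raw RGB is not.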
- The face region detection module 110 defines a rectangular feature model for the detected face candidate region.
- The face region detection module 110 detects a face region based on learning data trained by the AdaBoost learning algorithm on the rectangular feature model.
- The face region detection module 110 determines the detected face region to be a valid face region when the magnitude of the AdaBoost result value exceeds a predetermined threshold value.
- The function of determining the detected face region as a valid face region when the magnitude of the AdaBoost result value exceeds a predetermined threshold value may also be configured as a separate face validity determination module 120, apart from the other functions of the face region detection module 110.
- the customer personal information generating device 1000 of the present embodiment also includes a facial feature detection module 130.
- The facial feature point detection module 130 performs facial feature point detection on the regions determined to be valid by the face region detection module 110 (or, if the face validity determination module 120 is configured separately, by the face validity determination module 120).
- The facial feature point detection module 130 may detect, for example, 28 facial feature points defined at the positions of the eyebrows, eyes, nose, and mouth, together with the face rotation (viewing) angle.
- For example, feature points 0, 1, 2, 3 defining the face region, feature points 4, 5, 6, 7, 12, 13, 14, 15 defining the eyebrows, feature points 22, 23, 24, 25, 26, 27 defining the eyes, feature points 10, 11, 16, 17, 18 defining the nose, and feature points 8, 9, 19, 20, 21 defining the mouth may be detected as facial feature points.
- the customer personal information generating device 1000 of the present embodiment also includes a gender estimation module 140.
- the gender estimating module 140 estimates the gender of the customer using the detected face region.
- the gender estimating module 140 performs a function of cutting the face region for gender estimation from the detected face region.
- the gender estimating module 140 normalizes the size of the cropped face region image.
- the gender estimating module 140 performs a function of normalizing the histogram.
- the gender estimating module 140 performs a gender estimating function by a support vector machine (SVM) using a normalized image.
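The gender estimation pipeline described above — crop the face region, normalize its size, normalize the histogram, then apply an SVM — can be sketched as follows. This is an illustration only: the linear weights, bias, and label mapping are hypothetical placeholders standing in for a model trained on face images, not the patent's trained SVM.

```python
def normalize_histogram(pixels):
    """Min-max stretch of grayscale pixel values to the full 0-255
    range, a simple histogram normalization that reduces the effect of
    overall illumination differences between captured face images."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0 for _ in pixels]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

def svm_decide(x, w, b):
    """Linear SVM decision function sign(w . x + b) over the normalized
    image vector x. The mapping of +1 to 'male' and -1 to 'female' is an
    assumption for illustration."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "male" if score >= 0 else "female"
```

In practice the SVM would typically use a kernel and be trained on many labeled, preprocessed face crops; only the decision step is shown here.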
- the estimated gender information of the customer may be stored in the gender DB 145.
- the customer personal information generating device 1000 of the present embodiment also includes an age estimation module 150.
- the age estimation module 150 estimates the age of the customer by using the detected face region.
- the age estimating module 150 cuts out an age estimating face region from the detected face region.
- the age estimation module 150 performs a function of normalizing the size of the cropped face region image.
- the age estimation module 150 performs a function of performing local illumination correction.
- The age estimation module 150 constructs an input vector from the normalized image and projects it onto an eigenspace.
- the age estimation module 150 performs a function of estimating age by using a quadratic regression.
- the estimated age information of the customer may be stored in the age DB 155.
- the gender estimating module 140 and the age estimating module 150 may be integrated to form a personal information generating module 145.
- the customer personal information generating device 1000 of the present embodiment also includes a statistics generating module 160.
- The statistics generation module 160 generates, based on the estimated personal information on the gender and age of the customer, statistical information on at least one of the customer's gender and age, or statistical information on customer headcount per time unit based on the generated personal information.
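Aggregating the generated personal information into per-time-unit and per-gender statistics can be sketched as follows; the record layout (hour, gender, age) is a hypothetical shape for the per-customer results, not a format specified by the patent.

```python
from collections import Counter

def hourly_stats(records):
    """records: list of (hour, gender, age) tuples, one per detected
    customer. Returns customer headcount per hour and per gender, the
    kind of aggregate the statistics generation module produces."""
    per_hour = Counter(hour for hour, _, _ in records)
    per_gender = Counter(gender for _, gender, _ in records)
    return per_hour, per_gender
```

Joining such records with barcode-derived purchase data would yield the gender- and age-specific product preference statistics described later.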
- The customer personal information generating apparatus 1000 of the present embodiment is also provided with a UI (User Interface) module 170 that enables setting of the image input means 180 provided on one side of the POS terminal 1 (FIG. 4A) and display of the estimated age/gender results and the like (FIG. 4B).
- The UI module 170 includes image capturing means 171, face information viewing means 172, headcount statistics viewing means 173, gender statistics viewing means 174, and age statistics viewing means 175.
- the image capturing means 171 captures an image from an image input through the image input means 180.
- the face information viewing unit 172 may be a display screen that graphically shows a face of a customer detected through the image input unit 180.
- the user may check the face of the customer detected through the face information viewing means 172 on the screen to check whether the estimated gender or age is correct.
- In the headcount statistics viewing means 173, statistical information on customer headcount for each time period, based on the generated personal information, may be checked.
- In the gender statistics viewing means 174, statistical information based on the personal information about the estimated gender of the customer may be checked.
- The statistical information may be statistics based on the gender information of customers, such as information on gender-specific preference for a specific product.
- In the age statistics viewing means 175, statistical information based on the personal information about the estimated age of the customer may be checked.
- The statistical information may be statistics based on the age information of customers, such as information on age-specific preference for a specific product.
- The customer personal information generating apparatus 1000 of the present embodiment also includes a control module 100 that performs overall control of the face region detection module 110, face validity determination module 120, facial feature point detection module 130, gender estimation module 140, age estimation module 150, statistics generation module 160, UI (User Interface) module 170 — with its image capturing means 171, face information viewing means 172, headcount statistics viewing means 173, gender statistics viewing means 174, and age statistics viewing means 175 — the image input means 180, and the purchase information input means 190.
- Reference numeral 190 denotes purchase information input means, which may be configured, for example, as a barcode reader; it may be directly connected to each server (10, 20) or connected through the POS terminal.
- FIG. 5 is a flowchart illustrating a process of a customer personal information generating method according to an embodiment of the present invention.
- The customer personal information generating method proceeds from the start step (S10) through the image capture step (S20), face region detection step (S30), face validity determination step (S40), facial feature point detection step (S50), gender estimation step (S60), age estimation step (S70), and result output step (S80) to the end step (S90).
- Gender information estimated in the gender estimation step (S60) may be stored in the gender DB (S60').
- Age information estimated in the age estimation step (S70) may be stored in the age DB (S70 ').
- In the image capture step, an image of the customer is captured from the video input through the image input means.
- Image capture from the video input through the image input means may be achieved, for example, by using the SampleGrabber of DirectX (DirectShow) to capture an image from the input video.
- In a preferred example, the media type (MediaType) of the sample grabber is set to RGB24.
- A video converter filter is then automatically attached in front of the sample grabber filter, so that the image captured by the sample grabber is finally in RGB24.
- AM_MEDIA_TYPE mt;
- ZeroMemory(&mt, sizeof(mt));
- mt.majortype = MEDIATYPE_Video;
- mt.subtype = MEDIASUBTYPE_RGB24; // only accept 24-bit bitmaps
- mt.formattype = FORMAT_VideoInfo;
- the face region of the customer is detected from an image captured and extracted from an image input through an image input means provided at one position on the POS terminal side.
- Methods for face detection include, for example, knowledge-based methods, feature-based methods, template-matching methods, and appearance-based methods.
- an appearance-based method is used.
- the appearance-based method is a method of acquiring a face region and a non-face region from different images, learning the acquired regions to make a learning model, and comparing the input image and the learning model data to detect a face.
- the appearance-based method is known as a relatively high performance method for front and side face detection.
- The face region detection step comprises: (a1) generating a YCbCr color model from the RGB color information of the extracted image, separating color information and brightness information from the generated color model, and detecting a face candidate region based on the brightness information; (a2) defining a rectangular feature model for the detected face candidate region, and detecting a face region based on learning data trained by the AdaBoost learning algorithm on the rectangular feature model; and (a3) determining the detected face region to be a valid face region when the magnitude of the AdaBoost result value (CF_H(x) of Equation 1) exceeds a predetermined threshold value.
- A value used to finely adjust the error judgment rate of the strong classifier.
- the AdaBoost learning algorithm is known as an algorithm that generates a strong classifier with high detection performance through linear combination of weak classifiers.
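The strong classifier just described — a weighted linear combination of weak classifiers whose result value CF_H(x) is later compared against a threshold in step (a3) — can be sketched as follows. The weak classifiers and weights below are toy placeholders, not the patent's trained values.

```python
def strong_classifier_score(x, weak_classifiers, alphas):
    """AdaBoost strong classifier: CF_H(x) = sum_t alpha_t * h_t(x).

    Each weak classifier h_t returns +1 (face-like) or -1
    (non-face-like); alpha_t is the weight learned for that weak
    classifier during boosting."""
    return sum(a * h(x) for h, a in zip(weak_classifiers, alphas))

def is_valid_face(x, weak_classifiers, alphas, threshold):
    """Declare a detected region a valid face when CF_H(x) exceeds the
    (empirically chosen) threshold, as in step (a3)."""
    return strong_classifier_score(x, weak_classifiers, alphas) > threshold
```

In the real detector, each weak classifier thresholds one Haar-like feature value; here scalar inputs stand in for feature responses.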
- In frontal face images, the structural features unique to the face, such as the eyes, nose, and mouth, are evenly distributed throughout the image and are symmetrical.
- In non-frontal face images, by contrast, these features are not symmetrical and are concentrated in a narrow range, the face outline is not a straight line, and background regions are mixed in.
- Therefore, in addition to the existing Haar-like features, the present embodiment adds new Haar-like features that are similar to the existing ones but have asymmetry.
- FIG. 6 shows the basic forms of the existing Haar-like features.
- FIG. 7 is an exemplary photograph of Haar-like features selected for frontal face region detection according to an embodiment of the present invention.
- FIG. 8 is an exemplary photograph of Haar-like features selected for non-frontal face region detection.
- FIG. 9 shows the rectangular Haar-like features newly added by the present embodiment.
- FIG. 10 shows examples of Haar-like features selected for non-frontal face detection from among the Haar-like features of FIG. 9.
- The Haar-like features of the present embodiment have an asymmetric shape and structure, as shown in FIG. 12, and thus reflect the structural characteristics of the non-frontal face, yielding an excellent detection effect for non-frontal faces.
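How a single rectangular Haar-like feature is evaluated can be sketched with an integral image, which lets the sum over any rectangle be computed in four lookups. This is a generic illustration of the technique — the particular feature shown (a symmetric two-rectangle feature) is one of the classic forms, not one of the patent's new asymmetric features, which differ only in using unequal sub-rectangles.

```python
def integral_image(img):
    """Build an integral image: ii[y][x] = sum of img over rows < y and
    cols < x. Sized (h+1) x (w+1) so rectangle sums need no boundary
    checks."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), size w x h,
    from four integral-image lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """A basic two-rectangle Haar-like feature: sum over the left (white)
    half minus sum over the right (black) half of a w x h window.
    Asymmetric variants simply use sub-rectangles of unequal size."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Because every feature costs a constant number of lookups regardless of its size, thousands of features can be evaluated per candidate window, which is what makes the AdaBoost cascade fast enough for real-time detection.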
- FIG. 11 shows Haar-like feature probability curves in the training set for a conventional Haar-like feature and for a Haar-like feature applied to this embodiment.
- (a) is the case of the present embodiment.
- (b) is the existing case.
- The probability curve corresponding to the case of the present embodiment is concentrated in a narrower range.
- This shows that the Haar-like features added in this embodiment are effective for face detection from the viewpoint of the Bayes classification rule.
- FIG. 12 shows the variances and kurtosis values of the probability curves of the newly added Haar-like features and the existing Haar-like features in the training set of non-frontal faces.
- That is, the Haar-like features for detecting the face area further include asymmetric Haar-like features for detecting the non-frontal face area.
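Haar-like features of this kind are typically evaluated in constant time with an integral image. The sketch below shows a two-rectangle feature whose split point can be placed off-center to produce an asymmetric variant; the specific split parameterization is an illustrative assumption, not the patent's feature definition.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row/left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in a rectangle using 4 integral-image lookups."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

def haar_two_rect(ii, top, left, h, w, split):
    """Two-rectangle Haar-like value: left part minus right part.
    split == w // 2 gives the usual symmetric feature; any other split
    gives an asymmetric variant of the kind added for non-frontal faces."""
    left_sum = rect_sum(ii, top, left, h, split)
    right_sum = rect_sum(ii, top, left + split, h, w - split)
    return left_sum - right_sum
```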
- The validity of the detected face is determined by comparing the magnitude of the AdaBoost result value (CF_H(x) of Equation 1) with a predetermined threshold value.
- In Equation 1, the magnitude of CF_H(x) can be used as an important factor for determining the validity of the face.
- The value CF_H(x) is a measure of how close the detected area is to a face, so the validity of the face can be determined by setting a predetermined threshold value on it.
- the predetermined threshold value may be empirically set using the learning face group.
- the facial feature point is detected in the detected face region.
- The facial feature point detection step (S50) is performed by landmark search of the active shape model (ASM) method, carried out using the AdaBoost algorithm.
- ASM active shape model
- The detection of a facial feature point includes: (b1) defining the position of the current feature point as (x_l, y_l) and classifying, with a classifier, all possible partial windows of n×n pixel size in the vicinity of the current feature point position; (b2) calculating a candidate position of the feature point according to Equation 2 below; and (b3) setting (x'_l, y'_l) as the new feature point if the condition of Equation 3 is satisfied, and otherwise maintaining the current feature point position (x_l, y_l).
- N_pass: the number of classifier stages through which the partial window has passed.
- Methods for detecting facial feature points include, for example, a method of detecting each feature point individually and a method of detecting correlated feature points simultaneously.
- This embodiment uses the active shape model (ASM) method, which is preferable for facial feature point detection in terms of speed and accuracy.
- ASM Active Shape Model
- Since the feature point search of the existing ASM uses a profile at each feature point, detection is stable only in high-quality images.
- However, an image extracted from video input through an image input means such as a camera may be of low resolution and low quality.
- Therefore, by searching for feature points with the AdaBoost method, feature points can be detected easily even in such images.
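Equations 2 and 3 referenced above are not reproduced in this text, but the general shape of such a search can be sketched: every n×n partial window around the current landmark is scored by a classifier, and the landmark moves to a confidence-weighted candidate position only when enough classifier confidence accumulates. In the sketch below, `score_fn` is a stand-in for the cascade confidence, and the acceptance rule and window geometry are assumptions for illustration only.

```python
import numpy as np

def refine_landmark(image, x, y, score_fn, a=3, b=3, n=7, accept=0.5):
    """Search the (2a+1) x (2b+1) neighbourhood of landmark (x, y):
    score every n x n partial window with score_fn and move to the
    confidence-weighted mean of the candidate centres (an Equation-2
    style update), keeping the move only when the accumulated weight
    passes a threshold (an Equation-3 style condition)."""
    half = n // 2
    positions, weights = [], []
    for dy in range(-b, b + 1):
        for dx in range(-a, a + 1):
            cy, cx = y + dy, x + dx
            win = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
            if win.shape != (n, n):
                continue                      # window falls off the image
            positions.append((cx, cy))
            weights.append(score_fn(win))
    weights = np.asarray(weights, dtype=float)
    if weights.sum() <= accept:
        return x, y                           # keep the current landmark
    positions = np.asarray(positions, dtype=float)
    new_x, new_y = (weights[:, None] * positions).sum(0) / weights.sum()
    return int(round(new_x)), int(round(new_y))
```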
- FIG. 13 shows profile pictures applied in the existing ASM method to an image of low resolution or poor quality.
- FIG. 14 shows the patterns around each landmark used by AdaBoost for the landmark search of the present invention.
- In this way, facial feature points defined for each position of the eyebrows, eyes, nose, and mouth may be detected.
- For example, the feature points (0, 1, 2, 3) defining the face area, the feature points (4, 5, 6, 7, 12, 13, 14, 15) defining the eyes, the feature points (22, 23, 24, 25, 26, 27) defining the eyebrows, the feature points (10, 11, 16, 17, 18) defining the nose, and the feature points (8, 9, 19, 20, 21) defining the mouth can be detected as facial feature points.
- image and facial feature point input (S61)
- cropping of the face region for gender estimation (S62)
- size normalization of the cropped face region image (S63)
- histogram normalization (S64)
- Methods for gender estimation include, for example, a view-based method using the whole human face and a geometric feature-based method using only the geometric features of the face.
- In this embodiment, gender estimation is performed by a view-based gender classification method using SVM (support vector machine) learning, in which the detected face region is normalized to form a facial feature vector and the gender is predicted with it.
- SVM: Support Vector Machine
- the SVM method may be classified into a support vector classifier (SVC) and a support vector regression (SVR).
- SVC support vector classifier
- SVR support vector regression
- The gender estimation step (S60) specifically includes: (c-a1) cropping a face region for gender estimation from the detected face region based on the detected facial feature points; (c-a2) normalizing the size of the cropped face region for gender estimation; (c-a3) normalizing the histogram of the size-normalized face region for gender estimation; and (c-a4) constructing an input vector from the size- and histogram-normalized face region for gender estimation and estimating gender using a pre-trained SVM algorithm.
- In step (c-a1), the face region is cropped using the input image and the facial feature points. For example, as shown in FIG. 16, with half of the distance between the left and right eyes taken as 1, the face area to be cropped is calculated.
- In step (c-a2), the cropped face region is normalized to a size of 12×21.
- In step (c-a3), in order to minimize the effect of illumination, histogram normalization is performed, which equalizes the number of pixels having each density value.
- In step (c-a4), a 252-dimensional input vector is constructed from the normalized 12×21 face image, and the gender is estimated using a pre-trained SVM.
- The gender is estimated as male if the calculated result of the classifier of Equation 4 is greater than zero, and as female otherwise.
- y_i: the gender label of the i-th learning sample, which is 1 for male and -1 for female.
- the kernel function may use a Gaussian Radial Basis Function (GRBF) defined in Equation 5 below.
- GRBF Gaussian Radial Basis Function
- Besides the Gaussian radial basis function (GRBF), the kernel function may be a polynomial kernel, but the GRBF is preferable in consideration of identification performance.
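The decision function of Equation 4 with the GRBF kernel of Equation 5 can be written down directly. The sketch below uses hypothetical one-dimensional support vectors, weights, and bias for readability; in the patent the inputs are the 252-dimensional vectors from the 12×21 face image, and the alphas and bias come from SVM training.

```python
import numpy as np

def grbf_kernel(x, x_prime, sigma=1.0):
    """Gaussian radial basis function: K(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    d = np.asarray(x, float) - np.asarray(x_prime, float)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def svm_decide(support_x, y, alpha, b, x):
    """Equation-4 style decision: f(x) = sum_i alpha_i y_i K(x_i, x) + b.
    Gender is read as male when f(x) > 0 (y_i = +1 male, -1 female)."""
    f = sum(a * yi * grbf_kernel(xi, x)
            for a, yi, xi in zip(alpha, y, support_x)) + b
    return 'male' if f > 0 else 'female'
```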
- GRBF Gaussian Radial Basis Function
- SVM: Support Vector Machine
- The SVM is a classification method that derives the boundary between two classes in data containing two classes, and is known as a learning algorithm for pattern classification and regression.
- The basic learning principle of the SVM is to find an optimal linear hyperplane with minimal expected classification error for unseen test samples, that is, with good generalization performance.
- The linear SVM uses a classification method that finds the linear function of the lowest order.
- In order to determine the learning result uniquely, the constraint of Equation 2 below is imposed.
- The minimum distance between a learning sample and the hyperplane is represented by Equation 3 below, so Equation 4 below necessarily holds.
- Since w and b must be determined so as to maximize the minimum distance while correctly classifying all learning samples, the problem for w and b is formulated as in Equation 5 below.
- Minimizing the objective function maximizes the value of Equation 4, which is the minimum distance.
- The constraint is shown in Equation 7 below.
- K(x, x'): a nonlinear kernel function
- Although the AdaBoost method may also be used in the above process, it is more preferable to use the SVM method in consideration of classifier performance and generalization performance.
- In tests, the AdaBoost method showed 10-15% lower performance than the SVM method.
- image and facial feature point input (S71)
- cropping of the face region for age estimation (S72)
- size normalization of the cropped face region image (S73)
- local illumination correction (S74)
- The estimation of age may specifically include: (c-b1) cropping a face region for age estimation from the detected face region based on the detected facial feature points; (c-b2) normalizing the size of the cropped face region for age estimation; (c-b3) performing local illumination correction on the size-normalized face region for age estimation; (c-b4) constructing an input vector from the size-normalized and illumination-corrected face region for age estimation and generating a feature vector by projecting it into an age-manifold space; and (c-b5) estimating age by applying quadratic regression to the generated feature vector.
- In step (c-b1), the face region is cropped using the input image and the facial feature points.
- For example, the face region is cropped by extending from the two eye points and the mouth point upward by 0.8, downward by 0.2, left by 0.1, and right by 0.1, respectively.
- In step (c-b2), the cropped face region is normalized to a size of 64×64.
- In step (c-b3), in order to reduce the influence of illumination, local illumination correction is performed according to Equation 6 below.
- I'(x, y) = (I(x, y) - M) / V × 10 + 127
- Here, the standard deviation V is a characteristic value representing the degree to which a quantity is scattered around its mean value M; mathematically, V is calculated as in Equation 9.
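Equation 6 can be implemented directly; the patch is shifted to mean 127 and scaled to a standard deviation of 10. The clipping to the 0-255 range and the zero-variance guard are added assumptions, not part of the equation.

```python
import numpy as np

def local_illumination_correction(patch):
    """I'(x, y) = (I(x, y) - M) / V * 10 + 127, with M the mean and V the
    standard deviation of the patch, clipped to the displayable 0-255 range."""
    patch = patch.astype(np.float64)
    m, v = patch.mean(), patch.std()
    if v == 0:
        return np.full_like(patch, 127.0)   # flat patch: no contrast to normalize
    return np.clip((patch - m) / v * 10 + 127, 0, 255)
```

In the age pipeline this would be applied locally (per block) over the 64×64 face region rather than once globally.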
- In step (c-b4), a 4096-dimensional input vector is constructed from the 64×64 face image, and a 50-dimensional feature vector is generated by projecting it into the pre-learned age-manifold space.
- The age estimation is based on the assumption that the characteristics of the human aging process reflected in face images can be expressed as patterns following some low-dimensional distribution.
- X is an input vector
- Y is a feature vector
- P is the projection matrix onto the age manifold, trained using CEA.
- X is an m×n matrix, and each column x_i represents one face image.
- The manifold learning step obtains a projection matrix for representing the m-dimensional face vector as a d-dimensional aging feature vector, where d ≪ m.
- In general, the image dimension m is much larger than the number n of images.
- Therefore, the m×m matrix XX^T is a degenerate (singular) matrix.
- C_pca is an m×m matrix.
- The d eigenvectors are selected in descending order of eigenvalue to form the matrix W_PCA.
- W_PCA is an m×d matrix.
- W_s denotes the relationship between face images belonging to the same age group, and W_d denotes the relationship between face images belonging to different age groups.
- Dist(X_i, X_j) is defined as in Equation 12 below.
- The eigenvectors corresponding to the d largest eigenvalues become the CEA basis vectors.
- W_CEA is an m×d matrix.
- The projection matrix P_mat is defined as in Equation 15 below.
- The projection matrix P_mat is used to obtain the aging feature for each face vector X.
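The CEA projection itself is patent-specific and its equations are not reproduced in this text, but the overall shape of the pipeline — mean-centered face vectors, a projection basis learned via the small n×n Gram matrix to avoid the degenerate m×m covariance noted above, then a low-dimensional aging feature per face — can be illustrated with a PCA-based stand-in:

```python
import numpy as np

def pca_projection(X, d):
    """Learn an m x d projection from the n columns of X (each column an
    m-dimensional face vector), keeping the d top principal directions."""
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # Eigen-decompose the small n x n Gram matrix instead of the
    # degenerate m x m covariance (m >> n, as noted in the text).
    gram = Xc.T @ Xc
    vals, vecs = np.linalg.eigh(gram)
    order = np.argsort(vals)[::-1][:d]
    W = Xc @ vecs[:, order]                  # m x d basis
    W /= np.linalg.norm(W, axis=0)           # unit-length columns
    return W, mean

def project(W, mean, x):
    """d-dimensional aging feature Y = W^T (x - mean)."""
    return W.T @ (x[:, None] - mean)
```

In the patent the learned d is 50 and m is 4096; the PCA step here corresponds to the W_PCA stage, with the CEA refinement omitted.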
- In step (c-b5), the age is estimated by applying quadratic regression according to Equation 7 below.
- b_0, b_1, and b_2 are precomputed from the learning data as follows.
- The quadratic regression model is shown in Equation 17 below.
- In Equation 17, one symbol denotes the age of the i-th learning image and the other denotes the feature vector of the i-th learning image.
- N is the number of learning samples.
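The quadratic regression of step (c-b5) can be sketched as a least-squares fit of age against the aging feature. For readability a scalar feature f is used in place of the 50-dimensional feature vector; that reduction is an illustrative assumption.

```python
import numpy as np

def fit_quadratic_regression(f, age):
    """Fit age ~ b0 + b1*f + b2*f^2 by least squares.
    f: 1-D scalar aging features of the learning images, age: their ages."""
    A = np.stack([np.ones_like(f), f, f ** 2], axis=1)  # design matrix [1, f, f^2]
    b, *_ = np.linalg.lstsq(A, age, rcond=None)
    return b                                             # (b0, b1, b2)

def estimate_age(b, f0):
    """Apply the fitted quadratic model to a new aging feature f0."""
    return b[0] + b[1] * f0 + b[2] * f0 ** 2
```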
- The customer's gender information estimated by the above-described process is output to and stored in the gender DB, and the customer's age information is output to and stored in the age DB.
- the estimated gender information and age information may be output to the statistics generating module to generate statistics in real time.
- Embodiments of the present invention include a computer readable recording medium including program instructions for performing various computer-implemented operations.
- the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
- the recording medium may be one specially designed and configured for the present invention, or may be known and available to those skilled in computer software.
- Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- The recording medium may also be a transmission medium, such as an optical or metal wire or a waveguide, including a carrier wave that transmits a signal specifying program instructions, data structures, and the like.
- Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Business, Economics & Management (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Accounting & Taxation (AREA)
- Finance (AREA)
- Strategic Management (AREA)
- Development Economics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- General Business, Economics & Management (AREA)
- Marketing (AREA)
- Geometry (AREA)
- Economics (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Game Theory and Decision Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Cash Registers Or Receiving Machines (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
Claims (20)
- A method of generating customer personal information for a POS system comprising a POS terminal and a server integrated with or network-connected to the POS terminal, the method comprising: (a) detecting a face region of the customer from an image extracted from video input through an image input means provided at a position on the POS terminal side; (b) detecting facial feature points in the detected face region; and (c) generating personal information by estimating at least one of the gender and age of the customer using the detected face region and the detected facial feature points.
- The method of claim 1, wherein step (a) comprises: (a1) generating a YCbCr color model from the RGB color information of the extracted image, separating color information and brightness information from the generated color model, and detecting a face candidate region based on the brightness information; and (a2) defining a quadrilateral feature point model for the detected face candidate region and detecting a face region based on learning data trained on the quadrilateral feature point model with the AdaBoost learning algorithm.
- The method of claim 2, wherein in step (a2), the Haar-like features for detecting the face region further include asymmetric Haar-like features for detecting a non-frontal face region.
- The method of claim 1, wherein step (b) is performed by landmark search of an active shape model (ASM) method, carried out using the AdaBoost algorithm.
- The method of claim 5, wherein the detection of the facial feature point comprises: (b1) defining the position of the current feature point as (x_l, y_l) and classifying, with a classifier, partial windows of n*n pixel size in the vicinity of the current feature point position; (b2) calculating a candidate position of the feature point according to Equation 2 below; and (b3) setting (x'_l, y'_l) as the new feature point if the condition of Equation 3 below is satisfied, and maintaining the current feature point position (x_l, y_l) otherwise. [Equation 2] [Equation 3] (where a: the maximum neighborhood distance searched in the x-axis direction; b: the maximum neighborhood distance searched in the y-axis direction; x_dx,dy: the partial window centered on the point offset by (dx, dy) from (x_l, y_l); N_all: the total number of stages of the classifier; N_pass: the number of stages the partial window has passed; c: a constant value for limiting the confidence value of a partial window that has not passed to the last stage)
- The method of claim 1, wherein in step (c), the estimation of gender comprises: (c-a1) cropping a face region for gender estimation from the detected face region based on the detected facial feature points; (c-a2) normalizing the size of the cropped face region for gender estimation; (c-a3) normalizing the histogram of the size-normalized face region for gender estimation; and (c-a4) constructing an input vector from the size- and histogram-normalized face region for gender estimation and estimating gender using a pre-trained SVM algorithm.
- The method of claim 1, wherein in step (c), the estimation of age comprises: (c-b1) cropping a face region for age estimation from the detected face region based on the detected facial feature points; (c-b2) normalizing the size of the cropped face region for age estimation; (c-b3) performing local illumination correction on the size-normalized face region for age estimation; (c-b4) constructing an input vector from the size-normalized and illumination-corrected face region for age estimation and generating a feature vector by projecting it into an age-manifold space; and (c-b5) estimating age by applying quadratic regression to the generated feature vector.
- The method of claim 1, further comprising, after step (c): (d) generating statistical information by correlating at least two of each customer's gender, age, purchase time, and purchased product.
- The method of claim 9, wherein the purchase time or purchased product information is recognized from the POS terminal.
- A computer-readable recording medium on which a program for executing each step of the method according to any one of claims 1 to 10 is recorded.
- A POS system using the customer personal information generating method according to any one of claims 1 to 10.
- The POS system of claim 12, comprising the POS terminal and a local operation server connected to the POS terminal.
- The POS system of claim 12, comprising the POS terminals respectively provided in a plurality of chain stores and a central operation server connected to the POS terminals through a network.
- The POS system of claim 12, comprising POS terminals and local operation servers respectively provided in a plurality of chain stores, and a central operation server connected to the POS terminals or the local operation servers through a network.
- An apparatus for generating customer personal information for a POS system comprising a POS terminal and a server integrated with or network-connected to the POS terminal, the apparatus comprising: a face region detection module that detects the customer's face region from an image extracted from video input through an image input means provided at a position on the POS terminal side; a facial feature point detection module that detects facial feature points in the detected face region; and a personal information generation module that generates personal information by estimating at least one of the customer's gender and age using the detected face region and the detected facial feature points.
- The apparatus of claim 16, further comprising a statistics generation module that generates statistical information by correlating at least two of each customer's gender, age, purchase time, and purchased product.
- The apparatus of claim 17, wherein the purchase time or purchased product information is recognized from the POS terminal.
- A method of generating customer personal information for a customer management system comprising a customer-facing terminal and a server integrated with or network-connected to the terminal, wherein the customer's facial features are detected from an image extracted from video input through an image input means provided at a position on the customer-facing terminal, and the customer's personal information regarding at least one of gender and age is generated.
- An apparatus for generating customer personal information for a customer management system comprising a customer-facing terminal and a server integrated with or network-connected to the terminal, wherein the apparatus detects the customer's facial features from an image extracted from video input through an image input means provided at a position on the customer-facing terminal and generates the customer's personal information regarding at least one of gender and age.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/003,718 US20140140584A1 (en) | 2011-08-09 | 2012-08-02 | Method and apparatus for generating personal information of client, recording medium thereof, and pos systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020110078962A KR101216115B1 (ko) | 2011-08-09 | 2011-08-09 | 고객 인적정보 생성방법 및 생성장치, 그 기록매체 및 포스 시스템 |
KR10-2011-0078962 | 2011-08-09 |
Publications (3)
Publication Number | Publication Date |
---|---|
WO2013022226A2 true WO2013022226A2 (ko) | 2013-02-14 |
WO2013022226A3 WO2013022226A3 (ko) | 2013-04-04 |
WO2013022226A4 WO2013022226A4 (ko) | 2013-05-30 |
Family
ID=47669046
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2012/006177 WO2013022226A2 (ko) | 2011-08-09 | 2012-08-02 | 고객 인적정보 생성방법 및 생성장치, 그 기록매체 및 포스 시스템 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20140140584A1 (ko) |
KR (1) | KR101216115B1 (ko) |
WO (1) | WO2013022226A2 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105138967A (zh) * | 2015-08-05 | 2015-12-09 | 三峡大学 | 基于人眼区域活动状态的活体检测方法和装置 |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101449744B1 (ko) * | 2013-09-06 | 2014-10-15 | 한국과학기술원 | 영역 기반 특징을 이용한 얼굴 검출 장치 및 방법 |
JP6270182B2 (ja) * | 2014-07-17 | 2018-01-31 | Necソリューションイノベータ株式会社 | 属性要因分析方法、装置、およびプログラム |
US20170092150A1 (en) * | 2015-09-30 | 2017-03-30 | Sultan Hamadi Aljahdali | System and method for intelligently interacting with users by identifying their gender and age details |
CN107346408A (zh) * | 2016-05-05 | 2017-11-14 | 鸿富锦精密电子(天津)有限公司 | 基于脸部特征的年龄识别方法 |
US11199907B2 (en) | 2017-05-29 | 2021-12-14 | Abhinav Arvindkumar AGGARWAL | Method and a system for assisting in performing financial services |
US11169661B2 (en) | 2017-05-31 | 2021-11-09 | International Business Machines Corporation | Thumbnail generation for digital images |
CN109886095A (zh) * | 2019-01-08 | 2019-06-14 | 浙江新再灵科技股份有限公司 | 一种基于视觉的轻型卷积神经网络的乘客属性识别系统及方法 |
CN110659615A (zh) * | 2019-09-26 | 2020-01-07 | 上海依图信息技术有限公司 | 基于人脸识别的客群流量及结构化分析系统及方法 |
US11816668B2 (en) * | 2022-01-03 | 2023-11-14 | Bank Of America Corporation | Dynamic contactless payment based on facial recognition |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070049501A (ko) * | 2005-11-08 | 2007-05-11 | 삼성전자주식회사 | 성별을 이용한 얼굴 인식 방법 및 장치 |
JP2007257585A (ja) * | 2006-03-27 | 2007-10-04 | Fujifilm Corp | 画像処理方法および装置ならびにプログラム |
JP2010020666A (ja) * | 2008-07-14 | 2010-01-28 | Seiko Epson Corp | 広告効果計測システム、広告効果計測装置、広告効果計測装置の制御方法およびそのプログラム |
KR20110029805A (ko) * | 2009-09-16 | 2011-03-23 | 한국전자통신연구원 | 시각 기반 사용자 연령대 구분 및 추정 방법 |
- 2011-08-09: KR application KR1020110078962A filed; patent KR101216115B1 (not active, IP right cessation)
- 2012-08-02: US application US14/003,718 filed; publication US20140140584A1 (abandoned)
- 2012-08-02: PCT application PCT/KR2012/006177 filed; publication WO2013022226A2 (active application filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070049501A (ko) * | 2005-11-08 | 2007-05-11 | 삼성전자주식회사 | 성별을 이용한 얼굴 인식 방법 및 장치 |
JP2007257585A (ja) * | 2006-03-27 | 2007-10-04 | Fujifilm Corp | 画像処理方法および装置ならびにプログラム |
JP2010020666A (ja) * | 2008-07-14 | 2010-01-28 | Seiko Epson Corp | 広告効果計測システム、広告効果計測装置、広告効果計測装置の制御方法およびそのプログラム |
KR20110029805A (ko) * | 2009-09-16 | 2011-03-23 | 한국전자통신연구원 | 시각 기반 사용자 연령대 구분 및 추정 방법 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105138967A (zh) * | 2015-08-05 | 2015-12-09 | 三峡大学 | 基于人眼区域活动状态的活体检测方法和装置 |
CN105138967B (zh) * | 2015-08-05 | 2018-03-27 | 三峡大学 | 基于人眼区域活动状态的活体检测方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
WO2013022226A4 (ko) | 2013-05-30 |
WO2013022226A3 (ko) | 2013-04-04 |
US20140140584A1 (en) | 2014-05-22 |
KR101216115B1 (ko) | 2012-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013022226A2 (ko) | 고객 인적정보 생성방법 및 생성장치, 그 기록매체 및 포스 시스템 | |
WO2013009020A2 (ko) | 시청자 얼굴 추적정보 생성방법 및 생성장치, 그 기록매체 및 3차원 디스플레이 장치 | |
WO2019085495A1 (zh) | 微表情识别方法、装置、系统及计算机可读存储介质 | |
WO2018143707A1 (ko) | 메이크업 평가 시스템 및 그의 동작 방법 | |
WO2019216593A1 (en) | Method and apparatus for pose processing | |
WO2015102361A1 (ko) | 얼굴 구성요소 거리를 이용한 홍채인식용 이미지 획득 장치 및 방법 | |
WO2018143630A1 (ko) | 상품을 추천하는 디바이스 및 방법 | |
WO2015133699A1 (ko) | 객체 식별 장치, 그 방법 및 컴퓨터 프로그램이 기록된 기록매체 | |
WO2018048054A1 (ko) | 단일 카메라 기반의 3차원 영상 해석에 기초한 가상현실 인터페이스 구현 방법, 단일 카메라 기반의 3차원 영상 해석에 기초한 가상현실 인터페이스 구현 장치 | |
WO2018062647A1 (ko) | 정규화된 메타데이터 생성 장치, 객체 가려짐 검출 장치 및 그 방법 | |
WO2020141729A1 (ko) | 신체 측정 디바이스 및 그 제어 방법 | |
WO2018155999A2 (en) | Moving robot and control method thereof | |
WO2021132851A1 (ko) | 전자 장치, 두피 케어 시스템 및 그들의 제어 방법 | |
WO2019027240A1 (en) | ELECTRONIC DEVICE AND METHOD FOR PROVIDING A RESEARCH RESULT THEREOF | |
EP3740936A1 (en) | Method and apparatus for pose processing | |
WO2017188800A1 (ko) | 이동 로봇 및 그 제어방법 | |
WO2015088179A1 (ko) | 얼굴의 키 포인트들에 대한 포지셔닝 방법 및 장치 | |
EP3440593A1 (en) | Method and apparatus for iris recognition | |
WO2019088555A1 (ko) | 전자 장치 및 이를 이용한 눈의 충혈도 판단 방법 | |
AU2020244635B2 (en) | Mobile robot control method | |
WO2020235852A1 (ko) | 특정 순간에 관한 사진 또는 동영상을 자동으로 촬영하는 디바이스 및 그 동작 방법 | |
WO2020106010A1 (ko) | 이미지 분석 시스템 및 분석 방법 | |
WO2017119578A1 (en) | Method for providing services and electronic device thereof | |
WO2021157902A1 (en) | Device-free localization robust to environmental changes | |
AU2018310111B2 (en) | Electronic device and method for providing search result thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12821548 Country of ref document: EP Kind code of ref document: A2 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14003718 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/06/2014) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12821548 Country of ref document: EP Kind code of ref document: A2 |