KR101343875B1 - Analysis device of user cognition and method for analysis of user cognition - Google Patents

Analysis device of user cognition and method for analysis of user cognition

Info

Publication number
KR101343875B1
Authority
KR
South Korea
Prior art keywords
user
gaze
object
image
objects
Prior art date
Application number
KR1020110129936A
Other languages
Korean (ko)
Other versions
KR20130063430A (en)
Inventor
이민호
곽호완
장영민
황병훈
이상일
Original Assignee
경북대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 경북대학교 산학협력단 filed Critical 경북대학교 산학협력단
Priority to KR1020110129936A
Priority claimed from PCT/KR2012/010025
Publication of KR20130063430A
Application granted
Publication of KR101343875B1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06K - RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00597 - Acquiring or recognising eyes, e.g. iris verification
    • G06K 9/00624 - Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K 9/00664 - Recognising scenes such as could be captured by a camera operated by a pedestrian or robot, including objects at substantially different ranges from the camera
    • G06K 9/00671 - Recognising scenes such as could be captured by a camera operated by a pedestrian or robot, for providing information about objects in the scene to a user, e.g. as in augmented reality applications
    • G06K 9/36 - Image preprocessing, i.e. processing the image information without deciding about the identity of the image
    • G06K 9/46 - Extraction of features or characteristics of the image
    • G06K 9/4652 - Extraction of features or characteristics of the image related to colour

Abstract

A user cognition analysis apparatus is disclosed. The apparatus includes an input unit that receives an image displayed to a user; a region-of-interest detector that detects the user's gaze path with respect to the image and detects the user's region of interest in the image using the detected gaze path; an object recognizer that recognizes a plurality of objects in the received image; a recognition determiner that compares the detected region of interest with the recognized objects and determines whether the user has recognized each of the objects; and an output unit that displays information about any object the user has not recognized, according to the determination result.

Description

ANALYSIS DEVICE OF USER COGNITION AND METHOD FOR ANALYSIS OF USER COGNITION

The present invention relates to a user cognition analysis apparatus and a user cognition analysis method, and more particularly, to a user cognition analysis apparatus and method that can detect important changed information the user has inadvertently missed and bring it to the user's attention.

Recently, an important research issue in intention modeling and recognition has been to create a new paradigm in Human Computer Interface (HCI) and Human-Robot Interaction (HRI).

In a modern living environment, a great deal of information is presented to users. Because this information arrives in effectively unlimited quantities, users often fail to recognize important information when it changes. This difficulty in detecting changes on a screen is called change blindness.

Therefore, there is a demand for a method of easily bringing such changed information to the user's attention.

Accordingly, the present invention provides a user cognition analysis apparatus and a user cognition analysis method that can detect important changed information missed by the user and present it to the user.

To achieve the above object, the user cognition analysis apparatus according to the present invention includes an input unit that receives an image displayed to the user; a region-of-interest detector that detects the user's gaze path with respect to the image and detects the user's region of interest in the image using the detected gaze path; an object recognizer that recognizes a plurality of objects in the input image; a recognition determiner that compares the detected region of interest with the recognized objects and determines whether the user has recognized each of the objects; and an output unit that displays information about any object not recognized by the user, according to the determination result.

In this case, the region-of-interest detector may extract gaze feature information from the detected gaze path and detect the user's region of interest using the extracted gaze feature information.

In this case, the gaze feature information preferably includes at least one of pupil change, eye blinking, gaze fixation point, path of gaze fixation points, time the gaze stays in the same area, and number of times the gaze stays in the same area.

In this case, the region-of-interest detector preferably detects the user's region of interest based on the time the gaze stays in the same area and the number of times the gaze stays in the same area.

The object recognizer may include an image information extractor that extracts at least one piece of image information among brightness, edge, symmetry, and complementary colors of the input image; a CSD processor that performs a center-surround difference (CSD) and normalization process on the extracted image information to output at least one feature map among a brightness feature map, a directional feature map, a symmetry feature map, and a color feature map; an ICA processor that generates a saliency map by performing independent component analysis (ICA) on the output feature maps; and an extractor that recognizes salient regions on the saliency map as objects.

The object recognizer may detect the plurality of objects in the input image using Incremental Hierarchical MAX (IHMAX).

Preferably, the object recognizer recognizes the plurality of objects on the input image in real time, and the recognition determiner identifies a newly detected object or an object with displacement among the recognized objects and determines any such object not included in the detected region of interest as an unmapped object.

Meanwhile, the user cognition analysis method according to the present embodiment includes receiving an image displayed to the user; detecting the user's gaze path with respect to the image; detecting the user's region of interest in the image using the detected gaze path; comparing the detected region of interest with a plurality of recognized objects to determine whether the user has recognized each object; and displaying information about any object not recognized by the user, according to the determination result.

The method may further include recognizing a plurality of objects on the input image, comparing the detected region of interest with the recognized objects to determine which objects are not mapped to the detected region of interest, and displaying the unmapped objects.

In this case, the detecting of the region of interest may include extracting gaze feature information from the detected gaze path and detecting the user's region of interest using the extracted gaze feature information.

In this case, the gaze feature information preferably includes at least one of pupil change, eye blinking, gaze fixation point, path of gaze fixation points, time the gaze stays in the same area, and number of times the gaze stays in the same area.

In this case, the detecting of the region of interest may include detecting the user's region of interest based on the time the gaze stays in the same area and the number of times the gaze stays in the same area.

The recognizing of the objects may include extracting at least one piece of image information among brightness, edge, symmetry, and complementary colors of the input image; performing a center-surround difference (CSD) and normalization process on the extracted image information to output at least one feature map among a brightness feature map, a directional feature map, a symmetry feature map, and a color feature map; generating a saliency map by performing independent component analysis on the output feature maps; and recognizing salient regions on the saliency map as objects.

Meanwhile, in the recognizing of the objects, the plurality of objects in the input image are preferably detected using Incremental Hierarchical MAX (IHMAX).

The recognizing of the objects may include recognizing the plurality of objects on the input image in real time, and the determining may include identifying a newly detected object or an object with displacement among the recognized objects and determining any such object not included in the detected region of interest as an unmapped object.

Therefore, the user cognition analysis apparatus and the user cognition analysis method according to the present embodiment can detect important changed information missed by the user and present it to the user, which can help improve the user's cognition.

FIG. 1 is a block diagram showing the configuration of a user cognition analysis apparatus according to an embodiment of the present invention;
FIG. 2 is a view for explaining specific forms of the input unit and the imaging unit of FIG. 1;
FIG. 3 is a view for explaining the operation of the user cognition analysis apparatus according to an embodiment of the present invention;
FIG. 4 is a diagram for describing the operation of the region-of-interest detector of FIG. 1;
FIG. 5 is a view for explaining the detailed configuration of the object recognizer of FIG. 1;
FIG. 6 is a view for explaining the operation of the recognition determiner of FIG. 1; and
FIG. 7 is a flowchart illustrating a user cognition analysis method according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention will now be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram showing the configuration of a user cognition analysis apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the user cognition analysis apparatus 100 according to the present embodiment includes an input unit 110, an imaging unit 120, an output unit 130, a storage unit 140, a region-of-interest detector 150, an object recognizer 160, a recognition determiner 170, and a controller 180.

The input unit 110 receives an image displayed to the user. In detail, the input unit 110 may receive an image displayed on an external display device, or may directly capture the actual environment viewed by the user using an image pickup device.

The imaging unit 120 captures the user's pupil (specifically, the eye region). Although the present embodiment describes capturing the user's pupil with the imaging unit inside the user cognition analysis apparatus 100, in an implementation a pupil image captured by an external eye tracker may be input instead. Also, although a still image is described here, the imaging unit 120 may capture the user's pupil in video form.

The output unit 130 displays information about objects not recognized by the user. In detail, the output unit 130 may display information about an object determined to be unmapped by the recognition determiner 170, described later (for example, a notification that there is an object the user has not recognized, or the location of that object). The output unit 130 may be implemented as a display device such as a monitor, and may display this information together with the image received from the input unit 110.

The storage 140 stores the input image and the captured image. In detail, the storage 140 may store the image input through the input unit 110 and the image captured by the imaging unit 120. In addition, the storage 140 may store the gaze feature information and the region of interest produced by the region-of-interest detector 150, described later, store information about the objects recognized by the object recognizer 160, described later, and temporarily store the determination result of the recognition determiner 170, described later.

In addition, the storage 140 may store learning information of the NN learner (or NN model). The storage 140 may be a memory mounted in the user cognition analysis apparatus 100, for example, a ROM, a flash memory, or an HDD, or may be an external HDD or memory card connected to the user cognition analysis apparatus 100, for example, a flash memory (M/S, xD, SD, etc.) or a USB memory.

Here, the NN learner receives a plurality of input items (for example, 'fixation length' and 'fixation count') and detects the user's region of interest using a neural network algorithm. Although the present embodiment describes detecting the region of interest with the NN learner, another type of learner may be used instead.

The region-of-interest detector 150 detects the user's gaze path and detects the user's region of interest in the image using the detected gaze path. In detail, the region-of-interest detector 150 may extract gaze feature information from the detected gaze path.

Here, the gaze feature information includes information such as pupil change, eye blinking, gaze fixation point, 'time to first fixation', 'fixation length', 'fixation count', 'observation length', 'observation count', 'fixations before', and 'participant %'.

Here, 'time to first fixation' is the time from the presentation of the stimulus (the input image) until the user's gaze is first fixed; 'fixation length' is the time the user's gaze stays within a specific area of interest (AOI) of the input image; 'fixation count' is the number of times the user's gaze stays within an AOI; 'observation length' is the total time the user's gaze stays within a specific AOI; 'observation count' is the number of times the user's gaze returns to a specific AOI; 'fixations before' is the number of fixations made before the gaze first stays within an AOI; and 'participant %' is the proportion of users whose gaze stayed within an AOI at least once, that is, the fixation frequency of users' gazes for that AOI. Methods of extracting each piece of gaze feature information from the user's gaze are widely known, and a detailed description thereof is omitted.
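
To make these metrics concrete, the following minimal sketch (not part of the patent) derives 'fixation length' and 'fixation count' for one AOI from a stream of timestamped gaze samples; the sample format, the AOI rectangle, and the run-based grouping of samples into stays are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float   # timestamp in seconds
    x: float   # gaze x-coordinate in pixels
    y: float   # gaze y-coordinate in pixels

def aoi_fixation_metrics(samples, aoi):
    """Compute 'fixation length' and 'fixation count' for one AOI.

    aoi is an (x0, y0, x1, y1) rectangle; a "fixation" here is simply a
    maximal run of consecutive samples inside the AOI (a real system
    would first apply a dispersion- or velocity-based fixation filter).
    """
    x0, y0, x1, y1 = aoi
    total_time = 0.0   # 'fixation length': time the gaze stays in the AOI
    visits = 0         # 'fixation count': number of separate stays
    inside_prev = False
    for prev, cur in zip(samples, samples[1:]):
        inside = x0 <= cur.x <= x1 and y0 <= cur.y <= y1
        if inside:
            total_time += cur.t - prev.t
            if not inside_prev:
                visits += 1
        inside_prev = inside
    return total_time, visits

# Usage: the gaze dwells in the AOI, leaves once, and returns.
samples = [GazeSample(0.00, 100, 100), GazeSample(0.02, 105, 98),
           GazeSample(0.04, 400, 300), GazeSample(0.06, 102, 101)]
print(aoi_fixation_metrics(samples, (80, 80, 160, 160)))  # (0.04, 2)
```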

The region-of-interest detector 150 may detect the user's region of interest based on the time the gaze stays in the same area and the number of times the gaze stays in the same area.

For example, a user's region of interest is an area where the user's gaze stays for a long time or to which the gaze returns many times. Accordingly, the region-of-interest detector 150 may feed the 'fixation length' (the time the gaze stays in the same area) and the 'fixation count' (the number of times the gaze stays in the same area) from the extracted gaze feature information into the NN learner to detect the user's region of interest.
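
The patent does not disclose the internals of the NN learner; as a hedged illustration, the sketch below trains a small feed-forward neural network (via scikit-learn, our own choice) on the two inputs named above, 'fixation length' and 'fixation count', to score candidate areas as regions of interest. All training values are invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Training data: [fixation_length_sec, fixation_count] per candidate area,
# labelled 1 if the area was a region of interest (illustrative values).
X = np.array([[0.1, 1], [0.2, 1], [1.5, 3], [2.0, 4], [0.3, 2], [2.5, 5]])
y = np.array([0, 0, 1, 1, 0, 1])

nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
nn.fit(X, y)

# Score new candidate areas; areas classified 1 become regions of interest.
candidates = np.array([[0.15, 1], [1.8, 4]])
print(nn.predict(candidates))   # e.g. [0 1]
```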

The object recognizer 160 recognizes a plurality of objects on the input image. In detail, the object recognizer 160 may detect a plurality of objects in the image input through the input unit 110 using Incremental Hierarchical MAX (IHMAX). The detailed configuration and operation of the object recognizer 160 will be described later with reference to FIG. 5.

Here, Incremental Hierarchical MAX (IHMAX) is an algorithm for extracting objects from an image that mimics the human visual information processing mechanism. It can incrementally learn a large amount of object information from complex real-world images, and can therefore perform recognition even on objects that have not been learned in advance.
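
A faithful IHMAX implementation is beyond a short example, but its backbone is the HMAX pattern of alternating template-matching ('S') and max-pooling ('C') layers. The toy sketch below shows only that pattern with a single template; the Gabor front end and the incremental template learning that give IHMAX its name are omitted.

```python
import numpy as np

def s_layer(image, template):
    """'S' stage: slide a learned template over the image and record a
    similarity score at every position (template matching)."""
    th, tw = template.shape
    h, w = image.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    t = template - template.mean()
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(t) + 1e-8
            out[i, j] = (p * t).sum() / denom   # normalized correlation
    return out

def c_layer(s_map, pool=4):
    """'C' stage: local max pooling, giving tolerance to position shifts."""
    h, w = s_map.shape
    return np.array([[s_map[i:i + pool, j:j + pool].max()
                      for j in range(0, w - pool + 1, pool)]
                     for i in range(0, h - pool + 1, pool)])

rng = np.random.default_rng(0)
image = rng.random((32, 32))
template = image[10:14, 10:14].copy()   # pretend this template was learned
c2 = c_layer(s_layer(image, template))
print(c2.max())                         # ~1.0 where the template occurs
```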

The recognition determiner 170 compares the detected region of interest with the detected objects and determines which of the objects are not mapped to the detected region of interest. In detail, the recognition determiner 170 identifies a newly detected object or an object with displacement among the objects recognized by the object recognizer 160, and determines any such object not included in the detected region of interest as an unmapped object (that is, an object not recognized by the user). The recognition determiner 170 may make this determination using semantic network correlation, which is described later with reference to FIG. 6.

The controller 180 controls each component of the user cognition analysis apparatus 100. In detail, the controller 180 controls the imaging unit 120 to capture the user's eye, and controls the region-of-interest detector 150 to detect the user's gaze path and region of interest from the captured eye image. The controller 180 may also control the object recognizer 160 to recognize the objects in the input image, control the recognition determiner 170 to determine whether the user has recognized the recognized objects, and control the output unit 130 to provide the user with information about the objects the user has not recognized.

Therefore, the user cognition analysis apparatus according to the present embodiment can detect important changed information missed by the user and present it to the user, which can help improve the user's cognition.

Although the present embodiment describes only displaying objects not recognized by the user through the output unit 130, in an implementation the information about such objects may instead be stored in the storage 140, printed through a printing apparatus, output as voice, or transmitted to a specific device.

FIG. 2 is a diagram for describing a specific form of the input unit and the imaging unit of FIG. 1.

Specifically, referring to FIG. 2A, a glasses-type interface device is shown. When the glasses-type interface device is applied to the user cognition analysis apparatus, the input unit 110 may be implemented as an external camera that captures the area corresponding to the user's gaze, and the imaging unit 120 may be implemented as a gaze camera that captures the user's pupil.

Referring to FIG. 2B, an eye tracker is shown. When the eye tracker is applied to the user cognition analysis apparatus 100, the input unit 110 may receive the image displayed on the eye tracker and display the received image on the eye tracker, and the camera of the eye tracker that captures the user's pupil may serve as the imaging unit 120.

FIG. 3 is a view for explaining the operation of the user cognition analysis apparatus according to an embodiment of the present invention.

Referring to FIG. 3, first, an image displayed to the user, or an image of the real environment the user is looking at, is input through the input unit 110. The region-of-interest detector 150 then detects the user's gaze path.

When the gaze path is detected, the region-of-interest detector 150 may extract gaze feature information such as pupil change, eye blinking, gaze fixation point, 'time to first fixation', 'fixation length', 'fixation count', 'observation length', 'observation count', 'fixations before', and 'participant %'.

When the gaze feature information is extracted, the region-of-interest detector 150 may detect the user's region of interest using the NN (neural network) learner.

In addition, the object recognizer 160 recognizes a plurality of objects in the image input through the input unit 110, and the recognition determiner 170 may determine whether the user has recognized each of the recognized objects using the detected region of interest.

FIG. 4 is a diagram for describing the operation of the region-of-interest detector of FIG. 1.

In detail, the gaze path detected by the region-of-interest detector needs to be calibrated according to the size of the monitor or of the external image. That is, to map the position of the user's pupil to the area of the real object, a calibration (mapping) process as shown in FIG. 4 may be performed before detecting the region of interest.
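
The patent leaves the calibration procedure unspecified; one common minimal approach, sketched below purely as an assumption, is to record pupil positions while the user fixates a few known screen targets and fit an affine pupil-to-screen map by least squares.

```python
import numpy as np

def fit_affine(pupil_pts, screen_pts):
    """Fit screen = [px, py, 1] @ A by least squares from calibration pairs."""
    P = np.hstack([pupil_pts, np.ones((len(pupil_pts), 1))])  # (n, 3)
    A, *_ = np.linalg.lstsq(P, screen_pts, rcond=None)        # (3, 2)
    return A

def pupil_to_screen(A, pupil_xy):
    px, py = pupil_xy
    return np.array([px, py, 1.0]) @ A

# Calibration: pupil positions recorded while the user fixates known targets.
pupil = np.array([[0.2, 0.3], [0.8, 0.3], [0.2, 0.7], [0.8, 0.7]])
screen = np.array([[0, 0], [1920, 0], [0, 1080], [1920, 1080]])
A = fit_affine(pupil, screen)
print(pupil_to_screen(A, (0.5, 0.5)))   # approximately [960, 540]
```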

FIG. 5 is a diagram for describing the detailed configuration of the object recognizer of FIG. 1.

Referring to FIG. 5, the object recognizer 160 includes an image information extractor 161, a CSD processor 162, an ICA processor 163, and an extractor 164.

The image information extractor 161 extracts image information about brightness (I), edge (E), and complementary colors (RG and BY) of the input image. In detail, the image information extractor 161 may extract at least one piece of image information among brightness, edge, symmetry, and complementary colors of the input image based on the R (red), G (green), and B (blue) values of the image input from the input unit 110.

The CSD processor 162 may generate a brightness feature map, a directional feature map, a symmetry feature map, and a color feature map by performing a center-surround difference (CSD) and normalization process on the extracted image information.

The ICA processor 163 generates a saliency map (SM) by performing independent component analysis on the output feature maps.

The extractor 164 recognizes salient regions on the saliency map as objects. In detail, the extractor 164 may perform reinforcement or suppression processing on the plurality of salient points included in the saliency map output from the ICA processor 163 to assign priorities to the salient points, and may detect an area whose priority exceeds a certain level as an object.
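
As a rough sketch of this saliency pipeline (the channel set, the Gaussian scales, and the replacement of the ICA combination stage by a plain normalized average are our simplifying assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(channel, c_sigma=1.0, s_sigma=8.0):
    """Center-surround difference (CSD): fine minus coarse Gaussian blur."""
    return np.abs(gaussian_filter(channel, c_sigma) -
                  gaussian_filter(channel, s_sigma))

def normalize(m):
    """Normalization step: scale a feature map into [0, 1]."""
    m = m - m.min()
    return m / (m.max() + 1e-8)

def saliency_map(rgb):
    """Toy saliency map from brightness and complementary-color channels.

    The patent combines the feature maps via independent component
    analysis; here they are simply averaged after normalization.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g                      # red/green opponency (RG)
    by = b - (r + g) / 2.0          # blue/yellow opponency (BY)
    maps = [normalize(center_surround(ch)) for ch in (intensity, rg, by)]
    return normalize(sum(maps) / len(maps))

# Usage: a red square on a gray background becomes the salient region.
img = np.full((64, 64, 3), 0.5)
img[24:40, 24:40] = [1.0, 0.1, 0.1]
sm = saliency_map(img)
print(np.unravel_index(sm.argmax(), sm.shape))  # near the square's border
```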

FIG. 6 is a diagram for describing an operation of the recognition determiner of FIG. 1.

According to the "memory retrieval" hypothesis in cognitive psychology, when faced with a problem associated with a familiar situation, a person tries to solve it by recalling a specific example related to the problem. That is, the person retrieves from memory relevant information that will lead to a solution.

Accordingly, the user cognition analysis apparatus according to the present embodiment can help improve the user's cognition by presenting relevant information using a probabilistic semantic network, compensating for humans' limited memory capacity and memory retrieval ability.

In detail, the objects selected by the user's gaze information may have one or more semantic relations, and a semantic relation may be obtained by measuring the semantic similarity between words. The semantic similarity between objects may be represented as shown in FIG. 6.

Referring to FIG. 6, the nodes comprise object nodes and function/action nodes. The edges connecting two nodes carry probability values that represent semantic similarity, including temporal and spatial similarity, which can be obtained through Latent Semantic Analysis (LSA).

The semantic network for cognitive improvement expresses the semantic similarity between object nodes and function/action nodes as a network. A function node is a node for retrieving candidate objects that are highly related to the recognized objects; it is set as the function/action node most relevant to the objects in order to reduce the ambiguity of the objects' context.

Therefore, in the present embodiment, correlation analysis over the semantic network between the objects selected by the gaze makes it possible to discriminate and analyze how the current information differs from the information recorded in an existing database.

In addition, for an important information change elsewhere, where the gaze does not stay, the change from the existing information can be recognized by using the object recognition model together with the information in the semantic network.
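
One way to realize such LSA-based edge weights, sketched here with an invented toy corpus standing in for real context documents, is to embed object words by truncated SVD of a term-document matrix and use their cosine similarity as the probability-like weight:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy corpus: each document describes a scene context (illustrative only).
docs = [
    "cup coffee drink table kitchen",
    "kettle coffee boil kitchen stove",
    "monitor keyboard desk office work",
    "mouse keyboard monitor office computer",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)                  # term-document count matrix
lsa = TruncatedSVD(n_components=2, random_state=0)
lsa.fit(X)
term_emb = lsa.components_.T                 # one LSA vector per word

def semantic_similarity(w1, w2):
    """Cosine similarity of two words' LSA vectors, used as an edge weight."""
    idx = vec.vocabulary_
    a, b = term_emb[idx[w1]], term_emb[idx[w2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

print(semantic_similarity("cup", "kettle"))    # high: shared kitchen context
print(semantic_similarity("cup", "keyboard"))  # low: different context
```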

FIG. 7 is a flowchart illustrating a user cognition analysis method according to an embodiment of the present invention.

Referring to FIG. 7, first, an image displayed to the user is received (S710). In detail, an image displayed on a display device may be input, or a captured image of the area viewed by the user may be input.

Next, the user's gaze path for the image is detected. In detail, the user's pupil is photographed (S720), the user's gaze path is detected using the captured pupil image (S730), gaze feature information is extracted from the detected gaze path, and the user's region of interest is detected using the extracted gaze feature information (S740). Here, the gaze feature information includes pupil change, eye blinking, gaze fixation point, 'time to first fixation', 'fixation length', 'fixation count', 'observation length', 'observation count', 'fixations before', and 'participant %'.

Next, a plurality of objects on the input image are recognized (S750). Specifically, the plurality of objects in the input image may be detected using Incremental Hierarchical MAX (IHMAX).

In operation S760, whether the user has recognized each of the objects is determined by comparing the detected region of interest with the detected objects. Specifically, a newly detected object or an object with displacement is identified among the recognized objects, and any such object not included in the detected region of interest is determined as unmapped (that is, not recognized by the user).
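
A minimal sketch of this mapping check, assuming (the patent does not fix a representation) that both the candidate objects and the regions of interest are given as bounding boxes:

```python
def overlap_ratio(a, b):
    """Fraction of box a covered by box b; boxes are (x0, y0, x1, y1)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    area = (a[2] - a[0]) * (a[3] - a[1])
    return ix * iy / area if area else 0.0

def unrecognized_objects(objects, rois, thresh=0.5):
    """Objects whose boxes are not covered by any region of interest are
    treated as unmapped, i.e. not recognized by the user."""
    return [name for name, box in objects.items()
            if all(overlap_ratio(box, roi) < thresh for roi in rois)]

# Usage: the 'sign' object was never gazed at, so it is reported.
objects = {"car": (10, 10, 50, 40), "sign": (200, 20, 240, 60)}
rois = [(0, 0, 80, 80)]
print(unrecognized_objects(objects, rois))   # ['sign']
```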

In operation S770, information about the objects not recognized by the user is displayed according to the determination result.

Therefore, the user cognition analysis method according to the present embodiment can detect important changed information missed by the user and present it to the user, which can help improve the user's cognition. The user cognition analysis method shown in FIG. 7 may be executed on a user cognition analysis apparatus having the configuration of FIG. 1, or on a user cognition analysis apparatus having another configuration.

While preferred embodiments of the present invention have been illustrated and described above, the present invention is not limited to the specific embodiments described. Various modifications may be made by anyone of ordinary skill in the art to which the present invention pertains without departing from the gist of the invention as claimed in the claims, and such modifications fall within the scope of the claims.

100: user cognitive analysis device 110: input unit
120: imaging unit 130: output unit
140: storage unit 150: region of interest detection unit
160: object recognition unit 170: recognition determination unit
180: controller

Claims (14)

  1. A user cognition analysis apparatus comprising:
    an input unit that receives an image displayed to a user;
    a region-of-interest detector that detects the user's gaze path with respect to the image and detects the user's region of interest in the image based on the time the gaze stays in the same area and the number of times the gaze stays in the same area along the detected gaze path;
    an object recognizer that recognizes a plurality of objects on the input image;
    a recognition determiner that compares the detected region of interest with the recognized objects and determines whether the user has recognized each of the objects; and
    an output unit that displays information about an object not recognized by the user according to the determination result.
  2. The apparatus of claim 1,
    wherein the region-of-interest detector extracts gaze feature information from the detected gaze path and detects the user's region of interest using the extracted gaze feature information.
  3. The apparatus of claim 2,
    wherein the gaze feature information is at least one of the user's pupil change, eye blinking, gaze fixation point, path of gaze fixation points, time the gaze stays in the same area, and number of times the gaze stays in the same area.
  4. delete
  5. The apparatus of claim 1,
    wherein the object recognizer comprises:
    an image information extractor that extracts at least one piece of image information among brightness, edge, symmetry, and complementary colors of the input image;
    a CSD processor that performs a center-surround difference (CSD) and normalization process on the extracted image information to output at least one feature map among a brightness feature map, a directional feature map, a symmetry feature map, and a color feature map;
    an ICA processor that generates a saliency map by performing independent component analysis on the output feature maps; and
    an extractor that recognizes salient regions on the saliency map as objects.
  6. The apparatus of claim 1,
    wherein the object recognizer detects the plurality of objects in the input image using Incremental Hierarchical MAX (IHMAX).
  7. The apparatus of claim 1,
    wherein the object recognizer recognizes the plurality of objects on the input image in real time, and
    the recognition determiner identifies a newly detected object or an object having a displacement among the recognized objects and determines the newly detected object or the object having a displacement that is not included in the detected region of interest as an unmapped object.
  8. A user cognition analysis method of a user cognition analysis apparatus, the method comprising:
    receiving an image displayed to a user;
    detecting the user's gaze path with respect to the image;
    detecting the user's region of interest in the image based on the time the gaze stays in the same area and the number of times the gaze stays in the same area along the detected gaze path;
    recognizing a plurality of objects on the received image;
    comparing the detected region of interest with the recognized objects to determine whether the user has recognized each of the objects; and
    displaying information about an object not recognized by the user according to the determination result.
  9. The method of claim 8,
    wherein the detecting of the region of interest comprises extracting gaze feature information from the detected gaze path and detecting the user's region of interest using the extracted gaze feature information.
  10. The method of claim 9,
    wherein the gaze feature information is at least one of the user's pupil change, eye blinking, gaze fixation point, path of gaze fixation points, time the gaze stays in the same area, and number of times the gaze stays in the same area.
  11. delete
  12. The method of claim 8,
    wherein the recognizing of the objects comprises:
    extracting at least one piece of image information among brightness, edge, symmetry, and complementary colors of the input image;
    performing a center-surround difference (CSD) and normalization process on the extracted image information to output at least one feature map among a brightness feature map, a directional feature map, a symmetry feature map, and a color feature map;
    generating a saliency map by performing independent component analysis on the output feature maps; and
    recognizing salient regions on the saliency map as objects.
  13. The method of claim 8,
    wherein the recognizing of the objects comprises detecting the plurality of objects in the input image using Incremental Hierarchical MAX (IHMAX).
  14. The method of claim 8,
    wherein the recognizing of the objects comprises recognizing the plurality of objects on the input image in real time, and
    the determining comprises identifying a newly detected object or an object having a displacement among the recognized objects and determining the newly detected object or the object having a displacement that is not included in the detected region of interest as an unmapped object.
KR1020110129936A 2011-12-06 2011-12-06 Analysis device of user cognition and method for analysis of user cognition KR101343875B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110129936A KR101343875B1 (en) 2011-12-06 2011-12-06 Analysis device of user cognition and method for analysis of user cognition

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR1020110129936A KR101343875B1 (en) 2011-12-06 2011-12-06 Analysis device of user cognition and method for analysis of user cognition
PCT/KR2012/010025 WO2013085193A1 (en) 2011-12-06 2012-11-26 Apparatus and method for enhancing user recognition
US14/363,203 US9489574B2 (en) 2011-12-06 2012-11-26 Apparatus and method for enhancing user recognition

Publications (2)

Publication Number Publication Date
KR20130063430A KR20130063430A (en) 2013-06-14
KR101343875B1 true KR101343875B1 (en) 2013-12-23

Family

ID=48860789

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110129936A KR101343875B1 (en) 2011-12-06 2011-12-06 Analysis device of user cognition and method for analysis of user cognition

Country Status (1)

Country Link
KR (1) KR101343875B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101517538B1 (en) * 2013-12-31 2015-05-15 전남대학교산학협력단 Apparatus and method for detecting importance region using centroid weight mask map and storage medium recording program therefor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001194161A (en) 2000-01-11 2001-07-19 Alpine Electronics Inc Device for presenting information about detection of movement of line of sight
JP2002083400A (en) * 2000-09-06 2002-03-22 Honda Motor Co Ltd On-vehicle information processor for judging compatibility of view area of driver
JP2008082822A (en) 2006-09-27 2008-04-10 Denso It Laboratory Inc Watching object detector and watching object detection method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160026565A (en) 2014-09-01 2016-03-09 상명대학교서울산학협력단 method for 3-D eye-gage tracking

Also Published As

Publication number Publication date
KR20130063430A (en) 2013-06-14

Similar Documents

Publication Title
US10402632B2 (en) Pose-aligned networks for deep attribute modeling
JP5008269B2 (en) Information processing apparatus and information processing method
CN102147856B (en) Image recognition apparatus and its control method
Shreve et al. Macro-and micro-expression spotting in long videos using spatio-temporal strain
CN102365645B (en) Organizing digital images by correlating faces
Papadopoulos et al. Training object class detectors from eye tracking data
CN101271517B (en) Face region detecting device and method
JP2005227957A (en) Optimal face image recording device and optimal face image recording method
JP5247480B2 (en) Object identification device and object identification method
JP2017503276A (en) Apparatus and method for acquiring iris recognition image using face component distance
Krishna et al. A wearable face recognition system for individuals with visual impairments
KR100947990B1 (en) Gaze Tracking Apparatus and Method using Difference Image Entropy
WO2012105196A1 (en) Interest estimation device and interest estimation method
Toet et al. Towards cognitive image fusion
TW201322050A (en) Electronic device and read guiding method thereof
US20160062456A1 (en) Method and apparatus for live user recognition
Li et al. Learning to predict gaze in egocentric video
JP5227911B2 (en) Surveillance video retrieval device and surveillance system
US20150230773A1 (en) Apparatus and method for lesion detection
KR20120069922A (en) Face recognition apparatus and method thereof
CN106951867B (en) Face identification method, device, system and equipment based on convolutional neural networks
JP2011134114A (en) Pattern recognition method and pattern recognition apparatus
JP2008192100A (en) Eyelid detector and program
KR20120045667A (en) Apparatus and method for generating screen for transmitting call using collage
JP5793353B2 (en) Face image search system and face image search method

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20161123

Year of fee payment: 4

FPAY Annual fee payment

Payment date: 20171110

Year of fee payment: 5

FPAY Annual fee payment

Payment date: 20181126

Year of fee payment: 6

FPAY Annual fee payment

Payment date: 20191204

Year of fee payment: 7