CN110334588A - Kinship recognition method and device based on a local-feature attention network - Google Patents

Kinship recognition method and device based on a local-feature attention network

Info

Publication number
CN110334588A
Authority
CN
China
Prior art keywords
local feature
result
feature
network
kinship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910434461.2A
Other languages
Chinese (zh)
Inventor
闫海滨
王仕伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201910434461.2A
Publication of CN110334588A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The invention discloses a kinship recognition method and device based on a local-feature attention network. The method comprises: obtaining multiple person images; extracting local features from each person image with a pre-trained local-feature attention network, and applying effective-feature enhancement to the local features; and identifying the kinship of the multiple person images from the processed local features, thereby improving the accuracy of kinship recognition.

Description

Kinship recognition method and device based on a local-feature attention network
Technical field
The present invention relates to the technical field of computer vision, and in particular to a kinship recognition method and device based on a local-feature attention network.
Background art
Kinship verification has many practical uses, such as investigating social relationships, searching for missing persons and detecting impersonation. Technology related to kinship is therefore drawing increasing attention from researchers worldwide. It is, however, a very challenging task. Unlike generic face recognition, kinship verification faces additional difficulties: between two related persons, the ages often differ greatly and the genders may differ as well. In other words, conventional methods that assess overall facial similarity are of little use for kinship verification, and kinship verification calls for further exploration.
Studies by biologists and psychologists have shown that heredity carries information about the facial organs: two people related by blood are usually similar in certain facial parts rather than across the whole face. Consequently, the prior-art practice of performing kinship recognition with whole-face features yields low recognition accuracy.
Summary of the invention
In view of this, an object of the invention is to provide a kinship recognition method and device based on a local-feature attention network that can improve the accuracy of kinship recognition.
To this end, the invention provides a kinship recognition method based on a local-feature attention network, comprising:
obtaining multiple person images;
extracting local features from each person image with a pre-trained local-feature attention network, and applying effective-feature enhancement to the local features; and
identifying the kinship of the multiple person images from the processed local features.
Further, the local-feature attention network comprises, arranged in sequence: a first convolutional layer, a first attention structure, a first pooling layer, a second convolutional layer, a second attention structure, a second pooling layer, a third convolutional layer, a third attention structure and a fully connected layer.
Further, each of the first attention structure, the second attention structure and the third attention structure comprises, arranged in sequence: a max-pooling layer, a fourth convolutional layer and an upsampling layer.
Further, extracting local features from each person image with the pre-trained local-feature attention network and applying effective-feature enhancement to the local features specifically comprises:
inputting the data of the multiple person images into the first convolutional layer, which outputs a first local feature;
inputting the first local feature into the first attention structure for feature mapping, which outputs a first mapping result;
inputting the first mapping result sequentially into the first pooling layer and the second convolutional layer, which output a second local feature;
inputting the second local feature into the second attention structure for feature mapping, which outputs a second mapping result;
inputting the second mapping result sequentially into the second pooling layer and the third convolutional layer, which output a third local feature;
inputting the third local feature into the third attention structure for feature mapping, which outputs a third mapping result;
inputting the third mapping result into the fully connected layer, which outputs a fourth local feature, the fourth local feature being the local feature after effective-feature enhancement.
Further, the feature mapping performed by each attention structure is computed as:
P(x) = (1 + F(x)) * C(x);
where C(x) is the output of the convolutional layer, F(x) is the activation function, and P(x) is the mapping result.
Further, before obtaining the multiple person images, the method further comprises:
building the local-feature attention network;
obtaining multiple groups of person image samples, each group comprising an unoccluded image sample and a partially occluded image sample of the same person image;
inputting the groups of person image samples into the local-feature attention network to train the local-feature attention network, and outputting a sample occlusion result and a kinship result;
computing a final loss result from the sample occlusion result and the kinship result, and feeding the final loss result back to the local-feature attention network.
Further, computing the final loss result from the sample occlusion result and the kinship result specifically comprises:
computing a first loss result for the sample occlusion result with a cross-entropy loss function;
computing a second loss result for the kinship result with the cross-entropy loss function;
computing the final loss result from the first loss result and the second loss result.
Further, the cross-entropy loss function is:
Loss(x, class) = -x[class] + log(Σ_j exp(x[j]));
where x is the sample occlusion result or the kinship result, class is the label of a group of person image samples, and Loss(x, class) is the loss result.
Further, the final loss result is computed as:
Loss = Loss_lp + λ * Loss_ks;
where Loss is the final loss result, Loss_lp is the first loss result, Loss_ks is the second loss result, and λ is a weight.
An embodiment of the invention further provides a kinship recognition device based on a local-feature attention network, which can implement all steps of the above kinship recognition method based on a local-feature attention network. The device comprises:
an image acquisition module for obtaining multiple person images;
a feature extraction module for extracting local features from each person image with a pre-trained local-feature attention network, and applying effective-feature enhancement to the local features; and
a recognition module for identifying the kinship of the multiple person images from the processed local features.
As can be seen from the above, the kinship recognition method and device based on a local-feature attention network provided by the invention extract local features from multiple person images with a pre-trained local-feature attention network and apply effective-feature enhancement to the local features, which raises the weight given to discriminative local features; kinship is then identified from the processed local features. By fully exploiting the local similarity of relatives' images, the recognition performance for kinship is improved and the recognition accuracy is effectively increased.
Brief description of the drawings
Fig. 1 is a flow diagram of the kinship recognition method based on a local-feature attention network provided by an embodiment of the invention;
Fig. 2 is a schematic diagram of the local-feature attention network in the kinship recognition method based on a local-feature attention network provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of an attention structure in the kinship recognition method based on a local-feature attention network provided by an embodiment of the invention;
Fig. 4 is a schematic diagram of training the local-feature attention network in the kinship recognition method based on a local-feature attention network provided by an embodiment of the invention;
Fig. 5 is a schematic structural diagram of the kinship recognition device based on a local-feature attention network provided by an embodiment of the invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Referring to Fig. 1, which is a flow diagram of the kinship recognition method based on a local-feature attention network provided by an embodiment of the invention, the method comprises:
S1: obtain multiple person images.
In this embodiment, the multiple person images are images of the multiple persons whose kinship is to be identified; each person image contains the face of the corresponding person.
S2: extract local features from each person image with a pre-trained local-feature attention network, and apply effective-feature enhancement to the local features.
In this embodiment, a local feature is a facial feature in a person image, such as one of the five facial feature points: the left eye, the right eye, the nose, the left mouth corner and the right mouth corner. Effective-feature enhancement means enhancing the useful local features while suppressing the influence of low-value features.
Specifically, as shown in Fig. 2, the local-feature attention network consists of a convolutional neural network with attention structures inserted into it. The convolutional neural network comprises, in sequence, a first convolutional layer 11, a first pooling layer 13, a second convolutional layer 14, a second pooling layer 16, a third convolutional layer 17 and a fully connected layer 19. A first attention structure 12 is inserted between the first convolutional layer 11 and the first pooling layer 13, a second attention structure 15 between the second convolutional layer 14 and the second pooling layer 16, and a third attention structure 18 between the third convolutional layer 17 and the fully connected layer 19. Each of the first attention structure 12, the second attention structure 15 and the third attention structure 18 comprises, in sequence, a max-pooling layer 20, a fourth convolutional layer 21 and an upsampling layer 22. The attention structures follow a bottom-up, top-down design and are combined with the convolutional neural network to form the local-feature attention network; multiple attention structures are used to capture different types of local feature information.
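For concreteness, the following is a minimal PyTorch sketch of the layer arrangement just described, using the channel counts, kernel sizes and 64×64 six-channel input given in the worked example below. It is illustrative only: the class names, the 3×3 kernel and bilinear upsampling inside the attention branch, and the omission of the normalization layers mentioned later are assumptions of this sketch, not the reference implementation of the disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBlock(nn.Module):
    """Bottom-up/top-down attention structure: max-pooling -> convolution ->
    upsampling, combined with its input via the residual gate
    P(x) = (1 + F(x)) * C(x) described below (Fig. 3)."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.MaxPool2d(2, ceil_mode=True)
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, c):
        m = self.conv(self.pool(c))
        # upsample back to the input's spatial size, squash to (0, 1), then gate
        f = torch.sigmoid(F.interpolate(m, size=c.shape[-2:], mode='bilinear',
                                        align_corners=False))
        return (1.0 + f) * c

class LocalFeatureAttentionNet(nn.Module):
    """Two stacked 64x64 RGB images (6 channels) -> 12 outputs
    (five occlusion heads plus one kinship head, each two-class)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(6, 32, kernel_size=5)    # 64x64x6  -> 60x60x32
        self.att1 = AttentionBlock(32)
        self.pool1 = nn.MaxPool2d(2)                    # 60x60x32 -> 30x30x32
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)   # 30x30x32 -> 26x26x64
        self.att2 = AttentionBlock(64)
        self.pool2 = nn.MaxPool2d(2)                    # 26x26x64 -> 13x13x64
        self.conv3 = nn.Conv2d(64, 128, kernel_size=5)  # 13x13x64 -> 9x9x128
        self.att3 = AttentionBlock(128)
        self.fc = nn.Sequential(                        # 9*9*128 -> 512 -> 12
            nn.Linear(9 * 9 * 128, 512),
            nn.ReLU(),
            nn.Linear(512, 12),
        )

    def forward(self, x):
        x = self.att1(F.relu(self.conv1(x)))
        x = self.att2(F.relu(self.conv2(self.pool1(x))))
        x = self.att3(F.relu(self.conv3(self.pool2(x))))
        return self.fc(torch.flatten(x, 1))
```

Normalization layers, mentioned later in this embodiment, could be added after each convolution without changing the overall layout.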
Specifically, step S2 includes:
inputting the data of the multiple person images into the first convolutional layer, which outputs a first local feature;
inputting the first local feature into the first attention structure for feature mapping, which outputs a first mapping result;
inputting the first mapping result sequentially into the first pooling layer and the second convolutional layer, which output a second local feature;
inputting the second local feature into the second attention structure for feature mapping, which outputs a second mapping result;
inputting the second mapping result sequentially into the second pooling layer and the third convolutional layer, which output a third local feature;
inputting the third local feature into the third attention structure for feature mapping, which outputs a third mapping result;
inputting the third mapping result into the fully connected layer, which outputs a fourth local feature, the fourth local feature being the local feature after effective-feature enhancement.
It should be noted that the convolutional layers and the fully connected layer in the local-feature attention network use ReLU as the activation function; the first convolutional layer and the second convolutional layer are each followed by a pooling layer, and the third convolutional layer is followed by a fully connected layer. Normalization layers are added to improve network performance.
During recognition, the multiple person images are stacked. For example, two person images, each a three-channel RGB image of 64×64 pixels, are stacked to form a 64×64 input with 6 channels. As shown in Fig. 2, the person-image data are fed into the first convolutional layer 11, which has 32 kernels of size 5×5×6 with stride 1, so the first convolutional layer 11 outputs a first local feature of size 60×60×32. The first local feature is passed through the first attention structure 12 for feature mapping; the resulting first mapping result 23 is fed into the first pooling layer 13, which outputs 30×30×32 data as the input of the second convolutional layer 14. The second convolutional layer 14 filters the previous layer's output with 64 kernels of size 5×5×32, yielding a second local feature of size 26×26×64. Likewise, the second local feature is passed through the second attention structure 15 for feature mapping; the resulting second mapping result 24 is fed into the second pooling layer 16, which outputs 13×13×64 data as the input of the third convolutional layer 17. The third convolutional layer 17 has 128 kernels, each of size 5×5×64, and outputs a third local feature of size 9×9×128, which is passed through the third attention structure 18 for feature mapping; the resulting third mapping result 25 is fed into the fully connected layer 19. The fully connected layer 19 projects the input data into a subspace with 512 neurons and then into a subspace with 12 neurons, yielding the fourth local feature, i.e. the local feature after effective-feature enhancement.
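As a usage sketch of the hypothetical LocalFeatureAttentionNet class above, the two 64×64 RGB images would be stacked along the channel axis and a single forward pass would yield the 12 outputs just described (the data here are random stand-ins for illustration):

```python
import torch

# stand-in data for the two 64x64 RGB face images of the pair to be verified
img_a = torch.rand(3, 64, 64)
img_b = torch.rand(3, 64, 64)
pair = torch.cat([img_a, img_b], dim=0).unsqueeze(0)   # shape (1, 6, 64, 64)

model = LocalFeatureAttentionNet()                     # sketch defined above
out = model(pair)
print(out.shape)                                       # torch.Size([1, 12])
```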
An attention structure is inserted after each convolutional layer to imitate fast feedforward and feedback attention processes and to concentrate attention on the effective local features. As shown in Fig. 3, each attention structure comprises, in sequence, a max-pooling layer 20, a fourth convolutional layer 21 and an upsampling layer 22. The output of the convolutional layer preceding each attention structure is denoted C(x), and the sigmoid activation function F(x) converts C(x) into a feature map with values between 0 and 1. However, repeatedly multiplying by feature maps with values between 0 and 1 reduces the weight of deep features and can even destroy the good properties of the local-feature network. To amplify the effective local features without degrading the original response, a residual method is used, with the following formula:
P(x) = (1 + F(x)) * C(x);
where C(x) is the output of the convolutional layer, F(x) is the activation function, and P(x) is the mapping result.
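A small numeric illustration of this design choice (the values are arbitrary stand-ins): gating with F(x)·C(x) alone shrinks the activations, whereas the residual form (1 + F(x))·C(x) never reduces them, so stacking several attention structures does not wash out the deep features.

```python
import torch

c = torch.full((4,), 2.0)       # stand-in for convolutional activations C(x)
f = torch.sigmoid(c)            # attention mask F(x), values in (0, 1)

plain = f * c                   # naive gating: magnitudes shrink (about 1.76 here)
residual = (1 + f) * c          # residual gating: magnitudes grow (about 3.76 here)
print(plain, residual)
```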
S3: identify the kinship of the multiple person images from the processed local features.
In this embodiment, after the local features processed by effective-feature enhancement are obtained, they are fed into a soft-max classifier. For example, two person images processed by the local-feature attention network yield 12 values, which the classifier interprets as six two-class results representing the kinship recognition result 44 and the image occlusion result 45. The first five of the six two-class results indicate the occlusion state of the five facial feature points (during recognition the five feature points are generally not occluded), and the last one is the kinship recognition result, i.e. whether the two person images have a kinship.
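The following sketch shows one way the 12 outputs could be read as six two-class decisions. It assumes the ordering described above (five occlusion heads followed by the kinship head) and that class index 1 means "occluded" or "related"; both are illustrative assumptions, not fixed by this disclosure.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 12)                    # stand-in for the network's 12 outputs
heads = logits.view(-1, 6, 2)                  # six two-class heads

probs = F.softmax(heads, dim=-1)               # soft-max per head
occlusion_pred = probs[:, :5].argmax(dim=-1)   # heads 0-4: is each feature point occluded?
kinship_pred = probs[:, 5].argmax(dim=-1)      # head 5: do the two images share kinship?

landmarks = ["left eye", "right eye", "nose", "left mouth corner", "right mouth corner"]
for name, occluded in zip(landmarks, occlusion_pred[0].tolist()):
    print(f"{name}: {'occluded' if occluded else 'visible'}")
print("kinship:", bool(kinship_pred.item()))
```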
Further, before step S1, the method further comprises:
building the local-feature attention network;
obtaining multiple groups of person image samples, each group comprising an unoccluded image sample and a partially occluded image sample of the same person image;
inputting the groups of person image samples into the local-feature attention network to train the local-feature attention network, and outputting a sample occlusion result and a kinship result;
computing a final loss result from the sample occlusion result and the kinship result, and feeding the final loss result back to the local-feature attention network.
In this embodiment, as shown in Fig. 4, when the local-feature attention network is trained, multiple complete person image samples (i.e. unoccluded image samples) 41 are obtained, and in each complete person image sample 41 one facial feature point is randomly selected and occluded to generate a partially occluded image sample 42; each complete person image sample 41 and its corresponding partially occluded image sample 42 together form one group of person image samples. The facial feature points are the left eye, the right eye, the nose, the left mouth corner and the right mouth corner; a feature point is occluded with a 9×9 solid-colour square whose colour is close to the colour around the occluded region. For example, in Fig. 4 the left eye of the complete person image sample 41 is occluded, yielding the partially occluded image sample 42.
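A minimal sketch of this occlusion step is given below. It assumes the five landmark coordinates are already available (e.g. from a face landmark detector), and it approximates "a colour close to the surrounding region" by the mean colour of a slightly larger neighbourhood; both choices are illustrative assumptions.

```python
import random
import numpy as np

LANDMARKS = ["left_eye", "right_eye", "nose", "left_mouth", "right_mouth"]

def occlude_random_landmark(image, landmarks, size=9):
    """image: HxWx3 uint8 array; landmarks: dict name -> (row, col).
    Covers one randomly chosen landmark with a size x size solid square whose
    colour approximates the surrounding region."""
    name = random.choice(LANDMARKS)
    r, c = landmarks[name]
    half = size // 2
    r0, r1 = max(r - half, 0), min(r + half + 1, image.shape[0])
    c0, c1 = max(c - half, 0), min(c + half + 1, image.shape[1])
    # mean colour of a larger neighbourhood stands in for the surrounding colour
    n0, n1 = max(r - size, 0), min(r + size + 1, image.shape[0])
    m0, m1 = max(c - size, 0), min(c + size + 1, image.shape[1])
    fill = image[n0:n1, m0:m1].reshape(-1, 3).mean(axis=0).astype(image.dtype)
    occluded = image.copy()
    occluded[r0:r1, c0:c1] = fill
    return occluded, name

# usage with stand-in data
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
pts = {"left_eye": (24, 20), "right_eye": (24, 44), "nose": (36, 32),
       "left_mouth": (48, 22), "right_mouth": (48, 42)}
masked, which = occlude_random_landmark(img, pts)
```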
The groups of person image samples are fed into the local-feature attention network 43 to train it, and the network outputs the kinship recognition result 44 and the image occlusion result 45. The data of the image occlusion result 45 do not participate directly in kinship recognition; instead, they are fed back to the local-feature attention network 43 as part of the loss function, so that through this self-supervision mechanism the local-feature attention network 43 better concentrates its attention on the facial feature points.
Specifically, computing the final loss result from the sample occlusion result and the kinship result comprises:
computing a first loss result for the sample occlusion result with a cross-entropy loss function;
computing a second loss result for the kinship result with the cross-entropy loss function;
computing the final loss result from the first loss result and the second loss result.
Specifically, the cross-entropy loss function is:
Loss(x, class) = -x[class] + log(Σ_j exp(x[j]));
where x is the sample occlusion result or the kinship result, class is the label of a group of person image samples, and Loss(x, class) is the loss result.
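The stated formula is the standard cross-entropy computed on raw outputs; as a quick check, the following sketch compares it with PyTorch's built-in cross-entropy on an arbitrary two-class example (values are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([1.2, -0.3])          # two-class outputs for one head
cls = torch.tensor(0)                  # ground-truth label

manual = -x[cls] + torch.logsumexp(x, dim=0)        # -x[class] + log(sum_j exp(x[j]))
builtin = F.cross_entropy(x.unsqueeze(0), cls.unsqueeze(0))
print(manual.item(), builtin.item())   # the two values agree
```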
Specifically, the final loss result is computed as:
Loss = Loss_lp + λ * Loss_ks;
where Loss is the final loss result, Loss_lp is the first loss result, Loss_ks is the second loss result, and λ is a weight.
It should be noted that the kinship recognition result and the sample occlusion result are each passed through the cross-entropy function to compute a loss, the two losses are given different weights, and their sum is the final loss result. Since the sample occlusion result only serves as guidance for the verification, the kinship recognition result is given the higher weight λ; preferably, λ = 10.
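Putting the two parts together, a hedged sketch of the combined loss is shown below, assuming the 12-output layout used in the earlier snippets and one binary occlusion label per facial feature point; the batching and label encoding are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def total_loss(outputs, occlusion_labels, kinship_labels, lam=10.0):
    """outputs: (N, 12) network outputs; occlusion_labels: (N, 5) in {0, 1};
    kinship_labels: (N,) in {0, 1}. Returns Loss = Loss_lp + lam * Loss_ks."""
    heads = outputs.view(-1, 6, 2)
    # Loss_lp: cross-entropy over the five occlusion heads (self-supervision)
    loss_lp = F.cross_entropy(heads[:, :5].reshape(-1, 2),
                              occlusion_labels.reshape(-1))
    # Loss_ks: cross-entropy over the kinship head, weighted more heavily
    loss_ks = F.cross_entropy(heads[:, 5], kinship_labels)
    return loss_lp + lam * loss_ks

# stand-in batch of four image pairs
out = torch.randn(4, 12)
occ = torch.randint(0, 2, (4, 5))
kin = torch.randint(0, 2, (4,))
print(total_loss(out, occ, kin))
```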
With the kinship recognition method based on a local-feature attention network provided by the invention, local features can be extracted from multiple person images with a pre-trained local-feature attention network and then enhanced by effective-feature enhancement, which raises the weight given to discriminative local features; kinship is then identified from the processed local features. By fully exploiting the local similarity of relatives' images, the recognition performance for kinship is improved and the recognition accuracy is effectively increased.
Correspondingly, the invention also provides a kinship recognition device based on a local-feature attention network, which can implement all steps of the above kinship recognition method based on a local-feature attention network.
Referring to Fig. 5, which is a schematic structural diagram of the kinship recognition device based on a local-feature attention network provided by an embodiment of the invention, the device comprises:
an image acquisition module 1 for obtaining multiple person images;
a feature extraction module 2 for extracting local features from each person image with a pre-trained local-feature attention network, and applying effective-feature enhancement to the local features; and
a recognition module 3 for identifying the kinship of the multiple person images from the processed local features.
With the kinship recognition device based on a local-feature attention network provided by the invention, local features can be extracted from multiple person images with a pre-trained local-feature attention network and then enhanced by effective-feature enhancement, which raises the weight given to discriminative local features; kinship is then identified from the processed local features. By fully exploiting the local similarity of relatives' images, the recognition performance for kinship is improved and the recognition accuracy is effectively increased.
It should be understood by those of ordinary skill in the art that the discussion of any of the above embodiments is exemplary only and is not intended to imply that the scope of the disclosure (including the claims) is limited to these examples. Within the spirit of the invention, the technical features of the above embodiments or of different embodiments may be combined, the steps may be performed in any order, and many other variations of the different aspects of the invention described above exist; for brevity, they are not provided in detail.
In addition, to simplify the description and discussion, and so as not to obscure the invention, well-known power and ground connections to integrated circuit (IC) chips and other components may or may not be shown in the provided drawings. Furthermore, devices may be shown in block-diagram form in order not to obscure the invention, and also in view of the fact that the details of implementing such block-diagram arrangements depend heavily on the platform on which the invention is to be implemented (i.e. such details should be well within the understanding of those skilled in the art). Where specific details (e.g. circuits) are set forth to describe exemplary embodiments of the invention, it will be apparent to those skilled in the art that the invention can be practised without these details or with variations of them. Such descriptions should therefore be regarded as illustrative rather than restrictive.
Although the invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, the discussed embodiments may be used with other memory architectures (e.g. dynamic RAM (DRAM)).
The embodiments of the invention are intended to cover all such alternatives, modifications and variations that fall within the broad scope of the appended claims. Therefore, any omission, modification, equivalent replacement or improvement made within the spirit and principles of the invention shall be included within the scope of protection of the invention.

Claims (10)

1. A kinship recognition method based on a local-feature attention network, characterized by comprising:
obtaining multiple person images;
extracting local features from each person image with a pre-trained local-feature attention network, and applying effective-feature enhancement to the local features; and
identifying the kinship of the multiple person images from the processed local features.
2. The kinship recognition method based on a local-feature attention network according to claim 1, characterized in that the local-feature attention network comprises, arranged in sequence: a first convolutional layer, a first attention structure, a first pooling layer, a second convolutional layer, a second attention structure, a second pooling layer, a third convolutional layer, a third attention structure and a fully connected layer.
3. The kinship recognition method based on a local-feature attention network according to claim 2, characterized in that each of the first attention structure, the second attention structure and the third attention structure comprises, arranged in sequence: a max-pooling layer, a fourth convolutional layer and an upsampling layer.
4. The kinship recognition method based on a local-feature attention network according to claim 2, characterized in that extracting local features from each person image with the pre-trained local-feature attention network and applying effective-feature enhancement to the local features specifically comprises:
inputting the data of the multiple person images into the first convolutional layer, which outputs a first local feature;
inputting the first local feature into the first attention structure for feature mapping, which outputs a first mapping result;
inputting the first mapping result sequentially into the first pooling layer and the second convolutional layer, which output a second local feature;
inputting the second local feature into the second attention structure for feature mapping, which outputs a second mapping result;
inputting the second mapping result sequentially into the second pooling layer and the third convolutional layer, which output a third local feature;
inputting the third local feature into the third attention structure for feature mapping, which outputs a third mapping result;
inputting the third mapping result into the fully connected layer, which outputs a fourth local feature, the fourth local feature being the local feature after effective-feature enhancement.
5. The kinship recognition method based on a local-feature attention network according to claim 4, characterized in that the feature mapping performed by each attention structure is computed as:
P(x) = (1 + F(x)) * C(x);
where C(x) is the output of the convolutional layer, F(x) is the activation function, and P(x) is the mapping result.
6. The kinship recognition method based on a local-feature attention network according to claim 1, characterized in that, before obtaining the multiple person images, the method further comprises:
building the local-feature attention network;
obtaining multiple groups of person image samples, each group comprising an unoccluded image sample and a partially occluded image sample of the same person image;
inputting the groups of person image samples into the local-feature attention network to train the local-feature attention network, and outputting a sample occlusion result and a kinship result;
computing a final loss result from the sample occlusion result and the kinship result, and feeding the final loss result back to the local-feature attention network.
7. The kinship recognition method based on a local-feature attention network according to claim 6, characterized in that computing the final loss result from the sample occlusion result and the kinship result specifically comprises:
computing a first loss result for the sample occlusion result with a cross-entropy loss function;
computing a second loss result for the kinship result with the cross-entropy loss function;
computing the final loss result from the first loss result and the second loss result.
8. The kinship recognition method based on a local-feature attention network according to claim 7, characterized in that the cross-entropy loss function is:
Loss(x, class) = -x[class] + log(Σ_j exp(x[j]));
where x is the sample occlusion result or the kinship result, class is the label of a group of person image samples, and Loss(x, class) is the loss result.
9. The kinship recognition method based on a local-feature attention network according to claim 1, characterized in that the final loss result is computed as:
Loss = Loss_lp + λ * Loss_ks;
where Loss is the final loss result, Loss_lp is the first loss result, Loss_ks is the second loss result, and λ is a weight.
10. A kinship recognition device based on a local-feature attention network, capable of implementing the kinship recognition method based on a local-feature attention network according to any one of claims 1 to 9, characterized in that the device comprises:
an image acquisition module for obtaining multiple person images;
a feature extraction module for extracting local features from each person image with a pre-trained local-feature attention network, and applying effective-feature enhancement to the local features; and
a recognition module for identifying the kinship of the multiple person images from the processed local features.
CN201910434461.2A 2019-05-23 2019-05-23 Kinship recognition method and device based on a local-feature attention network Pending CN110334588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910434461.2A CN110334588A (en) 2019-05-23 2019-05-23 Kinship recognition methods and the device of network are paid attention to based on local feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910434461.2A CN110334588A (en) 2019-05-23 2019-05-23 Kinship recognition methods and the device of network are paid attention to based on local feature

Publications (1)

Publication Number Publication Date
CN110334588A true CN110334588A (en) 2019-10-15

Family

ID=68139183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910434461.2A Pending CN110334588A (en) 2019-05-23 2019-05-23 Kinship recognition methods and the device of network are paid attention to based on local feature

Country Status (1)

Country Link
CN (1) CN110334588A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668509A (en) * 2020-12-31 2021-04-16 深圳云天励飞技术股份有限公司 Training method and recognition method of social relationship recognition model and related equipment
CN113920573A (en) * 2021-11-22 2022-01-11 河海大学 Face change decoupling relativity relationship verification method based on counterstudy

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005774A (en) * 2015-07-28 2015-10-28 中国科学院自动化研究所 Face relative relation recognition method based on convolutional neural network and device thereof
US20160202756A1 (en) * 2015-01-09 2016-07-14 Microsoft Technology Licensing, Llc Gaze tracking via eye gaze model
CN108596211A (en) * 2018-03-29 2018-09-28 中山大学 It is a kind of that pedestrian's recognition methods again is blocked based on focusing study and depth e-learning
CN109543606A (en) * 2018-11-22 2019-03-29 中山大学 A kind of face identification method that attention mechanism is added
CN109784144A (en) * 2018-11-29 2019-05-21 北京邮电大学 A kind of kinship recognition methods and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160202756A1 (en) * 2015-01-09 2016-07-14 Microsoft Technology Licensing, Llc Gaze tracking via eye gaze model
CN105005774A (en) * 2015-07-28 2015-10-28 中国科学院自动化研究所 Face relative relation recognition method based on convolutional neural network and device thereof
CN108596211A (en) * 2018-03-29 2018-09-28 中山大学 It is a kind of that pedestrian's recognition methods again is blocked based on focusing study and depth e-learning
CN109543606A (en) * 2018-11-22 2019-03-29 中山大学 A kind of face identification method that attention mechanism is added
CN109784144A (en) * 2018-11-29 2019-05-21 北京邮电大学 A kind of kinship recognition methods and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FEI WANG et al.: "Residual Attention Network for Image Classification", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668509A (en) * 2020-12-31 2021-04-16 深圳云天励飞技术股份有限公司 Training method and recognition method of social relationship recognition model and related equipment
CN112668509B (en) * 2020-12-31 2024-04-02 深圳云天励飞技术股份有限公司 Training method and recognition method of social relation recognition model and related equipment
CN113920573A (en) * 2021-11-22 2022-01-11 河海大学 Face change decoupling relativity relationship verification method based on counterstudy
CN113920573B (en) * 2021-11-22 2022-05-13 河海大学 Face change decoupling relativity relationship verification method based on counterstudy

Similar Documents

Publication Publication Date Title
CN108229490B (en) Key point detection method, neural network training method, device and electronic equipment
CN109409435B (en) Depth perception significance detection method based on convolutional neural network
Nogueira et al. Exploiting convnet diversity for flooding identification
CN110353675B (en) Electroencephalogram signal emotion recognition method and device based on picture generation
CN108345818B (en) Face living body detection method and device
Chen et al. Detection evolution with multi-order contextual co-occurrence
CN111814574B (en) Face living body detection system, terminal and storage medium applying double-branch three-dimensional convolution model
CN108710847A (en) Scene recognition method, device and electronic equipment
CN112800894A (en) Dynamic expression recognition method and system based on attention mechanism between space and time streams
TW200842733A (en) Object image detection method
WO2013063765A1 (en) Object detection using extended surf features
CN110334588A (en) Kinship recognition methods and the device of network are paid attention to based on local feature
Xiao et al. Attention-based deep neural network for driver behavior recognition
CN115082698B (en) Distraction driving behavior detection method based on multi-scale attention module
CN106971161A (en) Face In vivo detection system based on color and singular value features
CN113283338A (en) Method, device and equipment for identifying driving behavior of driver and readable storage medium
Agarwal et al. Presentation attack detection system for fake Iris: a review
Shen et al. A competitive method to vipriors object detection challenge
CN107239827A (en) A kind of spatial information learning method based on artificial neural network
Rahim et al. Dynamic hand gesture based sign word recognition using convolutional neural network with feature fusion
CN110309832A (en) A kind of object classification method based on image, system and electronic equipment
US20100268301A1 (en) Image processing algorithm for cueing salient regions
She et al. Micro-expression recognition based on multiple aggregation networks
CN115984919A (en) Micro-expression recognition method and system
US20220207261A1 (en) Method and apparatus for detecting associated objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191015