CN110472567A - Face recognition method and system for non-cooperative scenarios - Google Patents

Face recognition method and system for non-cooperative scenarios

Info

Publication number
CN110472567A
CN110472567A
Authority
CN
China
Prior art keywords
face
image
identification
video
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910748173.4A
Other languages
Chinese (zh)
Inventor
刘宗钰
方建勇
胡贤良
杨雅各
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuhui Excellent Health Information Technology Co Ltd
Original Assignee
Xuhui Excellent Health Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuhui Excellent Health Information Technology Co Ltd
Priority to CN201910748173.4A
Publication of CN110472567A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image recognition, and in particular provides a face recognition method and system for non-cooperative scenarios, comprising the following steps. Video stream parsing step: obtain the video stream captured by a camera, parse it into N video frames, and further decode each frame into an RGB picture. Image acquisition step: detect whether there is a face in each parsed image; if there is a face, compute the face coordinates and bounding-box position; if there is no face, or any one of the M key-point coordinates cannot be extracted, discard the current video frame. Side-face assessment step: based on the M face key-point coordinates obtained, the frames are effectively filtered according to the video stream content in a non-cooperative scenario, and the one of the N pictures with the best side-face quality is selected as the pre-identification object according to the side-face score. A high-definition identification image is then obtained through a super-resolution model, which effectively improves the recognition rate and reduces the amount of computation.

Description

Face recognition method and system for non-cooperative scenarios
Technical field
The invention belongs to the technical field of image recognition, and in particular relates to a face recognition method and system for non-cooperative scenarios.
Background technique
Face recognition is a biometric technology that performs identity verification based on a person's facial features: a computer analyzes a facial image and automatically extracts effective identifying information from it. Face recognition technology is widely used in security systems, human-computer interaction and other fields, and has become one of the important research topics in computer vision and pattern recognition.
The Chinese invention patent of publication number CN109657587A discloses a side-face quality assessment method and system for face recognition, but that system is only suitable for cooperative, acquisition-style face recognition. In a real-time video scenario, every parsed frame must be continuously analyzed and identified, which greatly wastes system resources — all the more so on ARM devices with low computing power. Unnecessary analysis and detection reduces server processing efficiency; what is required is to guarantee a very high recognition rate while processing fewer frames.
For face recognition technology, satisfactory recognition results can usually be obtained if the facial image is collected under a frontal pose and ideal lighting conditions. However, when the pose of the face or the illumination changes, the recognition rate drops significantly even with an excellent face recognition system. This is a major obstacle to the practical deployment of face recognition technology at present, and the present invention therefore proposes a face recognition method and system for non-cooperative scenarios.
Summary of the invention
To solve the problems raised in the background art, the present invention provides a face recognition method and system for non-cooperative scenarios. Several face key-point position coordinates are extracted, a side-face score is computed, a pre-identification object is selected according to the side-face assessment criterion, and super-resolution processing is applied to the pre-identification image via the SRGAN algorithm to obtain a high-definition identification image, thereby effectively improving the recognition rate and reducing the amount of computation.
To achieve the above object, the invention provides the following technical scheme: a face recognition method for non-cooperative scenarios, comprising the following steps:
S1, video stream parsing step: obtain the video stream captured by a camera, parse the video stream into N video frames, and further decode each video frame into an RGB picture;
S2, image acquisition step:
S21, detect whether there is a face in the image parsed from the video stream; if there is a face, compute the face coordinates and bounding-box position;
S22, if there is no face, or any one of the M key-point coordinates cannot be extracted, discard the current video frame;
S3, side-face assessment step: based on the M face key-point coordinates obtained, compute the side-face score S of the face in each video frame; S expresses the degree to which the face is turned sideways, and S is greater than or equal to 1. According to a preset rule, select the RGB picture of one of the video frames as the pre-identification image.
S4, super-resolution processing step: process the pre-identification image with a super-resolution model to obtain a high-definition identification image.
Preferably, the side-face assessment step includes:
S31, assessment point coordinate extraction step: extract, respectively, the left eye corner position coordinates, right eye corner position coordinates, nose tip position coordinates, left mouth corner position coordinates and right mouth corner position coordinates of the current face.
S32, side-face score calculation step:
2.1: compute the distance ds1 from the nose tip position to the line connecting the left eye corner and the left mouth corner;
2.2: compute the distance ds2 from the nose tip position to the left mouth corner position;
2.3: compute the distance ds3 from the nose tip position to the line connecting the right eye corner and the right mouth corner;
2.4: compute the distance ds4 from the nose tip position to the right mouth corner position;
2.5: the side-face score S is calculated according to the following formula:
S = (ds1 × ds2) / (ds3 × ds4)
wherein, if S < 1, S is replaced by 1/S, so that S ≥ 1 always holds.
Preferably, the super-resolution processing step includes:
S41, super-resolution model training: the SRGAN algorithm applies adversarial learning to high-resolution reconstruction from a single image. After the network is built, an existing high-definition image data set is processed to obtain a low-resolution image data set, and the two data sets together are used as the training set to train the network;
S42, identification image output: the pre-identification image is input into the trained super-resolution model to obtain the high-definition identification image.
A face recognition system for non-cooperative scenarios comprises a video acquisition module 1, an image parsing module 2, an image extraction module 3, a super-resolution processing module 4 and a face computing module 5. The video acquisition module 1 obtains the video stream captured by a camera; the image parsing module 2 parses the video stream into N video frames; the image extraction module 3 further decodes each video frame into an RGB picture; the super-resolution processing module 4 processes the pre-identification image to obtain a high-definition identification image; and the face computing module 5 performs computation and analysis on the face information.
Compared with the prior art, the beneficial effects of the present invention are:
In the present invention, under a non-cooperative scenario, the frames are effectively filtered according to the video stream content: the one of the N pictures with the best side-face quality is selected as the pre-identification object according to the side-face score, and a high-definition identification image is obtained through the super-resolution model, which effectively improves the recognition rate and reduces the amount of computation.
Detailed description of the invention
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention they serve to explain the present invention, and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flow diagram of the method of the present invention;
Fig. 2 is a structural diagram of the system of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, the present invention provides the following technical scheme: a face recognition method for non-cooperative scenarios, comprising the following steps:
S1, video stream parsing step: obtain the video stream captured by a camera, parse the video stream into N video frames, and further decode each video frame into an RGB picture;
S2, image acquisition step:
S21, detect whether there is a face in the image parsed from the video stream; if there is a face, compute the face coordinates and bounding-box position;
S22, if there is no face, or any one of the M key-point coordinates cannot be extracted, discard the current video frame;
S3, side-face assessment step: based on the M face key-point coordinates obtained, compute the side-face score S of the face in each video frame; S expresses the degree to which the face is turned sideways, and S is greater than or equal to 1. According to a preset rule, select the RGB picture of one of the video frames as the pre-identification image.
S4, super-resolution processing step: process the pre-identification image with a super-resolution model to obtain a high-definition identification image.
Specifically, the side-face assessment step includes:
S31, assessment point coordinate extraction step: extract, respectively, the left eye corner position coordinates, right eye corner position coordinates, nose tip position coordinates, left mouth corner position coordinates and right mouth corner position coordinates of the current face.
S32, side-face score calculation step:
2.1: compute the distance ds1 from the nose tip position to the line connecting the left eye corner and the left mouth corner;
2.2: compute the distance ds2 from the nose tip position to the left mouth corner position;
2.3: compute the distance ds3 from the nose tip position to the line connecting the right eye corner and the right mouth corner;
2.4: compute the distance ds4 from the nose tip position to the right mouth corner position;
2.5: the side-face score S is calculated according to the following formula:
S = (ds1 × ds2) / (ds3 × ds4)
wherein, if S < 1, S is replaced by 1/S, so that S ≥ 1 always holds.
The calculation process is illustrated below with one possible implementation of the calculation procedure in code.
1) First compute the distance ds1 from the nose tip (Point2) to the line connecting the left eye corner (Point1) and the left mouth corner (Point3):
ds1 = Point2LineDist(Point1, Point2, Point3)
Point2LineDist computes the distance from point Point2 to the straight line through Point1 and Point3:
d11 = Point3.y - Point1.y;
d12 = Point1.x - Point3.x;
d13 = Point3.x * Point1.y - Point1.x * Point3.y;
ds1 = abs(d11 * Point2.x + d12 * Point2.y + d13) / sqrt(d11 * d11 + d12 * d12);
2) Then compute the distance ds2 from the nose tip (Point2) to the left mouth corner (Point3):
ds2 = PointDist(Point2, Point3)
PointDist computes the Euclidean distance between point Point2 and point Point3:
ds2 = (Point2.x - Point3.x) * (Point2.x - Point3.x) + (Point2.y - Point3.y) * (Point2.y - Point3.y);
Considering resource usage, the square root of the Euclidean distance need not be taken, since only ratios of these quantities are compared.
3) Similarly compute the distance ds3 from the nose tip (Point2) to the line connecting the right eye corner (Point4) and the right mouth corner (Point5):
ds3 = Point2LineDist(Point4, Point2, Point5)
using the same calculation method as in 1).
4) Then compute the distance ds4 from the nose tip (Point2) to the right mouth corner (Point5):
ds4 = PointDist(Point2, Point5)
using the same calculation method as in 2).
5) Finally compute the side-face score:
profile_score = (ds1 * ds2) / (ds3 * ds4);
if profile_score < 1.0: profile_score = 1 / profile_score;
The obtained profile_score value is compared with the previously set side-face threshold T: frames whose score exceeds T are discarded directly, and frames that meet the condition are retained.
Among the N frames, the one with the smallest side-face score is selected for face feature extraction and face feature comparison, and the other frames are discarded. Selecting a single frame out of N for identification effectively saves system resources.
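The scoring and frame-selection procedure above can be sketched in Python as follows. This is a minimal illustration under the stated formulas, not the patent's reference implementation; the point arguments and the threshold value T are placeholders supplied by the caller.

```python
import math

def point_to_line_dist(p1, p2, p3):
    """Distance from p2 to the straight line through p1 and p3 (ds1/ds3)."""
    d1 = p3[1] - p1[1]
    d2 = p1[0] - p3[0]
    d3 = p3[0] * p1[1] - p1[0] * p3[1]
    return abs(d1 * p2[0] + d2 * p2[1] + d3) / math.sqrt(d1 * d1 + d2 * d2)

def point_dist_sq(a, b):
    """Squared Euclidean distance; the square root is skipped to save computation."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def profile_score(left_eye, nose, left_mouth, right_eye, right_mouth):
    """Side-face score S = (ds1*ds2)/(ds3*ds4), normalised so that S >= 1."""
    ds1 = point_to_line_dist(left_eye, nose, left_mouth)
    ds2 = point_dist_sq(nose, left_mouth)
    ds3 = point_to_line_dist(right_eye, nose, right_mouth)
    ds4 = point_dist_sq(nose, right_mouth)
    s = (ds1 * ds2) / (ds3 * ds4)
    return 1.0 / s if s < 1.0 else s

def select_frame(scored_frames, threshold):
    """Discard frames whose score exceeds T; pick the smallest-score survivor."""
    kept = [(s, f) for s, f in scored_frames if s <= threshold]
    return min(kept)[1] if kept else None
```

For a perfectly symmetric set of key points the score evaluates to 1.0 (a frontal face); the more the face turns sideways, the larger the left/right asymmetry and hence the score.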
Specifically, the super-resolution model training step includes:
The SRGAN algorithm applies adversarial learning to high-resolution reconstruction from a single image. After the network is built, an existing high-definition image data set is processed to obtain a low-resolution image data set, and the two data sets together are used as the training set to train the network.
Step 4.1: collect data. If the amount of collected training data is small, transfer learning with an existing model may be considered, or large public data sets such as DIV2K and Yahoo MirFlickr25k may be merged in.
Step 4.2: the size ratio of the low-resolution image to the high-resolution image is 1:4. In practice, ready-made low-resolution images are not necessarily needed: a low-resolution image can be obtained directly by compressing a high-resolution one. The original high-resolution images and the down-sampled low-resolution images constitute the target data set.
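Step 4.2 can be illustrated with a short NumPy sketch that builds the 1:4 low-resolution counterpart of a high-resolution image by 4×4 block averaging. The averaging filter is one possible down-sampling choice assumed here; the patent does not prescribe a specific filter.

```python
import numpy as np

def downsample_4x(hr):
    """Produce a 1:4 low-resolution image by averaging 4x4 pixel blocks.

    hr: float array of shape (H, W, C) with H and W divisible by 4.
    """
    h, w, c = hr.shape
    return hr.reshape(h // 4, 4, w // 4, 4, c).mean(axis=(1, 3))

def make_training_pairs(hr_images):
    """Pair each high-resolution image with its down-sampled version."""
    return [(downsample_4x(hr), hr) for hr in hr_images]
```

Each (low-resolution, high-resolution) pair then serves as one training example for the super-resolution network.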
Step 4.3: train the high-frequency model: input the training set and, using the stochastic gradient descent algorithm, perform 10000 training iterations to obtain the trained high-frequency model. The steps of the gradient descent method are as follows:
Step 1: choose any value in the range [20000, 25000] as the number of iterations of the detection deep-learning network and the identification deep-learning network; the learning rate is set to 0.001.
Step 2: randomly select 32 samples (the mini-batch size, which can be set as desired) from the low-resolution image training set.
Step 3: input the 32 randomly selected samples into the pre-trained 19-layer VGG network to obtain the feature maps of the generated images.
Step 4: compute the image perceptual similarity using the following generation (content) loss function:
l_VGG/i,j^SR = 1/(W_i,j · H_i,j) · Σ_{x=1..W_i,j} Σ_{y=1..H_i,j} ( φ_i,j(I^HR)_{x,y} − φ_i,j(G_θG(I^LR))_{x,y} )²
wherein W_i,j and H_i,j are the dimensions of the feature map; φ_i,j is the feature map output obtained from the j-th convolutional layer before the i-th pooling layer; I^HR and I^LR are the high-resolution and low-resolution images respectively.
Step 5: compute the probability that the generated images successfully "deceive" the discriminator using the following adversarial loss function:
l_Gen^SR = Σ_{n=1..N} − log D_θD( G_θG(I^LR) )
Step 6: the updated values of the deep-learning parameters are calculated according to the following formula:
θ_G ← θ_G − η · ∇_θG l^SR
wherein G_θG is the generator network with parameters θ_G, η is the learning rate, and the perceptual loss function l^SR is defined as:
l^SR = l_VGG^SR + 10⁻³ · l_Gen^SR
Step 7: judge whether the set number of iterations has been reached; if so, the trained super-resolution network is obtained; otherwise, return to Step 2.
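The mini-batch iteration described in the steps above can be sketched as a generic training loop. This is a schematic only: the real generator, discriminator, VGG feature extractor and loss terms are replaced by a placeholder step function supplied by the caller.

```python
import random

def train_loop(train_set, num_iters, batch_size, step_fn):
    """Generic mini-batch training skeleton: sample a batch, run one update.

    step_fn(batch) stands in for the forward pass, perceptual-loss
    computation, and parameter update of the actual SRGAN generator.
    """
    history = []
    for _ in range(num_iters):
        batch = random.sample(train_set, min(batch_size, len(train_set)))
        history.append(step_fn(batch))
    return history

# Usage with a dummy step function standing in for the network update:
losses = train_loop(list(range(100)), num_iters=10, batch_size=32,
                    step_fn=lambda b: sum(b) / len(b))
```

The loop stops after the configured number of iterations, mirroring the termination check in the final step above.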
Referring to Fig. 2, the present invention provides the following technical scheme: a face recognition system for non-cooperative scenarios, comprising a video acquisition module, an image parsing module, an image extraction module, a super-resolution processing module and a face computing module. The video acquisition module obtains the video stream captured by a camera; the image parsing module parses the video stream into N video frames; the image extraction module further decodes each video frame into an RGB picture; the super-resolution processing module processes the pre-identification image to obtain a high-definition identification image; and the face computing module performs computation and analysis on the face information.
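The five modules described above can be sketched as a simple pipeline of callables. The stubs below are assumptions for illustration only; real face detection, key-point extraction, frame selection and super-resolution models would be plugged in by the implementer.

```python
class FaceRecognitionPipeline:
    """Wires the five system modules into one call chain."""

    def __init__(self, get_video, parse_frames, decode_rgb, super_resolve, recognize):
        self.get_video = get_video          # video acquisition module
        self.parse_frames = parse_frames    # image parsing module
        self.decode_rgb = decode_rgb        # image extraction module
        self.super_resolve = super_resolve  # super-resolution processing module
        self.recognize = recognize          # face computing module

    def run(self, source):
        stream = self.get_video(source)
        frames = self.parse_frames(stream)
        pictures = [self.decode_rgb(f) for f in frames]
        # Frame selection by side-face score would happen here; this
        # sketch simply takes the first decoded picture.
        pre_identification = pictures[0]
        return self.recognize(self.super_resolve(pre_identification))
```

Keeping each module behind a callable interface lets the super-resolution model or the face detector be swapped without touching the rest of the chain.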
Finally, it should be noted that the foregoing are only preferred embodiments of the present invention and are not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or replace some of the technical features with equivalents. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within its protection scope.

Claims (4)

1. A face recognition method for non-cooperative scenarios, characterized by comprising the following steps:
S1, video stream parsing step: obtain the video stream captured by a camera, parse the video stream into N video frames, and further decode each video frame into an RGB picture;
S2, image acquisition step:
S21, detect whether there is a face in the image parsed from the video stream; if there is a face, compute the face coordinates and bounding-box position;
S22, if there is no face, or any one of the M key-point coordinates cannot be extracted, discard the current video frame;
S3, side-face assessment step: based on the M face key-point coordinates obtained, compute the side-face score S of the face in each video frame; S expresses the degree to which the face is turned sideways, and S is greater than or equal to 1; according to a preset rule, select the RGB picture of one of the video frames as the pre-identification image;
S4, super-resolution processing step: process the pre-identification image with a super-resolution model to obtain a high-definition identification image.
2. The face recognition method for non-cooperative scenarios according to claim 1, characterized in that the side-face assessment step includes:
S31, assessment point coordinate extraction step: extract, respectively, the left eye corner position coordinates, right eye corner position coordinates, nose tip position coordinates, left mouth corner position coordinates and right mouth corner position coordinates of the current face;
S32, side-face score calculation step:
2.1: compute the distance ds1 from the nose tip position to the line connecting the left eye corner and the left mouth corner;
2.2: compute the distance ds2 from the nose tip position to the left mouth corner position;
2.3: compute the distance ds3 from the nose tip position to the line connecting the right eye corner and the right mouth corner;
2.4: compute the distance ds4 from the nose tip position to the right mouth corner position;
2.5: the side-face score S is calculated according to the following formula:
S = (ds1 × ds2) / (ds3 × ds4)
wherein, if S < 1, S is replaced by 1/S, so that S ≥ 1 always holds.
3. The face recognition method for non-cooperative scenarios according to claim 1, characterized in that the super-resolution processing step includes:
S41, super-resolution model training: the SRGAN algorithm applies adversarial learning to high-resolution reconstruction from a single image; after the network is built, an existing high-definition image data set is processed to obtain a low-resolution image data set, and the two data sets together are used as the training set to train the network;
S42, identification image output: the pre-identification image is input into the trained super-resolution model to obtain the high-definition identification image.
4. A face recognition system for non-cooperative scenarios, characterized by comprising a video acquisition module 1, an image parsing module 2, an image extraction module 3, a super-resolution processing module 4 and a face computing module 5; the video acquisition module 1 obtains the video stream captured by a camera; the image parsing module 2 parses the video stream into N video frames; the image extraction module 3 further decodes each video frame into an RGB picture; the super-resolution processing module 4 processes the pre-identification image to obtain a high-definition identification image; and the face computing module 5 performs computation and analysis on the face information.
CN201910748173.4A 2019-08-14 2019-08-14 A kind of face identification method and system suitable under non-cooperation scene Pending CN110472567A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910748173.4A CN110472567A (en) 2019-08-14 2019-08-14 A kind of face identification method and system suitable under non-cooperation scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910748173.4A CN110472567A (en) 2019-08-14 2019-08-14 A kind of face identification method and system suitable under non-cooperation scene

Publications (1)

Publication Number Publication Date
CN110472567A true CN110472567A (en) 2019-11-19

Family

ID=68510797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910748173.4A Pending CN110472567A (en) 2019-08-14 2019-08-14 A kind of face identification method and system suitable under non-cooperation scene

Country Status (1)

Country Link
CN (1) CN110472567A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870811A (en) * 2014-03-06 2014-06-18 中国人民解放军国防科学技术大学 Method for quickly recognizing front face through video monitoring
CN109034013A (en) * 2018-07-10 2018-12-18 腾讯科技(深圳)有限公司 A kind of facial image recognition method, device and storage medium
CN109284738A (en) * 2018-10-25 2019-01-29 上海交通大学 Irregular face antidote and system
CN109657587A (en) * 2018-12-10 2019-04-19 南京甄视智能科技有限公司 Side face method for evaluating quality and system for recognition of face


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero et al.: "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network", arXiv:1609.04802 [cs.CV] *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991310A (en) * 2019-11-27 2020-04-10 北京金山云网络技术有限公司 Portrait detection method, portrait detection device, electronic equipment and computer readable medium
CN110991310B (en) * 2019-11-27 2023-08-22 北京金山云网络技术有限公司 Portrait detection method, device, electronic equipment and computer readable medium
CN112926464A (en) * 2021-03-01 2021-06-08 创新奇智(重庆)科技有限公司 Face living body detection method and device
CN112926464B (en) * 2021-03-01 2023-08-29 创新奇智(重庆)科技有限公司 Face living body detection method and device
CN113703977A (en) * 2021-08-30 2021-11-26 广东宏乾科技股份有限公司 Intelligent human face and human body detection and filtration device and picture output device
CN113703977B (en) * 2021-08-30 2024-04-05 广东宏乾科技股份有限公司 Intelligent face and human body detection and filtration device and picture output device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191119)