CN112487904A - Video image processing method and system based on big data analysis - Google Patents

Video image processing method and system based on big data analysis

Info

Publication number
CN112487904A
Authority
CN
China
Prior art keywords
image
face
video image
analysis
emotion
Prior art date
Legal status
Pending
Application number
CN202011317899.1A
Other languages
Chinese (zh)
Inventor
杨洋
Current Assignee
Chengdu Jinzhi Zhiyuan Technology Co ltd
Original Assignee
Chengdu Jinzhi Zhiyuan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Jinzhi Zhiyuan Technology Co ltd filed Critical Chengdu Jinzhi Zhiyuan Technology Co ltd
Priority to CN202011317899.1A priority Critical patent/CN112487904A/en
Publication of CN112487904A publication Critical patent/CN112487904A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a video image processing method based on big data analysis, which comprises the following steps: establishing a face recognition database and a face micro-expression database; acquiring a video image to obtain a face image; preprocessing the acquired video image to eliminate background clutter in the video image; and performing emotion analysis on the emotional features in the preprocessed video image to complete recognition of the video image. The invention has the advantage that preprocessing the video image with edge extraction, histogram equalization, skin color segmentation and illumination compensation removes redundant, cluttered color from the image, so that the subsequent analysis and recognition of details such as facial expressions is more accurate and the recognition time is shortened.

Description

Video image processing method and system based on big data analysis
Technical Field
The invention relates to the technical field of big data analysis, in particular to a video image processing method and system based on big data analysis.
Background
Face recognition is a biometric technology that identifies a person from facial feature information. It covers a family of related techniques, commonly called face recognition or facial recognition, in which a camera collects an image or video stream containing a face, automatically detects and tracks the face in the image, and then recognizes the detected face.
Most existing face recognition systems only recognize coarse facial features, which is sufficient to identify a person in a video image or to decide whether that person matches a record in a database. In some scenarios, however, the fine expressions of faces in a video image must be recognized, which requires analyzing fine detail in the video image. Excessive background color in the video severely disturbs this detailed analysis, slowing video image recognition and even reducing overall recognition accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a video image processing method and system based on big data analysis that remedy the shortcomings of existing face recognition methods.
The purpose of the invention is achieved by the following technical solution: a video image processing method based on big data analysis, the method comprising:
establishing a face recognition database and a face micro-expression database;
acquiring a video image to obtain a face image;
preprocessing the acquired video image to eliminate background clutter in the video image;
and performing emotion analysis on the emotional features in the preprocessed video image to complete recognition of the video image.
Further, preprocessing the acquired video image to eliminate background clutter in the video image comprises the following steps (a sketch of this pipeline is given after this list):
capturing the contour of the face image in the acquired video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then binarizing it;
extracting edges from the binarized image, removing image regions with weak edges and background regions of flat variation, and equalizing the distribution of pixel values in the image;
and removing the binarization effect from the image after pixel value equalization and performing illumination compensation to overcome the interference of uneven brightness on the result.
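The patent does not name concrete operators or parameter values for these preprocessing steps. Purely as an illustration, the following sketch chains such a pipeline together using OpenCV, assuming YCrCb skin-color thresholds, Otsu binarization, Canny edge extraction and a blur-based illumination estimate as stand-ins for the unspecified details; every numeric value is illustrative rather than taken from the patent.

```python
import cv2
import numpy as np

def preprocess_frame(frame_bgr):
    """Rough analogue of the described preprocessing chain (all parameters illustrative)."""
    # 1. Skin-color segmentation in YCrCb to separate the face from the background.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # 2. Crop the largest skin-colored blob out of the background and binarize it.
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    face = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(face, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 3. Edge extraction; keep only pixels near strong edges, dropping flat background.
    edges = cv2.Canny(binary, 50, 150)
    near_edge = cv2.dilate(edges, np.ones((5, 5), np.uint8)) > 0
    cleaned = np.where(near_edge, face, 0).astype(np.uint8)

    # 4. Equalize the distribution of pixel values.
    equalized = cv2.equalizeHist(cleaned)

    # 5. Leave the binary mask behind and compensate uneven illumination by dividing
    #    out a heavily blurred estimate of the lighting field.
    lighting = cv2.GaussianBlur(equalized, (0, 0), sigmaX=25).astype(np.float32) + 1.0
    compensated = cv2.normalize(equalized / lighting, None, 0, 255, cv2.NORM_MINMAX)
    return compensated.astype(np.uint8)
```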
Further, equalizing the distribution of pixel values in the image specifically comprises the following (see the sketch after this list):
performing histogram equalization on the input image, transforming the input image to the frequency domain with a 2D FFT, and computing the correlation between the input image and an average face template with an optimal adaptive correlator (OAC);
dividing the filter output into three parts, a face region, a possible face region and a background region, according to a threshold, performing local gray-level equalization on the image under test in a 3 × 3 window, and finally outputting the background region with the OAC filter.
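The optimal adaptive correlator itself is not specified further in the patent. The sketch below, which assumes 8-bit grayscale inputs, substitutes a plain FFT-based matched filter against an average-face template for the OAC, thresholds its output into the three regions with illustrative values, and interprets the 3 × 3 local gray-level equalization as a per-pixel rank transform.

```python
import cv2
import numpy as np

def correlate_with_template(image_gray, template_gray):
    """Histogram-equalize the input, move to the frequency domain with a 2D FFT and
    correlate with an average-face template (a plain matched filter stands in for the OAC)."""
    img = cv2.equalizeHist(image_gray).astype(np.float32)
    tmpl = cv2.resize(template_gray, (img.shape[1], img.shape[0])).astype(np.float32)
    img -= img.mean()
    tmpl -= tmpl.mean()
    corr = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(tmpl))).real)
    return cv2.normalize(corr, None, 0.0, 1.0, cv2.NORM_MINMAX)

def split_regions(corr, t_face=0.8, t_maybe=0.5):
    """Threshold the correlator output into face / possible-face / background masks."""
    face = corr >= t_face
    maybe = (corr >= t_maybe) & ~face
    return face, maybe, ~(face | maybe)

def local_equalize_3x3(gray):
    """Local gray-level equalization in a 3 x 3 window: each pixel is replaced by its
    rank among its eight neighbours, scaled to [0, 255]."""
    padded = np.pad(gray.astype(np.int16), 1, mode="edge")
    h, w = gray.shape
    rank = np.zeros((h, w), dtype=np.float32)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            rank += (gray.astype(np.int16) > neighbour).astype(np.float32)
    return (rank / 8.0 * 255.0).astype(np.uint8)
```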
Further, performing emotion analysis on the emotional features in the preprocessed video image and completing recognition of the video image comprises (a sketch of the feature-point matching follows this list):
analyzing the color parameters of the preprocessed image, judging the extent of the face from the color variation of the image, adjusting the cropped picture according to the fed-back face position, pixelating the face-region image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data in the database;
analyzing and defining facial feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the feature point positions in real time and recording their displacement;
and invoking the micro-expression features and psychological behavior features in the database to analyze the character's emotion and emotional changes from the displacement of the feature points on the face model, and outputting the analysis result.
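Neither the feature-point set nor the contents of the micro-expression and psychological-behavior databases are disclosed in the patent. The sketch below assumes feature points have already been located in each frame and matches their displacement against a small, entirely hypothetical table of displacement signatures; the point indices, emotion labels and numbers are placeholders, not data from the patent.

```python
import numpy as np

# Hypothetical micro-expression "database": for each emotion label, the expected
# displacement (in normalized face coordinates) of four illustrative feature points
# (0: left mouth corner, 1: right mouth corner, 2: left inner brow, 3: right inner brow).
MICRO_EXPRESSION_DB = {
    "happy":    np.array([[-0.02, -0.03], [0.02, -0.03], [0.00,  0.00], [0.00,  0.00]]),
    "sad":      np.array([[-0.01,  0.02], [0.01,  0.02], [0.00,  0.01], [0.00,  0.01]]),
    "surprise": np.array([[ 0.00,  0.01], [0.00,  0.01], [0.00, -0.03], [0.00, -0.03]]),
    "neutral":  np.zeros((4, 2)),
}

def classify_emotion(reference_points, current_points):
    """Return the emotion whose displacement signature is closest to the observed
    feature-point displacement, together with the matching distance."""
    displacement = np.asarray(current_points, float) - np.asarray(reference_points, float)
    best_label, best_dist = None, float("inf")
    for label, signature in MICRO_EXPRESSION_DB.items():
        dist = float(np.linalg.norm(displacement - signature))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist

def track_emotion_changes(frames_of_points):
    """Analyze emotion changes over a sequence of per-frame 4 x 2 feature-point arrays,
    using the first frame as the neutral reference."""
    reference = frames_of_points[0]
    return [classify_emotion(reference, points) for points in frames_of_points[1:]]
```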
A video image processing system based on big data analysis comprises a database construction module, an image preprocessing module and a face emotion analysis module;
the database construction module is used for establishing a face recognition database and a face micro-expression database;
the image preprocessing module is used for preprocessing the acquired video image to eliminate background clutter in the video image;
the face emotion analysis module is used for performing emotion analysis on the emotional features in the preprocessed video image to complete recognition of the video image.
Furthermore, the image preprocessing module comprises a face contour cropping and processing unit, an edge extraction and equalization processing unit, and an illumination compensation unit;
the face contour cropping and processing unit is used for capturing the contour of the face image in the acquired video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then binarizing it;
the edge extraction and equalization processing unit is used for extracting edges from the binarized image and, after removing image regions with weak edges and background regions of flat variation, equalizing the distribution of pixel values in the image;
the illumination compensation unit is used for removing the binarization effect from the image after pixel value equalization and performing illumination compensation to overcome the interference of uneven brightness on the result.
Furthermore, the face emotion analysis module comprises a color analysis and pixel statistics unit, a feature point analysis and modeling unit, and an emotion analysis unit;
the color analysis and pixel statistics unit is used for analyzing the color parameters of the preprocessed image, judging the extent of the face from the color variation of the image, adjusting the cropped picture according to the fed-back face position, pixelating the face-region image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data in the database;
the feature point analysis and modeling unit is used for analyzing and defining facial feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the feature point positions in real time and recording their displacement;
the emotion analysis unit is used for invoking the micro-expression features and psychological behavior features in the database to analyze the character's emotion and emotional changes from the displacement of the feature points on the face model, and outputting the analysis result.
The invention has the following advantages: the video image processing method and system based on big data analysis preprocess the video image with edge extraction, histogram equalization, skin color segmentation and illumination compensation, removing redundant, cluttered color from the image, so that the subsequent analysis and recognition of details such as facial expressions is more accurate and the recognition time is shortened.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application. The invention is further described below with reference to the accompanying drawings.
As shown in FIG. 1, the present invention relates to a video image processing method based on big data analysis; the method comprises the following steps (an end-to-end sketch of steps S1 to S4 follows this list):
S1, establishing a face recognition database and a face micro-expression database;
S2, acquiring a video image to obtain a face image;
S3, preprocessing the acquired video image to eliminate background clutter in the video image;
S4, performing emotion analysis on the emotional features in the preprocessed video image, and completing recognition of the video image.
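Purely to show how steps S1 to S4 chain together, the sketch below wires the stages into a single loop over a video file. The three callables (database construction, preprocessing and emotion analysis) are assumed to be supplied by implementations such as the sketches above; their names and signatures are illustrative, not prescribed by the patent.

```python
import cv2

def run_pipeline(video_path, build_databases, preprocess, analyze_emotion):
    """End-to-end flow of steps S1-S4 over one video file."""
    face_db, micro_expression_db = build_databases()       # S1: build the two databases
    capture = cv2.VideoCapture(video_path)                  # S2: acquire video images
    results = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        cleaned = preprocess(frame)                         # S3: remove background clutter
        if cleaned is None:
            continue                                        # no face-like region in this frame
        results.append(analyze_emotion(cleaned, face_db, micro_expression_db))  # S4
    capture.release()
    return results
```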
Further, preprocessing the acquired video image to eliminate background clutter in the video image comprises:
capturing the contour of the face image in the acquired video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then binarizing it;
extracting edges from the binarized image, removing image regions with weak edges and background regions of flat variation, and equalizing the distribution of pixel values in the image;
and removing the binarization effect from the image after pixel value equalization and performing illumination compensation to overcome the interference of uneven brightness on the result.
Further, equalizing the distribution of pixel values in the image specifically comprises the following:
performing histogram equalization on the input image, transforming the input image to the frequency domain with a 2D FFT, and computing the correlation between the input image and an average face template with an optimal adaptive correlator (OAC);
dividing the filter output into three parts, a face region, a possible face region and a background region, according to a threshold, performing local gray-level equalization on the image under test in a 3 × 3 window, and finally outputting the background region with the OAC filter.
Further, performing emotion analysis on the emotional features in the preprocessed video image and completing recognition of the video image comprises:
analyzing the color parameters of the preprocessed image, judging the extent of the face from the color variation of the image, adjusting the cropped picture according to the fed-back face position, pixelating the face-region image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data in the database;
analyzing and defining facial feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the feature point positions in real time and recording their displacement;
and invoking the micro-expression features and psychological behavior features in the database to analyze the character's emotion and emotional changes from the displacement of the feature points on the face model, and outputting the analysis result.
The invention also relates to a video image processing system based on big data analysis, which comprises a database construction module, an image preprocessing module and a face emotion analysis module (a minimal structural sketch of these modules follows the unit descriptions below);
the database construction module is used for establishing a face recognition database and a face micro-expression database;
the image preprocessing module is used for preprocessing the acquired video image to eliminate background clutter in the video image;
the face emotion analysis module is used for performing emotion analysis on the emotional features in the preprocessed video image to complete recognition of the video image.
Furthermore, the image preprocessing module comprises a face contour cropping and processing unit, an edge extraction and equalization processing unit, and an illumination compensation unit;
the face contour cropping and processing unit is used for capturing the contour of the face image in the acquired video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then binarizing it;
the edge extraction and equalization processing unit is used for extracting edges from the binarized image and, after removing image regions with weak edges and background regions of flat variation, equalizing the distribution of pixel values in the image;
the illumination compensation unit is used for removing the binarization effect from the image after pixel value equalization and performing illumination compensation to overcome the interference of uneven brightness on the result.
Furthermore, the face emotion analysis module comprises a color analysis and pixel statistics unit, a feature point analysis and modeling unit, and an emotion analysis unit;
the color analysis and pixel statistics unit is used for analyzing the color parameters of the preprocessed image, judging the extent of the face from the color variation of the image, adjusting the cropped picture according to the fed-back face position, pixelating the face-region image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data in the database;
the feature point analysis and modeling unit is used for analyzing and defining facial feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the feature point positions in real time and recording their displacement;
the emotion analysis unit is used for invoking the micro-expression features and psychological behavior features in the database to analyze the character's emotion and emotional changes from the displacement of the feature points on the face model, and outputting the analysis result.
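As a structural illustration only, the sketch below mirrors the three modules and their units described above as plain Python classes wired together with callables; the class layout and method names are an assumed decomposition, not one given in the patent.

```python
class DatabaseConstructionModule:
    """Builds the face recognition and face micro-expression databases."""
    def build(self):
        # In a real system these would be populated from labelled face data.
        return {"faces": {}, "micro_expressions": {}}

class ImagePreprocessingModule:
    """Face contour cropping, edge extraction/equalization and illumination compensation units."""
    def __init__(self, crop_unit, equalize_unit, illumination_unit):
        self.units = (crop_unit, equalize_unit, illumination_unit)

    def process(self, frame):
        for unit in self.units:            # each unit is a callable image -> image
            frame = unit(frame)
        return frame

class FaceEmotionAnalysisModule:
    """Color/pixel-statistics, feature-point modelling and emotion analysis units."""
    def __init__(self, locate_face, model_feature_points, analyze_emotion):
        self.locate_face = locate_face
        self.model_feature_points = model_feature_points
        self.analyze_emotion = analyze_emotion

    def analyze(self, frame, databases):
        face_region = self.locate_face(frame, databases["faces"])
        displacements = self.model_feature_points(face_region, databases["micro_expressions"])
        return self.analyze_emotion(displacements, databases["micro_expressions"])
```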
The foregoing describes preferred embodiments of the invention. It should be understood that the invention is not limited to the precise form disclosed herein, and that various other combinations, modifications and environments may be resorted to within the scope of the inventive concept described above, whether through the teachings given here or through the skill or knowledge of the relevant art. Modifications and variations made by those skilled in the art without departing from the spirit and scope of the invention shall fall within the protection scope of the appended claims.

Claims (7)

1. A video image processing method based on big data analysis, characterized in that the method comprises the following steps:
establishing a face recognition database and a face micro-expression database;
acquiring a video image to obtain a face image;
preprocessing the acquired video image to eliminate background clutter in the video image;
and performing emotion analysis on the emotional features in the preprocessed video image to complete recognition of the video image.
2. The video image processing method based on big data analysis according to claim 1, characterized in that preprocessing the acquired video image to eliminate background clutter in the video image comprises:
capturing the contour of the face image in the acquired video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then binarizing it;
extracting edges from the binarized image, removing image regions with weak edges and background regions of flat variation, and equalizing the distribution of pixel values in the image;
and removing the binarization effect from the image after pixel value equalization and performing illumination compensation to overcome the interference of uneven brightness on the result.
3. The video image processing method based on big data analysis according to claim 2, characterized in that equalizing the distribution of pixel values in the image specifically comprises the following:
performing histogram equalization on the input image, transforming the input image to the frequency domain with a 2D FFT, and computing the correlation between the input image and an average face template with an optimal adaptive correlator (OAC);
dividing the filter output into three parts, a face region, a possible face region and a background region, according to a threshold, performing local gray-level equalization on the image under test in a 3 × 3 window, and finally outputting the background region with the OAC filter.
4. The video image processing method based on big data analysis according to claim 2, characterized in that performing emotion analysis on the emotional features in the preprocessed video image and completing recognition of the video image comprises:
analyzing the color parameters of the preprocessed image, judging the extent of the face from the color variation of the image, adjusting the cropped picture according to the fed-back face position, pixelating the face-region image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data in the database;
analyzing and defining facial feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the feature point positions in real time and recording their displacement;
and invoking the micro-expression features and psychological behavior features in the database to analyze the character's emotion and emotional changes from the displacement of the feature points on the face model, and outputting the analysis result.
5. A video image processing system based on big data analysis, characterized in that the system comprises a database construction module, an image preprocessing module and a face emotion analysis module;
the database construction module is used for establishing a face recognition database and a face micro-expression database;
the image preprocessing module is used for preprocessing the acquired video image to eliminate background clutter in the video image;
the face emotion analysis module is used for performing emotion analysis on the emotional features in the preprocessed video image to complete recognition of the video image.
6. The video image processing system based on big data analysis according to claim 5, characterized in that the image preprocessing module comprises a face contour cropping and processing unit, an edge extraction and equalization processing unit, and an illumination compensation unit;
the face contour cropping and processing unit is used for capturing the contour of the face image in the acquired video image, separating the skin color region from the background image, cropping the captured face image out of the video background, and then binarizing it;
the edge extraction and equalization processing unit is used for extracting edges from the binarized image and, after removing image regions with weak edges and background regions of flat variation, equalizing the distribution of pixel values in the image;
the illumination compensation unit is used for removing the binarization effect from the image after pixel value equalization and performing illumination compensation to overcome the interference of uneven brightness on the result.
7. The video image processing system based on big data analysis according to claim 5, characterized in that the face emotion analysis module comprises a color analysis and pixel statistics unit, a feature point analysis and modeling unit, and an emotion analysis unit;
the color analysis and pixel statistics unit is used for analyzing the color parameters of the preprocessed image, judging the extent of the face from the color variation of the image, adjusting the cropped picture according to the fed-back face position, pixelating the face-region image according to the face position information, and performing pixel-statistics face recognition on the image pixels in combination with the face feature data in the database;
the feature point analysis and modeling unit is used for analyzing and defining facial feature points from the recognized face in combination with the micro-expression features in the database, generating a face model, calibrating the feature points on the model, analyzing and adjusting the feature point positions in real time and recording their displacement;
the emotion analysis unit is used for invoking the micro-expression features and psychological behavior features in the database to analyze the character's emotion and emotional changes from the displacement of the feature points on the face model, and outputting the analysis result.
CN202011317899.1A 2020-11-23 2020-11-23 Video image processing method and system based on big data analysis Pending CN112487904A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011317899.1A CN112487904A (en) 2020-11-23 2020-11-23 Video image processing method and system based on big data analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011317899.1A CN112487904A (en) 2020-11-23 2020-11-23 Video image processing method and system based on big data analysis

Publications (1)

Publication Number Publication Date
CN112487904A true CN112487904A (en) 2021-03-12

Family

ID=74932718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011317899.1A Pending CN112487904A (en) 2020-11-23 2020-11-23 Video image processing method and system based on big data analysis

Country Status (1)

Country Link
CN (1) CN112487904A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102360421A (en) * 2011-10-19 2012-02-22 苏州大学 Face identification method and system based on video streaming
CN106886770A (en) * 2017-03-07 2017-06-23 佛山市融信通企业咨询服务有限公司 A kind of video communication sentiment analysis householder method
CN106909907A (en) * 2017-03-07 2017-06-30 佛山市融信通企业咨询服务有限公司 A kind of video communication sentiment analysis accessory system
CN106919924A (en) * 2017-03-07 2017-07-04 佛山市融信通企业咨询服务有限公司 A kind of mood analysis system based on the identification of people face
WO2019184125A1 (en) * 2018-03-30 2019-10-03 平安科技(深圳)有限公司 Micro-expression-based risk identification method and device, equipment and medium
CN109255319A (en) * 2018-09-02 2019-01-22 珠海横琴现联盛科技发展有限公司 For the recognition of face payment information method for anti-counterfeit of still photo
KR20200063292A (en) * 2018-11-16 2020-06-05 광운대학교 산학협력단 Emotional recognition system and method based on face images
CN111524080A (en) * 2020-04-22 2020-08-11 杭州夭灵夭智能科技有限公司 Face skin feature identification method, terminal and computer equipment
CN111611940A (en) * 2020-05-22 2020-09-01 西安佐尔电子技术有限公司 Rapid video face recognition method based on big data processing
CN111931671A (en) * 2020-08-17 2020-11-13 青岛北斗天地科技有限公司 Face recognition method for illumination compensation in underground coal mine adverse light environment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王金云; 周晖杰; 纪政: "Research on Face Recognition Technology in Complex Backgrounds", 计算机工程 (Computer Engineering), no. 08 *
董静; 王万森: "Research on Emotion Recognition in E-learning Systems", 计算机工程与设计 (Computer Engineering and Design), no. 17 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850247A (en) * 2021-12-01 2021-12-28 环球数科集团有限公司 Tourism video emotion analysis system fused with text information
CN113850247B (en) * 2021-12-01 2022-02-08 环球数科集团有限公司 Tourism video emotion analysis system fused with text information

Similar Documents

Publication Publication Date Title
US8055018B2 (en) Object image detection method
CN104077579B (en) Facial expression recognition method based on expert system
CN108596140A (en) A kind of mobile terminal face identification method and system
CN107798279B (en) Face living body detection method and device
EP0883080A3 (en) Method and apparatus for detecting eye location in an image
CN111967319B (en) Living body detection method, device, equipment and storage medium based on infrared and visible light
CN113076860B (en) Bird detection system under field scene
CN106446921A (en) High-voltage power transmission line barrier identification method and apparatus
CN110765838A (en) Real-time dynamic analysis method for facial feature region for emotional state monitoring
KR20080079798A (en) Method of face detection and recognition
CN112907206B (en) Business auditing method, device and equipment based on video object identification
CN112487904A (en) Video image processing method and system based on big data analysis
Qaisar et al. Scene to text conversion and pronunciation for visually impaired people
Dharavath et al. Impact of image preprocessing on face recognition: A comparative analysis
CN111310711A (en) Face image recognition method and system based on two-dimensional singular spectrum analysis and EMD fusion
CN111325118A (en) Method for identity authentication based on video and video equipment
CN110866470A (en) Face anti-counterfeiting detection method based on random image characteristics
CN106599765B (en) Method and system for judging living body based on video-audio frequency of object continuous pronunciation
CN113657315A (en) Method, device and equipment for screening quality of face image and storage medium
CN113435248A (en) Mask face recognition base enhancement method, device, equipment and readable storage medium
CN112801002A (en) Facial expression recognition method and device based on complex scene and electronic equipment
CN112418085A (en) Facial expression recognition method under partial shielding working condition
CN106778679B (en) Specific crowd video identification method based on big data machine learning
Suthar et al. A literature survey on facial expression recognition techniques using appearance based features
Bal et al. Plate Number Recognition Using Segmented Method With Artificial Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination