CN113869160A - Comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture and expression recognition - Google Patents


Info

Publication number
CN113869160A
CN113869160A
Authority
CN
China
Prior art keywords
person, expression, heart rate, personnel, recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111094118.1A
Other languages
Chinese (zh)
Inventor
杨钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Terminus Technology Group Co Ltd
Original Assignee
Terminus Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Terminus Technology Group Co Ltd filed Critical Terminus Technology Group Co Ltd
Priority to CN202111094118.1A priority Critical patent/CN113869160A/en
Publication of CN113869160A publication Critical patent/CN113869160A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture and expression recognition. The method comprises the following steps: arranging a plurality of cameras at different positions of a park runway, and shooting face images and whole-body images of target personnel in real time to obtain monitoring image data; arranging a millimeter wave radar near each camera, wherein the millimeter wave radar can transmit and receive detection signals, and is used for sending a plurality of detection signals to the target person and correspondingly receiving a plurality of reflected signals; extracting the age, gender, expression type, running posture and heart rate graph of the person, and inputting them into a trained convolutional neural network, thereby classifying the body comfort type of the target person; and sending corresponding information to the family members or park managers of the person according to the body comfort type. The application can accurately and promptly monitor the state of people, such as the elderly, exercising in the park, and provide timely notification or early warning to park managers and relatives when an emergency such as physical discomfort occurs.

Description

Comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture and expression recognition
Technical Field
The application relates to the technical field of smart cities and smart parks, and in particular to a comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture recognition and expression recognition.
Background
Nowadays, parks have long been a major place for the elderly to exercise, run and relax. Yet physical problems are unavoidable: owing to their physical condition, some elderly people may feel unwell while exercising or running in the park, and in severe cases untimely discovery or treatment can lead to serious consequences.
Therefore, while building smart parks, such problems urgently need to be discovered in time, and timely early warning of special conditions needs to be provided to park managers and the elderly.
Disclosure of Invention
In view of the above, an object of the present application is to provide a comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture recognition and expression recognition, which can discover the above problems in time and provide timely early warning of special conditions to park managers and the elderly.
Based on the above purpose, the application provides a comprehensive method for face recognition, non-sensory heart rate recognition, running posture recognition and expression recognition, comprising the following steps:
arranging a plurality of cameras at different positions of a park runway, and shooting face images and whole-body images of target personnel in real time to obtain monitoring image data;
arranging a millimeter wave radar near each camera, wherein the millimeter wave radar can receive and transmit detection signals; the millimeter wave radar is used for sending a plurality of detection signals to target personnel and correspondingly receiving a plurality of reflection signals;
analyzing the face image, and predicting the age and the gender of the person through a face recognition algorithm;
analyzing the facial image, and judging the expression type of the person through an expression recognition algorithm;
analyzing the whole body image, and obtaining the running posture of the person through a skeleton algorithm;
processing the reflected signals to obtain a micro-motion signal of the target person; screening out a heartbeat signal from the micro-motion signal, and drawing a heart rate graph;
extracting the age, gender, expression type, running posture and heart rate curve chart of the person, and inputting the curve chart into a trained convolutional neural network, thereby classifying the body comfort level type of the target person;
and sending corresponding information to family personnel or park management personnel corresponding to the personnel according to the body comfort level type.
In some embodiments, said parsing said facial image to predict age, gender of said person by a face recognition algorithm comprises:
The training stage comprises the following steps: step 1, performing data preprocessing on the face image data set and generating corresponding labels; step 2, performing enhancement operations on the preprocessed face image data set, the enhancement operations comprising rotation, scaling, random cropping, and luminance and chrominance conversion; step 3, dividing the enhanced data set into training, validation and test sets; step 4, constructing a network structure, and importing the training set, the validation set and their corresponding labels for training. The testing stage comprises the following steps: step 5, performing data preprocessing on the face image data set; step 6, inputting the preprocessed face image data set into the network structure constructed in step 4, and loading the model parameters corresponding to the network structure for forward propagation; step 7, taking out the output result of the network structure, and obtaining a prediction label according to the label generation rule; and step 8, converting the prediction label according to the meaning of each type of label to obtain the final prediction result.
In some embodiments, the analyzing the facial image and determining the expression type of the person through an expression recognition algorithm includes:
step one, acquiring a face image of the identified person, recording the acquisition time, processing the image of the identified person with a face recognition algorithm, and outputting a face recognition result; step two, inputting the face recognition result into a pre-trained deep neural network and facial feature point model for processing to obtain an expression recognition result, the expression recognition result comprising an expression type and its predicted value; step three, taking the expression recognition result and the corresponding acquisition time as expression data, and sequentially recording the expression data into an expression database; and step four, acquiring a plurality of records from the expression database and analyzing them to obtain the expression recognition result of the identified person.
In some embodiments, said analyzing said whole-body image to predict a running posture of said person by a skeletal algorithm comprises:
identifying human skeletal key points in the whole-body image;
constructing an integral skeleton portrait of the human body according to the human skeleton key points;
and calculating the main bone size and shape of the person according to the whole bone portrait, and matching the main bone size and shape with data in a running posture database to obtain the running posture of the person.
In some embodiments, the processing according to the reflected signal to obtain the micro-motion signal of the target person includes:
mixing each reflection signal with a detection signal corresponding to each reflection signal to obtain a plurality of intermediate frequency signals to form an original data matrix;
carrying out Fourier transform on the original data matrix to obtain a distance matrix;
obtaining subscripts of the target personnel in the distance matrix;
acquiring an original phase signal of the target person according to the subscript of the target person in the distance matrix;
and acquiring a micro-motion signal of the target person according to the original phase signal.
In some embodiments, screening out the heartbeat signal from the micro-motion signal and drawing the heart rate graph comprises:
using a PE-based MEEMD filter to screen the micro-motion signal to obtain a heartbeat signal;
obtaining a heart rate estimation value through a peak detection algorithm;
and drawing a heart rate curve graph according to the heart rate estimation value.
In some embodiments, extracting the age, gender, expression type, running posture and heart rate graph of the person and inputting them into a trained convolutional neural network to classify the body comfort type of the target person comprises:
leading the characteristic flows of the age, sex, expression type, running posture and heart rate curve graphs of a large number of known people into a convolutional neural network to obtain the body comfort type of each person; taking a feature vector formed by the age, the sex, the expression type, the running posture, the feature flow of the heart rate curve graph and the body comfort level type of the known person as a training sample, and constructing a training sample set;
training an AKC model consisting of an automatic encoder model based on a fully-connected neural network and a K-means model by using a training sample set;
and inputting the real-time characteristic stream of the age, the gender, the expression type, the running posture and the heart rate curve chart of the target person to be classified into the trained AKC model to obtain the body comfort type of the target person.
Based on the above purpose, the application further provides an integrated system for face recognition, non-sensory heart rate recognition, running posture and expression recognition, comprising:
a monitoring module, configured to arrange a plurality of cameras at different positions of a park runway, shoot face images and whole-body images of target personnel in real time, and obtain monitoring image data;
a radar module, configured to arrange a millimeter wave radar near each camera, wherein the millimeter wave radar can transmit and receive detection signals; the millimeter wave radar is used for sending a plurality of detection signals to the target person and correspondingly receiving a plurality of reflected signals;
the face recognition module is used for analyzing the face image and predicting the age and the gender of the person through a face recognition algorithm;
the expression recognition module is used for analyzing the facial image and judging the expression type of the person through an expression recognition algorithm;
the running posture recognition module is used for analyzing the whole body image and obtaining the running posture of the person through a skeleton algorithm;
the heart rate detection module is used for processing the reflected signals to obtain a micro-motion signal of the target person, screening out a heartbeat signal from the micro-motion signal, and drawing a heart rate graph;
the comfort classification module is used for extracting the age, the sex, the expression type, the running posture and the heart rate curve chart of the personnel and inputting the curve chart into a trained convolutional neural network so as to classify the body comfort type of the target personnel;
and the communication module is used for sending corresponding information to family personnel or park management personnel corresponding to the personnel according to the body comfort level type.
In general, the advantage of the present application, and the experience it brings to the user, is that it can accurately and promptly monitor the state of people, such as the elderly, exercising in the park, and provide timely notification or early warning to park managers and relatives when an emergency such as physical discomfort occurs.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 shows a schematic diagram of the system architecture of the present invention.
Fig. 2 shows a flow chart of a comprehensive method of face recognition, non-sensory heart rate recognition, running gesture recognition and expression recognition according to an embodiment of the invention.
Fig. 3 is a block diagram showing an integrated system of face recognition, non-sensory heart rate recognition, running posture, and expression recognition according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
fig. 5 is a schematic diagram of a storage medium provided in an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows a schematic diagram of the system architecture of the present invention. In the embodiment of the present invention, the system includes millimeter wave radars, a plurality of cameras, a cloud controller (not shown), a sound playing apparatus (not shown), and the like. Millimeter wave radars and cameras are arranged near the park runway; the cameras monitor the face and whole-body images of people in real time to obtain monitoring perception data, and information such as the age, gender, expression and running posture of a runner is obtained through recognition algorithms. The millimeter wave radar transmits signals and receives the reflected signals, from which a heart rate curve is obtained through micro-motion signal detection. This information is uniformly sent to the cloud controller, where the body comfort judgment is carried out. The sound playing apparatus can also be controlled to perform a corresponding voice broadcast according to the runner's body comfort condition.
Fig. 2 shows a flow chart of a comprehensive method of face recognition, non-sensory heart rate recognition, running gesture recognition and expression recognition according to an embodiment of the invention. As shown in fig. 2, the comprehensive method for face recognition, non-sensory heart rate recognition, running posture and expression recognition comprises the following steps:
step 101: arranging a plurality of cameras at different positions of a park runway, and shooting face images and whole-body images of target personnel in real time to obtain monitoring image data;
step 102: arranging a millimeter wave radar near each camera, wherein the millimeter wave radar can receive and transmit detection signals; the millimeter wave radar is used for sending a plurality of detection signals to target personnel and correspondingly receiving a plurality of reflection signals;
step 103: analyzing the face image, and predicting the age and the gender of the person through a face recognition algorithm, wherein the method comprises the following steps:
The training stage comprises the following steps: step 1, performing data preprocessing on the face image data set and generating corresponding labels; step 2, performing enhancement operations on the preprocessed face image data set, the enhancement operations comprising rotation, scaling, random cropping, and luminance and chrominance conversion; step 3, dividing the enhanced data set into training, validation and test sets; step 4, constructing a network structure, and importing the training set, the validation set and their corresponding labels for training. The testing stage comprises the following steps: step 5, performing data preprocessing on the face image data set; step 6, inputting the preprocessed face image data set into the network structure constructed in step 4, and loading the model parameters corresponding to the network structure for forward propagation; step 7, taking out the output result of the network structure, and obtaining a prediction label according to the label generation rule; and step 8, converting the prediction label according to the meaning of each type of label to obtain the final prediction result.
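The data preparation of steps 2–3 can be sketched as follows. This is a minimal illustration, not the patented implementation: the 70/15/15 split ratios, the `split_dataset` helper and the flip-based enhancement are assumptions made for the example.

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=0):
    """Step 3 (sketch): divide an enhanced (image, label) dataset into train/val/test."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

def augment_flip(image):
    """One enhancement operation of step 2 (sketch): horizontal flip of a row-major image."""
    return [row[::-1] for row in image]

# Example: 100 dummy (image, label) samples split 70/15/15.
samples = [([[i]], i % 2) for i in range(100)]
train, val, test = split_dataset(samples)
```

In a real pipeline the remaining enhancement operations (rotation, scaling, random cropping, luminance/chrominance conversion) would be applied before the split in the same fashion.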
Step 104: analyzing the facial image, and judging the expression type of the person through an expression recognition algorithm, wherein the method comprises the following steps:
step one, acquiring a face image of the identified person, recording the acquisition time, processing the image of the identified person with a face recognition algorithm, and outputting a face recognition result; step two, inputting the face recognition result into a pre-trained deep neural network and facial feature point model for processing to obtain an expression recognition result, the expression recognition result comprising an expression type and its predicted value; step three, taking the expression recognition result and the corresponding acquisition time as expression data, and sequentially recording the expression data into an expression database; and step four, acquiring a plurality of records from the expression database and analyzing them to obtain the expression recognition result of the identified person.
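Steps three and four can be sketched with an in-memory stand-in for the expression database; the record layout and the majority-vote analysis are assumptions for the sketch, not the patent's concrete scheme.

```python
from collections import Counter

# Minimal stand-in for the expression database (step three).
expression_db = []

def record_expression(expr_type, score, timestamp):
    """Step three (sketch): store one recognition result with its acquisition time."""
    expression_db.append({"type": expr_type, "score": score, "time": timestamp})

def analyze_recent(n=5):
    """Step four (sketch): majority vote over the n most recent records."""
    recent = expression_db[-n:]
    counts = Counter(r["type"] for r in recent)
    return counts.most_common(1)[0][0]
```

Aggregating over several consecutive frames in this way smooths out single-frame misclassifications before the result is passed downstream.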
Step 105: analyzing the whole-body image, and obtaining the running posture of the person through a skeletal algorithm, comprising:
identifying human skeletal key points in the whole-body image;
constructing an integral skeleton portrait of the human body according to the human skeleton key points;
and calculating the main bone size and shape of the person according to the whole bone portrait, and matching the main bone size and shape with data in a running posture database to obtain the running posture of the person.
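The bone-size calculation and database matching above can be sketched as nearest-neighbour matching on limb-length features; the key-point names, the chosen limb set and the Euclidean matching rule are assumptions made for this illustration.

```python
import math

def limb_lengths(keypoints):
    """Derive main bone sizes from 2-D skeletal key points (name -> (x, y))."""
    limbs = [("hip", "knee"), ("knee", "ankle"), ("shoulder", "hip")]
    return [math.dist(keypoints[a], keypoints[b]) for a, b in limbs]

def match_posture(keypoints, posture_db):
    """Match the bone-size feature against entries of a running-posture database."""
    feature = limb_lengths(keypoints)
    best = min(posture_db, key=lambda entry: math.dist(feature, entry["features"]))
    return best["label"]
```

A production system would normalize the limb lengths by body height so that the match is scale-invariant across persons.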
Step 106: processing the reflected signals to obtain a micro-motion signal of the target person; screening out a heartbeat signal from the micro-motion signal, and drawing a heart rate graph;
the processing according to the reflected signal to obtain the micro-motion signal of the target person comprises the following steps:
mixing each reflection signal with a detection signal corresponding to each reflection signal to obtain a plurality of intermediate frequency signals to form an original data matrix;
carrying out Fourier transform on the original data matrix to obtain a distance matrix;
obtaining subscripts of the target personnel in the distance matrix;
acquiring an original phase signal of the target person according to the subscript of the target person in the distance matrix;
and acquiring a micro-motion signal of the target person according to the original phase signal.
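The range-matrix and phase-extraction steps above can be sketched with NumPy. This is a simplified single-target sketch: the chirp/sample layout of the intermediate-frequency matrix and the strongest-range-bin rule for finding the target's subscript are assumptions, not the patent's exact processing chain.

```python
import numpy as np

def micro_motion(if_matrix):
    """Recover a micro-motion (phase) signal from an IF data matrix.

    if_matrix: complex array of shape (num_chirps, samples_per_chirp),
    each row one mixed intermediate-frequency chirp.
    """
    range_fft = np.fft.fft(if_matrix, axis=1)                 # distance (range) matrix
    bin_idx = np.argmax(np.mean(np.abs(range_fft), axis=0))   # target's subscript: strongest bin
    raw_phase = np.angle(range_fft[:, bin_idx])               # original phase signal per chirp
    return np.unwrap(raw_phase)                               # micro-motion signal
```

Phase unwrapping is the key step: chest-wall displacement of a fraction of a wavelength appears as a slowly varying phase at the target's range bin.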
Screening out the heartbeat signal from the micro-motion signal and drawing the heart rate graph comprises:
using a PE-based MEEMD filter to screen the micro-motion signal to obtain a heartbeat signal;
obtaining a heart rate estimation value through a peak detection algorithm;
and drawing a heart rate curve graph according to the heart rate estimation value.
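The peak-detection step can be sketched as below. The PE-based MEEMD filtering is omitted; a simple local-maximum detector with a refractory gap stands in for the patent's peak detection algorithm, and the gap parameter is an assumption for the example.

```python
import numpy as np

def detect_peaks(signal, min_gap):
    """Indices of local maxima at least min_gap samples apart (sketch)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

def heart_rate_bpm(peaks, fs):
    """Instantaneous heart-rate estimates (beats per minute) from peak indices."""
    intervals = np.diff(peaks) / fs   # inter-beat intervals in seconds
    return 60.0 / intervals
```

The resulting per-beat estimates, plotted against time, form the heart rate graph that feeds the comfort classifier.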
Step 107: extracting the age, gender, expression type, running posture and heart rate graph of the person, and inputting them into a trained convolutional neural network to classify the body comfort type of the target person, comprising:
leading the characteristic flows of the age, sex, expression type, running posture and heart rate curve graphs of a large number of known people into a convolutional neural network to obtain the body comfort type of each person; taking a feature vector formed by the age, the sex, the expression type, the running posture, the feature flow of the heart rate curve graph and the body comfort level type of the known person as a training sample, and constructing a training sample set;
training an AKC model consisting of an automatic encoder model based on a fully-connected neural network and a K-means model by using a training sample set;
and inputting the real-time characteristic stream of the age, the gender, the expression type, the running posture and the heart rate curve chart of the target person to be classified into the trained AKC model to obtain the body comfort type of the target person.
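A toy version of the clustering half of the AKC model is sketched below. The autoencoder is deliberately omitted (the raw feature vector is clustered directly), so this shows only the K-means assignment of comfort types; the two-dimensional toy features and all names are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means, standing in for the K-means half of the AKC model."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each feature vector to its nearest cluster center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its members.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

def comfort_type(feature_vec, centers):
    """Assign one person's feature vector to a body-comfort cluster."""
    return int(((centers - feature_vec) ** 2).sum(axis=1).argmin())
```

In the full AKC model the autoencoder would first compress the concatenated age/gender/expression/posture/heart-rate features into a compact code, and K-means would run on those codes instead of the raw vectors.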
Step 108: and sending corresponding information to family personnel or park management personnel corresponding to the personnel according to the body comfort level type.
The technical effects of the present application are exemplified by a case of identifying 8 persons (4 men and 4 women); the detailed results are given in a table of the original publication, which is not reproduced here.
the application embodiment provides a comprehensive system for face recognition, non-sensory heart rate recognition, running gesture and expression recognition, which is used for executing the comprehensive method for face recognition, non-sensory heart rate recognition, running gesture and expression recognition described in the above embodiment, and as shown in fig. 3, the system comprises:
the monitoring module 501 is used for arranging a plurality of cameras at different positions of the park runway, shooting face images and whole body images of target personnel in real time and obtaining monitoring image data;
a radar module 502, configured to arrange a millimeter-wave radar near each camera, where the millimeter-wave radar is capable of receiving and transmitting detection signals; the millimeter wave radar is used for sending a plurality of detection signals to target personnel and correspondingly receiving a plurality of reflection signals;
a face recognition module 503, configured to analyze the face image, and predict the age and gender of the person through a face recognition algorithm;
the expression recognition module 504 is configured to analyze the facial image and determine an expression type of the person through an expression recognition algorithm;
a running posture recognition module 505, configured to analyze the whole-body image, and obtain a running posture of the person through a skeletal algorithm;
a heart rate detection module 506, configured to perform processing according to the reflected signal to obtain a micro-motion signal of the target person; screening out heartbeat signals according to the inching signals, and drawing a heart rate curve graph;
a comfort classification module 507, configured to extract the age, gender, expression type, running posture and heart rate graph of the person, and input them into a trained convolutional neural network, so as to classify the body comfort type of the target person;
a communication module 508, configured to send corresponding information to family personnel or park managers corresponding to the personnel according to the body comfort level type.
The integrated system for face recognition, non-sensory heart rate recognition, running posture and expression recognition provided by the embodiment of the application and the integrated method for face recognition, non-sensory heart rate recognition, running posture and expression recognition provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the stored application program.
The embodiment of the application further provides an electronic device corresponding to the comprehensive method for face recognition, non-sensory heart rate recognition, running posture and expression recognition provided by the foregoing embodiment, so as to execute that method. The specific form of the device is not limited by the embodiments of the present application.
Referring to fig. 4, a schematic diagram of an electronic device provided in some embodiments of the present application is shown. As shown in fig. 4, the electronic device 2 includes: the system comprises a processor 200, a memory 201, a bus 202 and a communication interface 203, wherein the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202; the memory 201 stores a computer program that can be executed on the processor 200, and the processor 200 executes the computer program to perform a comprehensive method of face recognition, non-sensory heart rate recognition, running posture and expression recognition provided by any one of the foregoing embodiments of the present application.
The memory 201 may include a high-speed Random Access Memory (RAM) and may further include non-volatile memory, such as at least one disk memory. The communication connection between a network element of the system and at least one other network element is realized through at least one communication interface 203 (wired or wireless); the Internet, a wide area network, a local area network, a metropolitan area network, and the like can be used.
Bus 202 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 201 is configured to store a program, and the processor 200 executes the program after receiving an execution instruction, and the comprehensive method for face recognition, non-sensory heart rate recognition, running posture and expression recognition disclosed in any embodiment of the present application may be applied to the processor 200, or implemented by the processor 200.
The processor 200 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 200. The processor 200 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 201, and the processor 200 reads the information in the memory 201 and completes the steps of the method in combination with its hardware.
The electronic device provided by the embodiment of the present application is based on the same inventive concept as the comprehensive method for face recognition, non-sensory heart rate recognition, running posture and expression recognition provided by the embodiment of the present application, and has the same beneficial effects as the method it adopts, runs, or implements.
The embodiment of the present application further provides a computer-readable storage medium corresponding to the comprehensive method for face recognition, non-sensory heart rate recognition, running posture and expression recognition provided in the foregoing embodiment. Referring to fig. 5, the computer-readable storage medium is illustrated as an optical disc 30 on which a computer program (i.e., a program product) is stored; when executed by a processor, the computer program performs the comprehensive method for face recognition, non-sensory heart rate recognition, running posture and expression recognition provided in any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the comprehensive method for recognizing a face, a non-sensory heart rate, a running posture and an expression provided by the embodiment of the present application are based on the same inventive concept, and have the same beneficial effects as methods adopted, operated or implemented by application programs stored in the computer-readable storage medium.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that, in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the creation apparatus of a virtual machine according to embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present invention, and these should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A comprehensive method for face recognition, non-sensory heart rate recognition, running posture and expression recognition is characterized by comprising the following steps:
arranging a plurality of cameras at different positions along a park runway, and capturing face images and whole-body images of a target person in real time to obtain monitoring image data;
arranging a millimeter wave radar near each camera, wherein the millimeter wave radar can transmit and receive detection signals; the millimeter wave radar is used for transmitting a plurality of detection signals to the target person and correspondingly receiving a plurality of reflected signals;
analyzing the face image, and predicting the age and gender of the person through a face recognition algorithm;
analyzing the face image, and judging the expression type of the person through an expression recognition algorithm;
analyzing the whole-body image, and obtaining the running posture of the person through a skeleton algorithm;
processing the reflected signals to obtain a micro-motion signal of the target person; screening out a heartbeat signal from the micro-motion signal, and drawing a heart rate graph;
extracting the age, gender, expression type, running posture and heart rate graph of the person, and inputting them into a trained convolutional neural network, thereby classifying the body comfort type of the target person;
and sending corresponding information to family members or park managers associated with the person according to the body comfort type.
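The eight steps of claim 1 can be sketched as a thin pipeline skeleton. The comfort labels, contact routing, and all names below are illustrative assumptions; the patent does not specify them.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Aggregated per-person result of the recognition steps of claim 1."""
    age: int
    gender: str
    expression: str
    posture: str
    comfort: str  # output of the body-comfort classifier

def notify(assessment, contacts):
    """Route a message to family or park managers by comfort type (last step).

    The two non-normal comfort labels used here are invented for the sketch.
    """
    if assessment.comfort == "distressed":
        return contacts["family"], "Possible distress detected; please check in."
    if assessment.comfort == "fatigued":
        return contacts["park"], "Runner appears fatigued; monitor on site."
    return None, None  # comfortable: no message is sent

contacts = {"family": "family@example.com", "park": "ops@example.com"}
print(notify(Assessment(34, "F", "pained", "forward_lean", "distressed"), contacts))
```

The recognition steps themselves are detailed in claims 2 through 7.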
2. The method of claim 1,
wherein the analyzing the face image and predicting the age and gender of the person through a face recognition algorithm comprises:
step 1, performing data preprocessing on a face image data set and generating corresponding labels;
step 2, performing enhancement operations on the preprocessed face image data set, the enhancement operations including rotation, scaling, random cropping, and luminance and chrominance conversion;
step 3, dividing the enhanced data set into training, verification and test sets;
step 4, constructing a network structure, and importing the training set, the verification set and their corresponding labels for training;
the testing stage specifically comprises:
step 5, performing data preprocessing on the face image data set;
step 6, inputting the preprocessed face image data set into the network structure constructed in step 4, and loading the model parameters corresponding to the network structure for forward propagation;
step 7, taking the output result of the network structure and obtaining a prediction label according to the label generation rule;
and step 8, converting the prediction label according to the meaning of each type of label to obtain the final prediction result.
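The enhancement operations of step 2 (rotation, scaling, random cropping, luminance/chrominance conversion) can be sketched as follows. The crop size and jitter ranges are illustrative assumptions, and a random horizontal flip stands in for small rotations to keep the sketch dependency-free.

```python
import numpy as np

def augment(image, rng):
    """One random enhancement pass over a face image (H, W, 3), float32 in [0, 1]."""
    # Random horizontal flip (a stand-in for small rotations in this sketch).
    if rng.random() < 0.5:
        image = image[:, ::-1, :]
    # Random crop to a fixed size (24x24 is an assumed value, not from the patent).
    h, w, _ = image.shape
    ch, cw = 24, 24
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    image = image[y:y + ch, x:x + cw, :]
    # Luminance jitter: scale brightness by a random factor, clipped back to [0, 1].
    image = np.clip(image * rng.uniform(0.8, 1.2), 0.0, 1.0)
    return image

rng = np.random.default_rng(0)
face = rng.random((32, 32, 3)).astype(np.float32)
out = augment(face, rng)
print(out.shape)  # (24, 24, 3)
```

A real pipeline would also rotate by a few degrees and jitter chrominance, per step 2.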
3. The method of claim 1,
wherein the analyzing the face image and judging the expression type of the person through an expression recognition algorithm comprises:
step one, acquiring a face image of the recognized person and recording the acquisition time, processing the image of the recognized person with a face recognition algorithm, and outputting a face recognition result;
step two, inputting the face recognition result into a pre-trained deep neural network and facial feature point model for processing to obtain an expression recognition result, the expression recognition result comprising an expression type and its predicted value;
step three, taking the expression recognition result and the corresponding acquisition time as expression data, and recording the expression data into an expression database in sequence;
and step four, acquiring a plurality of pieces of data from the expression database and analyzing them to obtain the expression recognition result of the recognized person.
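Steps three and four, recording timestamped expression data and aggregating a window of records, might look like the following sketch; the majority-vote aggregation rule is an assumption, since the claim does not specify how the stored data are analyzed.

```python
from collections import Counter
from datetime import datetime, timezone

# In-memory stand-in for the expression database of step three.
expression_db = []

def record(expr_type, predicted_value):
    """Store one expression recognition result with its acquisition time."""
    expression_db.append({
        "type": expr_type,
        "score": predicted_value,
        "time": datetime.now(timezone.utc),
    })

def aggregate(last_n=5):
    """Step four: reduce the most recent records to one expression result
    (here by majority vote -- an assumed rule)."""
    recent = [e["type"] for e in expression_db[-last_n:]]
    return Counter(recent).most_common(1)[0][0]

for expr in ["neutral", "smile", "smile", "strain", "smile"]:
    record(expr, 0.9)
print(aggregate())  # smile
```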
4. The method of claim 1,
wherein the analyzing the whole-body image and obtaining the running posture of the person through a skeleton algorithm comprises:
identifying human skeleton key points in the whole-body image;
constructing an overall skeleton portrait of the human body according to the human skeleton key points;
and calculating the size and shape of the person's main bones according to the overall skeleton portrait, and matching them with data in a running posture database to obtain the running posture of the person.
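The matching step can be illustrated as a nearest-neighbor lookup against a posture database. The feature layout (limb-length ratios plus a joint angle) and every database value below are invented for the sketch; the patent does not define the database contents.

```python
import numpy as np

# Hypothetical running-posture database: posture label -> reference feature
# vector (e.g., normalized limb-length ratios and a knee angle in degrees).
POSTURE_DB = {
    "upright":      np.array([1.00, 0.52, 0.48, 165.0]),
    "forward_lean": np.array([0.95, 0.55, 0.45, 150.0]),
    "overstriding": np.array([1.05, 0.60, 0.40, 175.0]),
}

def match_posture(features):
    """Return the posture whose reference vector is nearest in L2 distance."""
    best, best_d = None, float("inf")
    for label, ref in POSTURE_DB.items():
        d = np.linalg.norm(features - ref)
        if d < best_d:
            best, best_d = label, d
    return best

print(match_posture(np.array([0.96, 0.54, 0.46, 152.0])))  # forward_lean
```

A production system would normalize each feature dimension before measuring distance, so the angle term does not dominate.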
5. The method of claim 1,
wherein the processing the reflected signals to obtain the micro-motion signal of the target person comprises:
mixing each reflected signal with its corresponding detection signal to obtain a plurality of intermediate frequency signals, which form an original data matrix;
performing a Fourier transform on the original data matrix to obtain a distance matrix;
obtaining the subscript of the target person in the distance matrix;
acquiring an original phase signal of the target person according to the subscript of the target person in the distance matrix;
and acquiring the micro-motion signal of the target person according to the original phase signal.
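The chain of claim 5 (intermediate-frequency matrix, Fourier transform to a distance matrix, target subscript, phase extraction) can be exercised on synthetic data. All sizes, frequencies, and the simulated chest motion below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Rows of the "original data matrix" are IF samples per chirp; an FFT along
# fast time gives the range (distance) matrix, the strongest bin is the
# target's subscript, and the phase of that bin across chirps carries the
# micro-motion.
n_chirps, n_samples = 64, 128
fs = 1.0e6                       # assumed IF sample rate (Hz)
f_beat = 50.0e3                  # assumed beat frequency of a fixed target
t = np.arange(n_samples) / fs
# Simulated chest micro-motion as a small per-chirp phase modulation (rad).
chest = 0.5 * np.sin(2 * np.pi * 1.2 * np.arange(n_chirps) / 20.0)
raw = np.array([np.exp(1j * (2 * np.pi * f_beat * t + phi)) for phi in chest])

range_matrix = np.fft.fft(raw, axis=1)           # distance matrix
target_bin = int(np.argmax(np.abs(range_matrix[0])))  # subscript of the target
# Original phase signal of the target, unwrapped across chirps.
phase = np.unwrap(np.angle(range_matrix[:, target_bin]))

print(target_bin)  # 6
```

Up to a constant offset, `phase` recovers the simulated chest motion, which is the micro-motion signal passed on to claim 6.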
6. The method of claim 1,
wherein the screening out the heartbeat signal from the micro-motion signal and drawing the heart rate graph comprises:
screening the micro-motion signal with a PE-based MEEMD filter to obtain a heartbeat signal;
obtaining a heart rate estimate through a peak detection algorithm;
and drawing a heart rate graph according to the heart rate estimate.
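The peak-detection step can be sketched as follows; the MEEMD filtering is assumed to have already produced a clean heartbeat signal, for which a synthetic sine stands in here.

```python
import numpy as np

def estimate_heart_rate(heartbeat, fs):
    """Estimate BPM from a filtered heartbeat signal by simple peak detection.

    A minimal stand-in for the peak-detection step; a real implementation
    would also enforce a minimum peak distance and amplitude threshold.
    """
    x = np.asarray(heartbeat)
    # Local maxima: strictly above the left neighbor, at least the right one.
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:]))[0] + 1
    intervals = np.diff(peaks) / fs      # seconds between successive beats
    return 60.0 / intervals.mean()

fs = 100.0                               # assumed sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 1.2 * t)     # synthetic 72 BPM heartbeat
print(round(estimate_heart_rate(signal, fs)))  # 72
```

The heart rate graph of the claim would then simply plot such estimates over time.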
7. The method according to any one of claims 1 to 6,
wherein the extracting the age, gender, expression type, running posture and heart rate graph of the person, and inputting them into a trained convolutional neural network, thereby classifying the body comfort type of the target person, comprises:
feeding the feature streams of the age, gender, expression type, running posture and heart rate graphs of a large number of known persons into a convolutional neural network to obtain the body comfort type of each person; taking feature vectors formed by the age, gender, expression type, running posture, heart-rate-graph feature stream and body comfort type of the known persons as training samples to construct a training sample set;
training an AKC model, consisting of an autoencoder model based on a fully-connected neural network and a K-means model, by using the training sample set;
and inputting the real-time feature streams of the age, gender, expression type, running posture and heart rate graph of the target person to be classified into the trained AKC model to obtain the body comfort type of the target person.
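The encode-then-cluster structure of the AKC model might be sketched as follows; a linear SVD projection stands in for the fully-connected autoencoder, a hand-rolled two-cluster K-means for the K-means model, and all data are synthetic.

```python
import numpy as np

# Feature vectors: (age, gender code, expression type, posture code,
# mean heart rate, heart-rate variability) -- an assumed layout.
rng = np.random.default_rng(42)
comfortable = rng.normal([30, 0, 1, 0, 70, 5], 1.0, size=(50, 6))
strained = rng.normal([30, 0, 3, 2, 150, 25], 1.0, size=(50, 6))
X = np.vstack([comfortable, strained])

# "Encoder": project onto the top two principal directions (a trained
# autoencoder would learn a comparable compression).
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
codes = Xc @ vt[:2].T

# Two-cluster K-means on the codes, seeded with one point from each end.
centers = codes[[0, -1]].copy()
for _ in range(20):
    d = np.linalg.norm(codes[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([codes[labels == k].mean(axis=0) for k in range(2)])

# The two well-separated synthetic groups should land in different clusters.
print(labels[:50].std() == 0 and labels[50:].std() == 0)  # True
```

Each cluster index would then be mapped to a body comfort type label.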
8. A comprehensive system for face recognition, non-sensory heart rate recognition, running posture and expression recognition, characterized by comprising:
a monitoring module, configured to arrange a plurality of cameras at different positions along a park runway and capture face images and whole-body images of a target person in real time to obtain monitoring image data;
a radar module, configured to arrange a millimeter wave radar near each camera, wherein the millimeter wave radar can transmit and receive detection signals; the millimeter wave radar is used for transmitting a plurality of detection signals to the target person and correspondingly receiving a plurality of reflected signals;
a face recognition module, configured to analyze the face image and predict the age and gender of the person through a face recognition algorithm;
an expression recognition module, configured to analyze the face image and judge the expression type of the person through an expression recognition algorithm;
a running posture recognition module, configured to analyze the whole-body image and obtain the running posture of the person through a skeleton algorithm;
a heart rate detection module, configured to process the reflected signals to acquire a micro-motion signal of the target person, screen out a heartbeat signal from the micro-motion signal, and draw a heart rate graph;
a comfort classification module, configured to extract the age, gender, expression type, running posture and heart rate graph of the person and input them into a trained convolutional neural network, thereby classifying the body comfort type of the target person;
and a communication module, configured to send corresponding information to family members or park managers associated with the person according to the body comfort type.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor to implement the method according to any of claims 1-7.
CN202111094118.1A 2021-09-17 2021-09-17 Comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture and expression recognition Pending CN113869160A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111094118.1A CN113869160A (en) 2021-09-17 2021-09-17 Comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture and expression recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111094118.1A CN113869160A (en) 2021-09-17 2021-09-17 Comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture and expression recognition

Publications (1)

Publication Number Publication Date
CN113869160A true CN113869160A (en) 2021-12-31

Family

ID=78996526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111094118.1A Pending CN113869160A (en) 2021-09-17 2021-09-17 Comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture and expression recognition

Country Status (1)

Country Link
CN (1) CN113869160A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823591A (en) * 2023-05-05 2023-09-29 国政通科技有限公司 Human shape detection and privacy removal method and device based on convolutional neurons
CN116823591B (en) * 2023-05-05 2024-02-02 国政通科技有限公司 Human shape detection and privacy removal method and device based on convolutional neurons
CN117524413A (en) * 2024-01-05 2024-02-06 亿慧云智能科技(深圳)股份有限公司 Motion protection method, device, equipment and medium based on millimeter wave radar
CN117524413B (en) * 2024-01-05 2024-03-19 亿慧云智能科技(深圳)股份有限公司 Motion protection method, device, equipment and medium based on millimeter wave radar
CN118021271A (en) * 2024-01-31 2024-05-14 南京信息工程大学 Body-building monitoring device and body-building monitoring method based on virtual reality

Similar Documents

Publication Publication Date Title
CN113869160A (en) Comprehensive method and system for face recognition, non-sensory heart rate recognition, running posture and expression recognition
CN108875732B (en) Model training and instance segmentation method, device and system and storage medium
CN110335259B (en) Medical image identification method and device and storage medium
CN108009466B (en) Pedestrian detection method and device
CN111325745B (en) Fracture region analysis method and device, electronic equipment and readable storage medium
CN108875517B (en) Video processing method, device and system and storage medium
CN110827236B (en) Brain tissue layering method, device and computer equipment based on neural network
CN115034315B (en) Service processing method and device based on artificial intelligence, computer equipment and medium
KR102639558B1 (en) Growth analysis prediction apparatus using bone maturity distribution by interest area and method thereof
CN112330624A (en) Medical image processing method and device
US11721023B1 (en) Distinguishing a disease state from a non-disease state in an image
CN113869161A (en) Method and system for realizing intelligent runway based on face recognition and skeleton algorithm
Shankar et al. A novel discriminant feature selection–based mutual information extraction from MR brain images for Alzheimer's stages detection and prediction
Venkatesvara Rao et al. Real-time video object detection and classification using hybrid texture feature extraction
CN112767422B (en) Training method and device of image segmentation model, segmentation method and device, and equipment
CN111444849B (en) Person identification method, device, electronic equipment and computer readable storage medium
CN114360182A (en) Intelligent alarm method, device, equipment and storage medium
CN108875501A (en) Human body attribute recognition approach, device, system and storage medium
CN116052225A (en) Palmprint recognition method, electronic device, storage medium and computer program product
Nogueira et al. Heart sounds classification using images from wavelet transformation
CN113827216A (en) Method and system for sensorless heart rate monitoring based on micro-motion algorithm
CN113920580A (en) Intelligent fitness method and system based on Internet of things, non-sensory recognition and AI recognition
CN111265825A (en) Exercise training equipment and control method thereof
Arya et al. Boosting X-ray scans feature for enriched diagnosis of pediatric pneumonia using deep learning models
Ningrum et al. Early detection of infant cerebral palsy risk based on pose estimation using openpose and advanced algorithms from limited and imbalance dataset

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination