CN116933002A - Action detection method and device, terminal equipment and storage medium - Google Patents

Action detection method and device, terminal equipment and storage medium

Info

Publication number
CN116933002A
CN116933002A (application CN202210339126.6A)
Authority
CN
China
Prior art keywords
target
target signal
signal image
energy
contour vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210339126.6A
Other languages
Chinese (zh)
Inventor
王博
郑植
郭永新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Singapore Suzhou Research Institute
National University of Singapore
Original Assignee
National University of Singapore Suzhou Research Institute
National University of Singapore
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Singapore Suzhou Research Institute and National University of Singapore
Priority to CN202210339126.6A
Publication of CN116933002A


Abstract

The application relates to the technical field of signal processing, and provides a motion detection method and apparatus, a terminal device, and a storage medium. The motion detection method includes the following steps: acquiring radar reflection signals of a target area where a target object is located; constructing a target signal image from the radar reflection signals, the target signal image representing the relationship between a specified radar parameter of the target area and time; performing filtering processing on the target signal image to remove outliers and noise points in the target signal image; and completing the motion detection of the target object according to the filtered target signal image. Because only radar data are collected in this process, the privacy of the target object is effectively protected; in addition, performing filtering processing after the target signal image is acquired removes the outliers and noise points contained in the image, which improves the accuracy of motion detection.

Description

Action detection method and device, terminal equipment and storage medium
Technical Field
The present application relates to the field of signal processing technologies, and in particular to a motion detection method and apparatus, a terminal device, and a storage medium.
Background
For elderly people living alone, a fall may have serious consequences if rescue does not arrive in time. Raising an alarm promptly and accurately after an elderly person falls is therefore of great importance. Traditional fall detection is mainly based on video monitoring; however, video-based methods are affected by factors such as image clarity and light intensity, and cannot protect the privacy of the elderly person being monitored.
Disclosure of Invention
In view of the above, embodiments of the present application provide a motion detection method and apparatus, a terminal device, and a storage medium, which can more accurately detect whether a target object has fallen while protecting the privacy of the target object.
A first aspect of the embodiments of the present application provides a motion detection method, including:
acquiring radar reflection signals of a target area where a target object is located;
constructing a target signal image from the radar reflection signals, the target signal image representing the relationship between a specified radar parameter of the target area and time;
performing filtering processing on the target signal image to remove outliers and noise points in the target signal image;
and completing the motion detection of the target object according to the filtered target signal image.
In the embodiments of the present application, radar reflection signals of the area where a target object is located are first obtained, and a target signal image is constructed from them; filtering processing is then performed on the target signal image to remove the outliers, noise points, and other interference it contains; finally, motion detection of the target object is completed from the filtered target signal image. For example, a machine learning model may be trained as a classifier, and the filtered target signal image input into the classifier to determine whether the target object has fallen. Because only radar data are collected in this process, the privacy of the target object is effectively protected; in addition, performing filtering processing after the target signal image is acquired removes the outliers and noise points contained in the image, which improves the accuracy of motion detection.
In an implementation manner of the embodiments of the present application, the performing filtering processing on the target signal image to remove outliers and noise points in the target signal image may include:
detecting a high-energy region in the target signal image, the high-energy region being the region of the target signal image in which the energy corresponding to the specified radar parameter is greater than a target energy threshold;
and performing filtering processing on the target signal image to remove the outliers and noise points located outside the high-energy region of the target signal image.
Further, the detecting the high energy region in the target signal image may include:
converting the target signal image into a target signal matrix;
calculating an energy density matrix corresponding to the target signal matrix, wherein the elements contained in the energy density matrix are energy values corresponding to the elements contained in the target signal matrix;
calculating the target energy threshold according to the energy density matrix;
determining a region where an element with an energy value larger than the target energy threshold value is located in the energy density matrix as the high-energy region;
the performing filtering processing on the target signal image to remove outliers and noise points in the target signal image that are outside the high-energy region may include:
extracting a first contour vector and a second contour vector of the high-energy region; the first contour vector is the vector formed by the maximum row index corresponding to each column of the high-energy region, and the second contour vector is the vector formed by the minimum row index corresponding to each column of the high-energy region;
for each column of the energy density matrix, setting the elements of the column whose row index is greater than the first row index, and the elements whose row index is less than the second row index, to a specified value, to obtain the filtered energy density matrix; the first row index is the maximum row index of the first contour vector corresponding to the column, and the second row index is the minimum row index of the second contour vector corresponding to the column.
Still further, after extracting the first contour vector and the second contour vector of the high energy region, it may further include:
and respectively executing filtering processing on the first contour vector and the second contour vector to remove outliers contained in the first contour vector and outliers contained in the second contour vector.
Still further, the calculating the target energy threshold according to the energy density matrix may include:
calculating the maximum value and the average value of the elements contained in the energy density matrix;
and calculating the target energy threshold according to a preset weight coefficient, the maximum value and the average value.
In an implementation manner of the embodiment of the present application, the completing the motion detection of the target object according to the target signal image after the filtering processing may include:
inputting the filtered energy density matrix into a trained first classification network for processing, and determining whether the target object has performed a specified action according to the classification result of the first classification network; the first classification network is a neural network trained with first sample data as a training set, the first sample data being pre-collected sample energy density matrix data carrying preset action labels.
In another implementation manner of the embodiments of the present application, the target signal image is a Doppler frequency-time chart, which represents the relationship between the Doppler frequency of each radar signal sampling point in the target area and time, and the completing the motion detection of the target object according to the filtered target signal image may include:
according to the first contour vector, the second contour vector and the target signal matrix, calculating an energy burst curve of the high-energy region;
constructing a distance-time diagram of the target area according to the radar reflection signals, wherein the distance-time diagram is used for representing the relationship between the distance and time of each radar signal sampling point in the target area;
Converting the distance-time map into a distance signal matrix;
acquiring a third contour vector corresponding to the distance signal matrix by the same method used to obtain the first contour vector corresponding to the target signal matrix;
inputting the first contour vector, the second contour vector, the third contour vector, and the energy burst curve into a trained second classification network for processing, and determining whether the target object has performed a specified action according to the classification result of the second classification network; the second classification network is a neural network trained with second sample data as a training set, the second sample data being pre-collected sample contour vector data and sample energy burst curve data carrying preset action labels.
A second aspect of an embodiment of the present application provides an action detecting apparatus, including:
the radar signal acquisition module is used for acquiring radar reflection signals of a target area where a target object is located;
the signal image construction module is used for constructing a target signal image according to the radar reflection signal, and the target signal image is used for representing the relationship between the specified radar parameter of the target area and time;
the signal image filtering module is used for performing filtering processing on the target signal image so as to remove outliers and noise points in the target signal image;
and the motion detection module is used for completing the motion detection of the target object according to the filtered target signal image.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the motion detection method provided in the first aspect of the embodiments of the present application when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the motion detection method provided in the first aspect of the embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the motion detection method according to the first aspect of the embodiments of the present application.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting actions according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a Doppler frequency versus time plot before performing contour filtering according to an embodiment of the present application;
FIG. 3 is a schematic outline of the high energy region extracted for FIG. 2;
FIG. 4 is a schematic outline of the high energy region after filtering processing is performed with respect to FIG. 3;
FIG. 5 is a schematic diagram of a Doppler frequency versus time plot after contour filtering is performed with respect to FIG. 2;
FIG. 6 is a schematic diagram of a distance-time diagram provided by an embodiment of the present application;
FIG. 7 is a schematic outline of the high energy region extracted for FIG. 6;
FIG. 8 is a schematic outline of the high energy region after filtering processing is performed with respect to FIG. 7;
FIG. 9 is a schematic diagram of a classification network according to an embodiment of the present application;
FIG. 10 is an operation schematic diagram of the motion detection method according to the embodiment of the present application in a practical application scenario;
FIG. 11 is a block diagram of a motion detection apparatus according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail. Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
The embodiments of the present application provide a radar-signal-based motion detection method which, by analyzing the received radar reflection signals, performing filtering processing on the resulting radar signal image, and finally making a judgment with a machine learning model or similar means, can accurately detect whether a monitored target object has performed a specified action such as a fall, while well protecting the privacy of the target object. For more specific technical implementation details of the embodiments of the present application, please refer to the method embodiments described below.
It should be understood that the execution subject of the method embodiments of the present application is various types of terminal devices or servers, for example, mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), large-screen televisions, and so on; the specific types of the terminal devices and servers are not limited in the embodiments of the present application.
Referring to fig. 1, an action detection method provided by an embodiment of the present application includes:
101. Acquiring radar reflection signals of a target area where a target object is located;
the target object is an object to be monitored, which may be a person (e.g., an elderly person) or a subject of an animal or the like capable of performing an action. The target area is an area where the target object is located, and by installing radar equipment in the area, a corresponding radar reflection signal can be acquired. For example, if the target object is an elderly person, a room in which the target object is located may be used as a target area, and by installing a radar at a ceiling and the like of the room, a radar sensing area is generated in the room, and when the elderly person falls in the radar sensing area, the radar device can collect corresponding radar reflection signals.
102. Constructing a target signal image according to the radar reflection signal, wherein the target signal image is used for representing the relationship between specified radar parameters and time of the target area;
After the radar device collects the radar reflection signals, it performs certain preprocessing operations on them to obtain radar baseband data. The terminal device (the execution subject of this method embodiment) can acquire the radar baseband data by interfacing with the radar device and construct a target signal image based on it. The target signal image represents the relationship between the specified radar parameter of the target area and time: for example, if the specified radar parameter is the Doppler frequency, the target signal image is a Doppler frequency-time chart representing the relationship between the Doppler frequency of each radar signal sampling point in the target area and time; if the specified radar parameter is distance, the target signal image is a distance-time diagram representing the relationship between the distance of each radar signal sampling point in the target area and time. In this process, the processing that the radar device performs on the radar reflection signals to obtain radar baseband data, and the specific methods of constructing a Doppler frequency-time diagram or a range-time diagram from radar baseband data, belong to the prior art and are not described here.
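As noted, constructing the Doppler frequency-time image is left to the prior art; purely as a rough, non-authoritative illustration of the idea, the sketch below computes a short-time FFT over slow-time radar baseband samples with NumPy. The window length, hop size, and toy baseband signal are illustrative assumptions, not values from this application.

```python
import numpy as np

def doppler_time_image(baseband, win_len=64, hop=16):
    """Sketch of a Doppler frequency-time map: a short-time FFT over
    slow-time radar baseband samples. win_len and hop are illustrative
    choices, not values specified by the application."""
    windows = []
    for start in range(0, len(baseband) - win_len + 1, hop):
        seg = baseband[start:start + win_len] * np.hanning(win_len)
        # fftshift puts the zero-frequency line in the middle rows
        spectrum = np.fft.fftshift(np.fft.fft(seg))
        windows.append(spectrum)
    # rows: Doppler frequency bins (n); columns: discrete time points (m)
    return np.array(windows).T

# a toy complex baseband signal containing a single Doppler tone
t = np.arange(1024)
baseband = np.exp(1j * 2 * np.pi * 0.1 * t)
S = doppler_time_image(baseband)
print(S.shape)  # (64, 61)
```

The resulting matrix S plays the role of the Doppler frequency-time matrix used in the filtering steps below.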
103. Performing filtering processing on the target signal image to remove outliers and noise points in the target signal image;
the constructed target signal image generally contains a small amount of outliers and noise points, which form interference when motion detection is performed subsequently, so that the accuracy of the motion detection result is reduced, and therefore, filtering processing needs to be performed on the target signal image to remove the outliers and the noise points.
In an implementation manner of the embodiment of the present application, the performing filtering processing on the target signal image to remove outliers and noise points in the target signal image may include:
(1) Detecting a high-energy region in the target signal image, the high-energy region being the region of the target signal image in which the energy corresponding to the specified radar parameter is greater than a target energy threshold;
(2) Performing filtering processing on the target signal image to remove the outliers and noise points located outside the high-energy region of the target signal image.
When performing the filtering operation on the target signal image, it is first necessary to detect the high-energy region in the target signal image, that is, the region in which the energy corresponding to the specified radar parameter is greater than the target energy threshold. Outside the high-energy region there are typically some outliers and noise points, which can be removed by filtering. For example, if the target signal image is a Doppler frequency-time chart, then because the change in the target object's motion velocity is continuous, the high-energy frequency components are continuous in the Doppler frequency-time chart and concentrated on both sides of the zero-frequency line; this region can be defined as the high-energy region of the Doppler frequency-time chart. When filtering a Doppler frequency-time plot, the main purpose is to preserve the high-energy region and remove the outliers and noise points outside it.
Further, the detecting the high energy region in the target signal image may include:
(1) Converting the target signal image into a target signal matrix;
(2) Calculating an energy density matrix corresponding to the target signal matrix, wherein the elements contained in the energy density matrix are energy values corresponding to the elements contained in the target signal matrix;
(3) Calculating the target energy threshold according to the energy density matrix;
(4) Determining the region where the elements of the energy density matrix whose energy values are greater than the target energy threshold are located as the high-energy region.
For the convenience of operation, the target signal image can be converted into a matrix form to obtain a target signal matrix. Then, an energy density matrix corresponding to the target signal matrix is calculated, and the elements included in the energy density matrix are energy values corresponding to the elements included in the target signal matrix. Then, according to the element values contained in the energy density matrix, a target energy threshold value can be calculated, and finally, the area where the element with the energy value larger than the target energy threshold value in the energy density matrix is located is determined to be a high-energy area.
For example, assuming that the target signal image is a Doppler frequency-time plot, it can be converted into matrix form to obtain a Doppler frequency-time matrix S ∈ C^(n×m). The matrix S has n rows and m columns, where m represents the number of discrete time points, n represents the number of Doppler frequency points, and the elements of S represent the Doppler frequencies of the corresponding radar signal sampling points at each discrete time point. First, the energy density matrix P ∈ R^(n×m) corresponding to the Doppler frequency-time matrix S is obtained; the elements of P correspond one-to-one with the elements of S and represent the energy value of each Doppler frequency in S (for the method of computing the energy density matrix from a Doppler frequency-time matrix, reference may be made to the prior art). Then, the target energy threshold can be calculated from the values of the elements of P: specifically, the maximum value and the average value of the elements contained in the energy density matrix are calculated, and the target energy threshold is then obtained from a preset weight coefficient together with the maximum value and the average value. For example, the target energy threshold may be calculated using the following formula:
p_th = a · max{P} + (1 − a) · avg{P};
where p_th represents the target energy threshold, a ∈ (0, 1) is the preset weight coefficient, max{P} represents the maximum value of the elements contained in the energy density matrix P, and avg{P} represents the average value of the elements contained in P.
After the target energy threshold is calculated, the high-energy region can be determined: specifically, the region where the elements of the energy density matrix P whose energy values are greater than the target energy threshold are located is determined as the high-energy region. That is, the high-energy region can be expressed as R_h = {(i, j) | p(i, j) > p_th, p(i, j) ∈ P}. Since the elements of the Doppler frequency-time matrix S correspond one-to-one with the elements of the energy density matrix P, the high-energy region calculated from P can be regarded as the high-energy region of S.
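The steps above (energy density matrix, threshold from a preset weight coefficient together with the maximum and average values, thresholded region) can be sketched in NumPy as follows. The weight a = 0.5, the weighted-combination form of the threshold, and the toy input matrix are illustrative assumptions.

```python
import numpy as np

def high_energy_region(S, a=0.5):
    """Detect the high-energy region of a Doppler frequency-time matrix
    S (n x m, complex). P holds the energy of each element; the threshold
    combines the maximum and the mean of P with a preset weight a in
    (0, 1) - 0.5 is an illustrative choice, not a value from the patent."""
    P = np.abs(S) ** 2                       # energy density matrix
    p_th = a * P.max() + (1 - a) * P.mean()  # target energy threshold
    R_h = P > p_th                           # boolean mask of the region
    return P, p_th, R_h

# toy matrix: one strong frequency band against a weak background
S = np.full((8, 10), 0.1 + 0j)
S[3:5, :] = 10.0
P, p_th, R_h = high_energy_region(S)
print(R_h[3, 0], R_h[0, 0])  # True False
```

Only the strong band exceeds the threshold, so the mask R_h marks exactly the rows carrying the high-energy component.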
After the high-energy region in the target signal image is determined, filtering processing next needs to be performed on the target signal image to remove the outliers and noise points outside the high-energy region. To this end, the embodiments of the present application propose a filtering method based on the contour of the high-energy region, which may be referred to as contour filtering; the specific implementation process includes:
(1) Extracting a first contour vector and a second contour vector of the high-energy region; the first contour vector is the vector formed by the maximum row index corresponding to each column of the high-energy region, and the second contour vector is the vector formed by the minimum row index corresponding to each column of the high-energy region;
(2) For each column of the energy density matrix, setting the elements of the column whose row index is greater than the first row index, and the elements whose row index is less than the second row index, to a specified value, to obtain the filtered energy density matrix; the first row index is the maximum row index of the first contour vector corresponding to the column, and the second row index is the minimum row index of the second contour vector corresponding to the column.
After the high-energy region is detected by the method described above, the first contour vector and the second contour vector of the high-energy region need to be extracted. The first contour vector represents the upper boundary of the high-energy region and is composed of the maximum row index corresponding to each column of the region; the second contour vector represents the lower boundary and is composed of the minimum row index corresponding to each column of the region.
In the above example, after obtaining the high-energy region R_h = {(i, j) | p(i, j) > p_th, p(i, j) ∈ P}, a loop may be performed (for j = 1; j ≤ m; j++):
V_max(j) = max{i | (i, j) ∈ R_h};
V_min(j) = min{i | (i, j) ∈ R_h};
Here V_max(j) is the maximum row index corresponding to the j-th column of the high-energy region, and V_min(j) is the minimum row index corresponding to the j-th column, so the first contour vector may be expressed as V_max = {V_max(1), V_max(2), …, V_max(m)} and the second contour vector as V_min = {V_min(1), V_min(2), …, V_min(m)}.
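Under the same notation (a boolean mask R_h over an n×m matrix), the contour-extraction loop can be sketched as below. The sentinel value -1 for columns containing no high-energy element is an assumption of this sketch; the application does not specify that case.

```python
import numpy as np

def contour_vectors(R_h):
    """Extract the first and second contour vectors of the high-energy
    region: for each column j, V_max[j] is the largest row index inside
    the region and V_min[j] the smallest. Empty columns get -1 (a
    sentinel chosen for this sketch)."""
    n, m = R_h.shape
    V_max = np.full(m, -1)
    V_min = np.full(m, -1)
    for j in range(m):
        rows = np.nonzero(R_h[:, j])[0]  # row indices inside the region
        if rows.size:
            V_max[j] = rows.max()        # upper boundary of the region
            V_min[j] = rows.min()        # lower boundary of the region
    return V_max, V_min

# toy mask: high-energy elements in columns 1 and 2 only
R_h = np.zeros((6, 4), dtype=bool)
R_h[2:5, 1] = True
R_h[1:3, 2] = True
V_max, V_min = contour_vectors(R_h)
print(V_max.tolist(), V_min.tolist())  # [-1, 4, 2, -1] [-1, 2, 1, -1]
```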
The first contour vector and the second contour vector obtained in this way may contain some outliers. To improve the accuracy of contour extraction, filtering processing may be performed on the first contour vector and the second contour vector to remove the outliers contained in each. For example, the outliers may be removed with a Hampel filter, i.e., let PC_1 = Hampel(V_max) and PC_2 = Hampel(V_min), finally obtaining the filtered first contour vector PC_1 and second contour vector PC_2.
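A Hampel filter is a sliding-window median filter that replaces only the samples deviating from the local median by more than a few scaled median absolute deviations. A minimal sketch follows; the window half-width and the 3-sigma threshold are the filter's common defaults, which this application does not specify.

```python
import numpy as np

def hampel(v, half_window=3, n_sigma=3.0):
    """Minimal Hampel filter sketch: within each sliding window, replace
    a sample with the window median if it deviates from that median by
    more than n_sigma scaled MADs."""
    v = np.asarray(v, dtype=float)
    out = v.copy()
    k = 1.4826  # scale factor relating the MAD to a standard deviation
    for i in range(len(v)):
        lo, hi = max(0, i - half_window), min(len(v), i + half_window + 1)
        window = v[lo:hi]
        med = np.median(window)
        mad = np.median(np.abs(window - med))
        if np.abs(v[i] - med) > n_sigma * k * mad:
            out[i] = med  # outlier: replace with the local median
    return out

V_max = np.array([5, 5, 6, 40, 6, 5, 5], dtype=float)  # one outlier at 40
PC1 = hampel(V_max)
print(PC1.tolist())
```

The single spike is replaced by the local median while the smooth boundary samples pass through unchanged.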
After extracting the first contour vector and the second contour vector, for each column of the energy density matrix, the elements of the column whose row index is greater than the first row index, and those whose row index is less than the second row index, are set to a specified value (for example, 0), thereby obtaining the filtered energy density matrix. The first row index is the maximum row index of the first contour vector corresponding to the column, and the second row index is the minimum row index of the second contour vector corresponding to the column. In this way, the elements of the energy density matrix outside the high-energy region are set to an invalid value, achieving the effect of removing the outliers and noise points outside the high-energy region. In the above example, a loop may be performed (for j = 1; j ≤ m; j++), applying the following contour filtering to the energy density matrix P:
P(i, j) = 0, if i > PC_1(j) or i < PC_2(j);
where PC_1(j) represents the maximum row index of the first contour vector corresponding to the j-th column, and PC_2(j) represents the minimum row index of the second contour vector corresponding to the j-th column. Performing the same operation on each column of the energy density matrix P completes its contour filtering, yielding the filtered energy density matrix P_filtered.
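The column-wise zeroing loop above can be sketched as follows; the toy boundary vectors PC1 and PC2 here are made-up values for illustration.

```python
import numpy as np

def contour_filter(P, PC1, PC2):
    """Zero out every element lying outside the high-energy contour:
    for column j, rows above PC1[j] or below PC2[j] are set to the
    specified value 0, producing the filtered energy density matrix."""
    P_filtered = P.copy()
    n, m = P.shape
    rows = np.arange(n)
    for j in range(m):
        mask = (rows > PC1[j]) | (rows < PC2[j])  # outside the contour
        P_filtered[mask, j] = 0.0
    return P_filtered

P = np.ones((5, 3))
PC1 = np.array([3, 2, 4])  # upper boundary (max row index) per column
PC2 = np.array([1, 1, 0])  # lower boundary (min row index) per column
P_f = contour_filter(P, PC1, PC2)
print(P_f[:, 0].tolist())  # [0.0, 1.0, 1.0, 1.0, 0.0]
```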
Illustratively, fig. 2 shows a Doppler frequency-time diagram before contour filtering is performed; the white areas are the high-energy regions, which can be seen to be concentrated essentially on both sides of the zero-frequency line. Fig. 3 shows the contour of the high-energy region extracted from fig. 2, in which the upper curve represents the first contour vector (the upper boundary of the high-energy region) and the lower curve represents the second contour vector (the lower boundary). To improve the accuracy of contour extraction, the two contour vectors in fig. 3 may be subjected to Hampel filtering; the result, the contour of the high-energy region after Hampel filtering, is shown in fig. 4. Finally, contour filtering is performed on the Doppler frequency-time diagram of fig. 2 according to the high-energy-region contour of fig. 4, i.e., the elements outside the contour are set to 0, obtaining the contour-filtered Doppler frequency-time diagram shown in fig. 5. Evidently, the contour filtering process filters out the outliers and noise points present in fig. 2.
In subsequent operations, on the one hand, the target signal matrix (for example, the matrix S in the above example) may be used directly as input data and fed into a machine learning model for recognition to complete motion detection. On the other hand, the filtered energy density matrix P_filtered may also be used as the input data fed into the machine learning model for recognition.
104. Completing the motion detection of the target object according to the filtered target signal image.
After the filtered target signal image is obtained, motion detection of the target object can be completed based on it; for example, it can be detected whether the target object has performed a specified action such as a fall. Specifically, a machine learning model may be trained as a classifier; classifier types include, but are not limited to, support vector machines and convolutional neural networks. The trained classifier can be used to judge whether the target object has performed a specified action, such as falling, running, or jumping. In addition, when a fall or another specified action of the target object is detected, an alarm signal can be output in a specified manner so that the target object can be rescued in time.
In an implementation manner of the embodiment of the present application, the completing the motion detection of the target object according to the target signal image after the filtering processing may include:
inputting the energy density matrix subjected to filtering treatment into a trained first classification network for processing, and determining whether the target object has a specified action or not according to the classification result of the first classification network; the first classification network is a neural network obtained by training by taking first sample data as a training set, and the first sample data is pre-acquired sample energy density matrix data with a preset action label.
The first classification network can be trained in advance as a classifier; it is a neural network trained with pre-collected sample energy density matrix data carrying preset action labels as the training set. For example, suppose the preset action labels comprise 3 kinds: fast fall, slow fall and non-fall. A large amount of sample energy density matrix data is then collected from radar signals acquired while personnel perform fast-fall, slow-fall and non-fall actions, and used as sample data. Then, one part of the sample data (for example, 80%) is used as a training set and the other part (for example, 20%) as a test set, and a convolutional neural network, i.e. the first classification network, is obtained through iterative training. After the filtered energy density matrix is input into the first classification network for processing, the prediction probability of the target object performing the action corresponding to each preset action label can be output as the classification result, and whether the target object has performed a specified action can be determined from this classification result. For example, if the prediction probability of the action corresponding to the "fast fall" label is highest, it can be determined that the target object has performed a "fast fall" action.
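The train/test workflow above can be sketched as follows. This is only an illustration of the 80%/20% split and the argmax-over-labels decision rule; a simple nearest-centroid classifier on synthetic data stands in for the convolutional first classification network described in the text, and all names and data are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
labels = ["fast_fall", "slow_fall", "non_fall"]

# Synthetic stand-in for pre-collected sample energy density matrices:
# each class gets a distinct mean energy level so a trivial classifier works.
X = np.concatenate([rng.normal(loc=m, scale=0.1, size=(30, 8, 8))
                    for m in (0.0, 1.0, 2.0)])
y = np.repeat(np.arange(3), 30)

# 80% / 20% train/test split, as in the example above.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train, test = idx[:split], idx[split:]

# Nearest-centroid classifier on flattened matrices -- a minimal
# stand-in for the convolutional first classification network.
centroids = np.stack([X[train][y[train] == c].reshape(-1, 64).mean(axis=0)
                      for c in range(3)])

def predict(x):
    d = np.linalg.norm(centroids - x.reshape(-1), axis=1)
    return int(np.argmin(d))   # index of the most probable action label

accuracy = np.mean([predict(x) == t for x, t in zip(X[test], y[test])])
```

In practice the classifier would be a trained CNN outputting per-label probabilities; the decision rule (pick the label with the highest predicted probability) is the same.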
In another implementation manner of the embodiment of the present application, the target signal image is a doppler frequency-time chart, where the doppler frequency-time chart is used to represent a relationship between a doppler frequency and a time of each radar signal sampling point in the target area, and the completing the motion detection of the target object according to the target signal image after the filtering processing may include:
(1) According to the first contour vector, the second contour vector and the target signal matrix, calculating an energy burst curve of the high-energy region;
(2) Constructing a distance-time diagram of the target area according to the radar reflection signals, wherein the distance-time diagram is used for representing the relationship between the distance and time of each radar signal sampling point in the target area;
(3) Converting the distance-time map into a distance signal matrix;
(4) Acquiring a third contour vector corresponding to the distance signal matrix by adopting the same method as that used to acquire the first contour vector corresponding to the target signal matrix;
(5) Inputting the first contour vector, the second contour vector, the third contour vector and the energy burst curve into a trained second classification network for processing, and determining whether the target object generates a specified action according to a classification result of the second classification network; the second classification network is a neural network trained by taking second sample data as a training set, and the second sample data is pre-collected sample contour vector data with a preset action label and sample energy burst curve data.
The amount of data contained in the target signal image is huge. When the computing resources of the system are limited, or when the classifier is run on a cloud server for edge computing, the amount of data to be processed needs to be compressed; in this case the contour vectors can be used as the classifier's input for training, and specified actions such as falling can still be identified. Specifically, Fourier transform processing may be performed on the radar baseband data to construct a distance-time diagram of the target area where the target object is located, the distance-time diagram being used to represent the relationship between distance and time for each radar signal sampling point in the target area. Then, the distance-time diagram is converted into matrix form to obtain a distance signal matrix, and the contour vector corresponding to the distance signal matrix is acquired by the same method as that used to acquire the first contour vector corresponding to the target signal matrix in the previous step; this contour vector is denoted the third contour vector. In general terms, the energy density matrix corresponding to the distance signal matrix is first acquired, the corresponding high-energy region is then determined, and finally the upper boundary of the high-energy region is extracted as the third contour vector.
The method of extracting the contour vector from the distance-time diagram is essentially the same as the method of extracting the contour vectors from the Doppler frequency-time diagram, except that: the contour vectors extracted from the Doppler frequency-time diagram include a contour vector corresponding to the upper boundary of the high-energy region (the first contour vector) and a contour vector corresponding to the lower boundary of the high-energy region (the second contour vector), whereas the contour vector extracted from the distance-time diagram includes only a contour vector corresponding to the upper boundary of the high-energy region (the third contour vector).
Illustratively, fig. 6 is a schematic diagram of a distance-time diagram, in which the white areas are high-energy regions. Fig. 7 shows the outline of the high-energy region extracted from fig. 6; the single curve in fig. 7 represents the third contour vector (the upper boundary of the high-energy region). Similarly, in order to improve the accuracy of contour vector extraction, the contour vector in fig. 7 may be subjected to a filtering process such as Hampel filtering; the result is shown in fig. 8, i.e. the outline diagram of the high-energy region after the filtering process.
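The Hampel filtering applied to the contour vectors can be sketched as below. This is a generic textbook Hampel filter, not code from the patent: samples that deviate from the local median by more than a few scaled median absolute deviations (MADs) are replaced by that median, which removes isolated spikes (outliers) from a contour while leaving smooth sections untouched. The window size and sigma multiplier are illustrative defaults.

```python
import numpy as np

def hampel_filter(x, half_window=3, n_sigma=3.0):
    """Replace samples deviating from the local median by more than
    n_sigma scaled MADs with that median (removes contour outliers)."""
    y = x.astype(float).copy()
    for i in range(len(y)):
        lo, hi = max(0, i - half_window), min(len(y), i + half_window + 1)
        window = y[lo:hi]
        med = np.median(window)
        mad = 1.4826 * np.median(np.abs(window - med))  # robust sigma estimate
        if mad > 0 and abs(y[i] - med) > n_sigma * mad:
            y[i] = med
    return y

# A smooth contour with one spurious spike at index 3:
contour = np.array([10, 11, 12, 50, 13, 14, 15], dtype=float)
smoothed = hampel_filter(contour)
```

The spike at index 3 is replaced by the window median while all other samples pass through unchanged.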
On the other hand, from the first contour vector, the second contour vector, and the Doppler frequency-time matrix (i.e., the target signal matrix) obtained by converting the Doppler frequency-time map, an energy burst curve (Power Burst Curve, PBC) of the high-energy region can be calculated. For example, in the example described above, the energy burst curve may be calculated using the following formula:

rPBC(j) = Σ_{i = PC2(j)}^{PC1(j)} |S(i, j)|²

wherein rPBC represents the energy burst curve, PC1 represents the first contour vector, PC2 represents the second contour vector, and S(i, j) represents the elements of the Doppler frequency-time matrix S.
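The energy burst curve computation — summing the squared magnitudes of S between the lower and upper contour rows of each column — can be sketched directly in numpy. The function name and toy data are assumptions of the example; the summation limits follow the contour-vector definitions above (PC2 as the lower boundary, PC1 as the upper boundary).

```python
import numpy as np

def power_burst_curve(S, pc1, pc2):
    """Sum the signal energy |S(i, j)|^2 between the lower (pc2) and
    upper (pc1) contour rows of each column j of the Doppler
    frequency-time matrix S."""
    n_cols = S.shape[1]
    pbc = np.zeros(n_cols)
    for j in range(n_cols):
        rows = slice(pc2[j], pc1[j] + 1)  # inclusive band [pc2(j), pc1(j)]
        pbc[j] = np.sum(np.abs(S[rows, j]) ** 2)
    return pbc

# Toy example: 3 Doppler bins x 2 time columns
S = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
pc1 = np.array([1, 2])  # upper boundary per column
pc2 = np.array([0, 1])  # lower boundary per column
pbc = power_burst_curve(S, pc1, pc2)
```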
In training the second classification network, a similar approach to that of the first classification network may be used; the only difference is the type of sample data used for training. Specifically, the sample data adopted by the second classification network are sample contour vector data with preset action labels and sample energy burst curve data, where the sample contour vector data comprise two types: contour vector data corresponding to the Doppler frequency-time diagram and contour vector data corresponding to the distance-time diagram. After the first contour vector, the second contour vector, the third contour vector and the energy burst curve are input into the second classification network for processing, the prediction probability of the target object performing the action corresponding to each preset action label can be output as the classification result, and whether the target object has performed a specified action can be determined from this classification result.
Illustratively, fig. 9 is a schematic diagram of the network architecture of the second classification network. The input data comprises three parts: the two contour vectors corresponding to the Doppler frequency-time diagram, the contour vector corresponding to the distance-time diagram, and the energy burst curve of the high-energy region. After the three parts of input data are processed by convolutional layers, corresponding feature maps are obtained; after the feature maps corresponding to the three parts of data are fused, they are input to the fully connected layer for processing, thereby obtaining the final action prediction result.
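A single forward pass through this kind of multi-branch architecture can be sketched in plain numpy. This is only a shape-level illustration of the fig. 9 design (per-input convolutional branch, feature fusion by concatenation, fully connected layer, softmax); the layer sizes, random weights and input length are all assumptions, and a real implementation would use a deep-learning framework with trained weights.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, kernel):
    """'Valid' 1-D convolution followed by ReLU -- a stand-in for one
    convolutional layer of a branch."""
    n = len(x) - len(kernel) + 1
    out = np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])
    return np.maximum(out, 0.0)

T = 64  # assumed number of time steps in each input vector
pc1 = rng.normal(size=T)  # first contour vector (Doppler map, upper boundary)
pc2 = rng.normal(size=T)  # second contour vector (Doppler map, lower boundary)
pc3 = rng.normal(size=T)  # third contour vector (range map, upper boundary)
pbc = rng.normal(size=T)  # energy burst curve of the high-energy region

# One small convolutional branch per input, then feature fusion.
kernels = [rng.normal(size=5) for _ in range(4)]
features = [conv1d(x, k) for x, k in zip((pc1, pc2, pc3, pbc), kernels)]
fused = np.concatenate(features)

# Fully connected layer producing 3 class scores, softmax for probabilities.
W = rng.normal(size=(3, fused.size)) * 0.01
scores = W @ fused
probs = np.exp(scores - scores.max())
probs /= probs.sum()
predicted_class = int(np.argmax(probs))  # e.g. fast fall / slow fall / non-fall
```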
In the embodiment of the application, firstly, a radar reflection signal of an area where a target object is located is obtained, and a target signal image is constructed through the radar reflection signal; then, filtering processing is carried out on the target signal image, so that the outliers, noise points and other interferences contained in the target signal image are removed; finally, according to the filtered target signal image, motion detection of the target object is completed, for example, a machine learning model may be trained as a classifier, and the filtered target signal image is input into the classifier to determine whether the target object falls or not. The radar data are collected in the process, so that the privacy safety of a target object can be effectively protected; in addition, after the target signal image is acquired, filtering processing is performed to remove outliers and noise points contained in the image, so that the accuracy of motion detection can be improved.
In order to facilitate understanding of the motion detection method provided by the embodiment of the present application, the following lists an actual application scenario.
Fig. 10 is a schematic operation diagram of the motion detection method according to the embodiment of the present application in a practical application scenario. In fig. 10, the target object to be monitored is an elderly person, and the target area in which the elderly person is located is a room. A frequency-modulated continuous-wave millimeter-wave radar with an operating frequency of 77 GHz may be installed as a sensor on the ceiling of the room; the bandwidth of the radar may be set to 1.69 GHz, the duration of each radar pulse to 160 microseconds, and the length of each radar data frame to 6.2 seconds.
When acquiring sample data, volunteers can simulate the elderly performing each designed action within the coverage of the radar signal, and the radar acquires the corresponding sample data. Illustratively, the classification of the designed action labels is shown in Table 1 below:
TABLE 1
In Table 1, a total of 29 representative actions in 3 general classes (fast fall, slow fall and non-fall) were designed. Traditional laboratory research usually asks volunteers to simulate fast-fall actions while neglecting slow-fall actions; introducing sample data of slow-fall actions can effectively improve the robustness of the trained classification network.
When the elderly person performs an action in the room, the radar acquires the corresponding radar reflection signal, and radar baseband data can be obtained through preprocessing. The radar baseband data may then be time-frequency analyzed by a signal processing algorithm; for example, a Doppler frequency-time diagram may be obtained through short-time Fourier transform processing.
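The short-time Fourier transform step can be sketched with numpy alone: slide a window along the baseband samples, take a windowed FFT per frame, and fftshift so that 0 Hz sits in the middle row of the resulting Doppler frequency-time map. The frame length, hop size and synthetic tone below are assumptions of the example, not parameters from the patent.

```python
import numpy as np

def doppler_time_map(x, frame_len=64, hop=16):
    """Short-time Fourier transform of complex radar baseband data x:
    one windowed FFT per frame; fftshift puts 0 Hz in the middle row."""
    window = np.hanning(frame_len)
    starts = range(0, len(x) - frame_len + 1, hop)
    columns = [np.fft.fftshift(np.fft.fft(x[s:s + frame_len] * window))
               for s in starts]
    return np.abs(np.array(columns)).T  # rows: Doppler bins, cols: time frames

# Synthetic complex baseband tone at bin 8 of a 64-point FFT:
n = np.arange(1024)
x = np.exp(2j * np.pi * 8 / 64 * n)
dt_map = doppler_time_map(x)
```

For a constant-frequency tone, every column of the map peaks at the same shifted Doppler bin; a moving target would instead trace a time-varying frequency ridge, which is exactly what the contour vectors later capture.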
After obtaining the doppler frequency-time plot, filtering may be performed on the doppler frequency-time plot using the contour filtering method described above to remove outliers and noise points present in the doppler frequency-time plot, thereby obtaining a filtered doppler frequency-time plot and two contour vectors (i.e., the first contour vector and the second contour vector described above) corresponding to the doppler frequency-time plot. In addition, based on the doppler frequency-time matrix (obtained by doppler frequency-time map conversion) and the two contour vectors corresponding to the doppler frequency-time map, an energy burst curve of the high energy region can be calculated.
Similarly, fourier transform processing may be performed on the radar baseband data to obtain a distance-time map of the radar signal, and by performing contour filtering on the distance-time map, a contour vector corresponding to the distance-time map (i.e., the third contour vector described above) may be obtained.
Based on the different types of sample data collected, two different classification networks, namely the first classification network and the second classification network described above, may be trained.
In one aspect, the Doppler frequency-time matrix may be used as the input and processed by the first classification network to determine whether the elderly person in the room has fallen. Analysis of the experimental data shows that when a Doppler frequency-time matrix without contour filtering is input, the first classification network identifies falling actions with an accuracy of 93.65%, whereas when a contour-filtered Doppler frequency-time matrix is input, the accuracy rises to 95.88%. Therefore, processing the Doppler frequency-time matrix with contour filtering improves the classification accuracy of the first classification network.
On the other hand, in order to reduce the consumption of computing resources, the contour vectors may also be selected as the input to the classification network. Specifically, the two contour vectors corresponding to the Doppler frequency-time diagram, the contour vector corresponding to the distance-time diagram, and the energy burst curve of the high-energy region may be used as inputs and processed by the second classification network, thereby determining whether the elderly person in the room has fallen. Analysis of the experimental data shows that the accuracy of the second classification network in identifying falling actions can reach 95.45%.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present application.
The above mainly describes a motion detection method, and a motion detection apparatus will be described below.
Referring to fig. 11, an embodiment of an action detecting apparatus according to an embodiment of the present application includes:
a radar signal acquisition module 201, configured to acquire a radar reflection signal of a target area where a target object is located;
a signal image construction module 202, configured to construct a target signal image according to the radar reflection signal, where the target signal image is used to represent a relationship between a specified radar parameter of the target area and time;
a signal image filtering module 203, configured to perform filtering processing on the target signal image, so as to remove outliers and noise points in the target signal image;
and the motion detection module 204 is configured to complete motion detection of the target object according to the filtered target signal image.
In one implementation manner of the embodiment of the present application, the signal image filtering module may include:
A high-energy region detection unit, configured to detect a high-energy region in the target signal image, where the high-energy region is a region in the target signal image where energy corresponding to the specified radar parameter is greater than a target energy threshold;
and the signal image filtering unit is used for performing filtering processing on the target signal image so as to remove outliers and noise points which are positioned outside the high-energy area in the target signal image.
Further, the high energy region detection unit may include:
a matrix conversion subunit, configured to convert the target signal image into a target signal matrix;
an energy density matrix calculating subunit, configured to calculate an energy density matrix corresponding to the target signal matrix, where an element included in the energy density matrix is an energy value corresponding to an element included in the target signal matrix;
an energy threshold calculation unit, configured to calculate the target energy threshold according to the energy density matrix;
a high-energy region determining subunit, configured to determine, as the high-energy region, a region where an element with an energy value greater than the target energy threshold is located in the energy density matrix;
The signal image filtering unit may include:
a contour vector extraction subunit, configured to extract a first contour vector and a second contour vector of the high-energy region; the first contour vector is a vector formed by the maximum row label corresponding to each column in the high-energy region, and the second contour vector is a vector formed by the minimum row label corresponding to each column in the high-energy region;
the contour filtering subunit is used for setting, for each column in the energy density matrix, the elements whose row labels are larger than the first row label of that column and the elements whose row labels are smaller than the second row label of that column to specified values, to obtain the filtered energy density matrix; the first row label is the maximum row label corresponding to that column in the first contour vector, and the second row label is the minimum row label corresponding to that column in the second contour vector.
Still further, the signal image filtering unit may further include:
and the contour vector filtering subunit is used for respectively executing filtering processing on the first contour vector and the second contour vector so as to remove outliers contained in the first contour vector and outliers contained in the second contour vector.
Still further, the energy threshold calculation unit may include:
an element numerical value calculating subunit, configured to calculate a maximum value and an average value of elements included in the energy density matrix;
and the energy threshold calculating subunit is used for calculating the target energy threshold according to the preset weight coefficient, the maximum value and the average value.
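The target energy threshold above is derived from a preset weight coefficient together with the maximum and average of the energy density matrix. The exact combination is not spelled out in this passage, so the following is a minimal sketch under the assumption of a weighted-sum combination; the function name and the weight value are illustrative.

```python
import numpy as np

def target_energy_threshold(P, weight=0.5):
    """Assumed form: weighted combination of the maximum and the mean
    of the energy density matrix P. The larger the weight, the closer
    the threshold sits to the peak energy."""
    return weight * P.max() + (1.0 - weight) * P.mean()

# Toy energy density matrix: max = 6.0, mean = 3.0
P = np.array([[0.0, 2.0],
              [4.0, 6.0]])
t = target_energy_threshold(P, weight=0.5)
```

A threshold anchored between the mean and the peak adapts to the overall signal level, so the high-energy region is detected consistently across recordings of different absolute energy.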
In one implementation of the embodiment of the present application, the action detection module may include:
the first action detection unit is used for inputting the energy density matrix after filtering processing into a trained first classification network for processing, and determining whether the target object generates a specified action or not according to the classification result of the first classification network; the first classification network is a neural network obtained by training by taking first sample data as a training set, and the first sample data is pre-acquired sample energy density matrix data with a preset action label.
In another implementation manner of the embodiment of the present application, the target signal image is a doppler frequency-time chart, where the doppler frequency-time chart is used to represent a relationship between a doppler frequency and a time of each radar signal sampling point in the target area, and the action detection module may include:
An energy burst curve calculation unit, configured to calculate an energy burst curve of the high energy region according to the first contour vector, the second contour vector, and the target signal matrix;
a distance-time diagram construction unit, configured to construct a distance-time diagram of the target area according to the radar reflection signal, where the distance-time diagram is used to represent a relationship between a distance and a time of each radar signal sampling point in the target area;
a distance signal matrix conversion unit for converting the distance-time map into a distance signal matrix;
a contour vector extraction unit, configured to acquire a third contour vector corresponding to the distance signal matrix by using the same method as that for acquiring the first contour vector corresponding to the target signal matrix;
the second action detection unit is used for inputting the first contour vector, the second contour vector, the third contour vector and the energy burst curve into a trained second classification network for processing, and determining whether the target object generates a specified action or not according to the classification result of the second classification network; the second classification network is a neural network trained by taking second sample data as a training set, and the second sample data is pre-collected sample contour vector data with a preset action label and sample energy burst curve data.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements any one of the motion detection methods as shown in fig. 1.
The embodiment of the application also provides a computer program product which, when run on a terminal device, causes the terminal device to perform any one of the action detection methods as shown in fig. 1.
Fig. 12 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in fig. 12, the terminal device 3 of this embodiment includes: a processor 30, a memory 31 and a computer program 32 stored in said memory 31 and executable on said processor 30. The processor 30, when executing the computer program 32, implements the steps of the embodiments of the respective action detection methods described above, such as steps 101 to 104 shown in fig. 1. Alternatively, the processor 30 may perform the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 201 to 204 shown in fig. 11, when executing the computer program 32.
The computer program 32 may be divided into one or more modules/units which are stored in the memory 31 and executed by the processor 30 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 32 in the terminal device 3.
The processor 30 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 31 may be an internal storage unit of the terminal device 3, such as a hard disk or a memory of the terminal device 3. The memory 31 may be an external storage device of the terminal device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 3. Further, the memory 31 may also include both an internal storage unit and an external storage device of the terminal device 3. The memory 31 is used for storing the computer program as well as other programs and data required by the terminal device. The memory 31 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the system embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present application.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the computer readable medium may include content that is subject to appropriate increases and decreases as required by jurisdictions in which such content is subject to legislation and patent practice, such as in certain jurisdictions in which such content is not included as electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A motion detection method, comprising:
acquiring radar reflection signals of a target area where a target object is located;
constructing a target signal image according to the radar reflection signal, wherein the target signal image is used for representing the relationship between specified radar parameters and time of the target area;
performing filtering processing on the target signal image to remove outliers and noise points in the target signal image;
and finishing the action detection of the target object according to the target signal image after the filtering processing.
2. The method of claim 1, wherein the performing a filtering process on the target signal image to remove outliers and noise points in the target signal image comprises:
Detecting a high-energy region in the target signal image, wherein the high-energy region is a region in the target signal image, and the energy corresponding to the appointed radar parameter is larger than a target energy threshold value;
and performing filtering processing on the target signal image to remove outliers and noise points which are located outside the high-energy region in the target signal image.
3. The method of claim 2, wherein the detecting the high-energy region in the target signal image comprises:
converting the target signal image into a target signal matrix;
calculating an energy density matrix corresponding to the target signal matrix, wherein the elements of the energy density matrix are the energy values corresponding to the elements of the target signal matrix;
calculating the target energy threshold according to the energy density matrix;
and determining, as the high-energy region, the region of the energy density matrix in which the elements have energy values greater than the target energy threshold;
and wherein the performing filtering processing on the target signal image to remove outliers and noise points in the target signal image comprises:
extracting a first contour vector and a second contour vector of the high-energy region, wherein the first contour vector is formed by the maximum row index of the high-energy region in each column, and the second contour vector is formed by the minimum row index of the high-energy region in each column;
and for each column of the energy density matrix, setting the elements whose row index is greater than the first row index and the elements whose row index is smaller than the second row index to a specified value, to obtain the filtered energy density matrix, wherein the first row index is the element of the first contour vector corresponding to the column, and the second row index is the element of the second contour vector corresponding to the column.
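For illustration only, the column-wise contour extraction and filtering recited in claim 3 can be sketched as follows. This is one plausible reading of the claim, not the patented implementation; the function name, the NumPy matrix representation, and the fill value of `0.0` for the "specified value" are assumptions:

```python
import numpy as np

def filter_outside_contours(energy, threshold, fill_value=0.0):
    """Column-wise contour filtering in the spirit of claim 3.

    For each column, the first contour vector holds the largest row index
    whose energy exceeds the threshold, and the second contour vector the
    smallest such row index; elements above the first contour or below the
    second contour are set to fill_value (the 'specified value').
    """
    mask = energy > threshold
    n_cols = energy.shape[1]
    first = np.full(n_cols, -1)   # max row index of the high-energy region per column
    second = np.full(n_cols, -1)  # min row index of the high-energy region per column
    filtered = energy.copy()
    for c in range(n_cols):
        rows = np.flatnonzero(mask[:, c])
        if rows.size == 0:        # no high-energy element in this column
            filtered[:, c] = fill_value
            continue
        first[c], second[c] = rows.max(), rows.min()
        filtered[:second[c], c] = fill_value     # row index < second contour
        filtered[first[c] + 1:, c] = fill_value  # row index > first contour
    return filtered, first, second
```

Under this reading, outliers and noise points falling outside the band delimited by the two contours are suppressed, while everything inside the band, including low-energy gaps, is retained.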
4. The method of claim 3, further comprising, after extracting the first contour vector and the second contour vector of the high-energy region:
performing filtering processing on the first contour vector and the second contour vector respectively, to remove the outliers contained in the first contour vector and the outliers contained in the second contour vector.
5. The method of claim 3, wherein the calculating the target energy threshold according to the energy density matrix comprises:
calculating the maximum value and the average value of the elements of the energy density matrix;
and calculating the target energy threshold according to a preset weight coefficient, the maximum value and the average value.
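Claim 5 fixes only the ingredients of the threshold (a preset weight coefficient, the maximum and the average of the energy density matrix), not the exact combination. A minimal sketch, assuming a linear blend of the two statistics, which is one plausible form:

```python
import numpy as np

def target_energy_threshold(energy, weight=0.5):
    # Assumed combination: a convex blend of the matrix maximum and mean,
    # steered by the preset weight coefficient. The actual formula used by
    # the application is not disclosed in the claim.
    return weight * energy.max() + (1.0 - weight) * energy.mean()
```

A larger weight pushes the threshold toward the peak energy, shrinking the detected high-energy region; a smaller weight pulls it toward the mean, enlarging it.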
6. The method according to any one of claims 3 to 5, wherein the completing the action detection of the target object according to the target signal image after the filtering processing comprises:
inputting the filtered energy density matrix into a trained first classification network for processing, and determining, according to the classification result of the first classification network, whether the target object performs a specified action, wherein the first classification network is a neural network trained with first sample data as a training set, and the first sample data is pre-acquired sample energy density matrix data carrying preset action labels.
7. The method according to any one of claims 3 to 5, wherein the target signal image is a Doppler frequency-time diagram used for representing the relationship between the Doppler frequency of each radar signal sampling point in the target area and time, and the completing the action detection of the target object according to the target signal image after the filtering processing comprises:
calculating an energy burst curve of the high-energy region according to the first contour vector, the second contour vector and the target signal matrix;
constructing a distance-time diagram of the target area according to the radar reflection signal, wherein the distance-time diagram is used for representing the relationship between the distance of each radar signal sampling point in the target area and time;
converting the distance-time diagram into a distance signal matrix;
acquiring a third contour vector corresponding to the distance signal matrix in the same manner as the first contour vector corresponding to the target signal matrix;
and inputting the first contour vector, the second contour vector, the third contour vector and the energy burst curve into a trained second classification network for processing, and determining, according to the classification result of the second classification network, whether the target object performs a specified action, wherein the second classification network is a neural network trained with second sample data as a training set, and the second sample data is pre-collected sample contour vector data and sample energy burst curve data carrying preset action labels.
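Claim 7 does not fix a formula for the energy burst curve, only its inputs (the two contour vectors and the target signal matrix). One assumed reading, offered purely for illustration, is the per-column energy of the target signal matrix accumulated between the two contour rows:

```python
import numpy as np

def energy_burst_curve(signal, first, second):
    """Per-column energy summed between the two contour rows.

    An assumed interpretation of the 'energy burst curve' of claim 7:
    for each column, accumulate the target signal matrix between the
    second (min) and first (max) contour row indices. Columns without a
    high-energy band (first == -1) contribute zero.
    """
    curve = np.zeros(signal.shape[1])
    for c in range(signal.shape[1]):
        if first[c] >= 0:  # column has a high-energy band
            curve[c] = signal[second[c]:first[c] + 1, c].sum()
    return curve
```

The resulting one-dimensional curve over time would then accompany the three contour vectors as input features to the second classification network.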
8. An action detection device, comprising:
the radar signal acquisition module is used for acquiring radar reflection signals of a target area where a target object is located;
the signal image construction module is used for constructing a target signal image according to the radar reflection signal, and the target signal image is used for representing the relationship between the specified radar parameter of the target area and time;
the signal image filtering module is used for performing filtering processing on the target signal image so as to remove outliers and noise points in the target signal image;
and the action detection module is used for completing the action detection of the target object according to the target signal image after the filtering processing.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the action detection method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the action detection method according to any one of claims 1 to 7.
CN202210339126.6A 2022-04-01 2022-04-01 Action detection method and device, terminal equipment and storage medium Pending CN116933002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210339126.6A CN116933002A (en) 2022-04-01 2022-04-01 Action detection method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116933002A true CN116933002A (en) 2023-10-24

Family

ID=88383061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210339126.6A Pending CN116933002A (en) 2022-04-01 2022-04-01 Action detection method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116933002A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117331047A (en) * 2023-12-01 2024-01-02 德心智能科技(常州)有限公司 Human behavior data analysis method and system based on millimeter wave radar

Similar Documents

Publication Publication Date Title
Shorfuzzaman et al. Towards the sustainable development of smart cities through mass video surveillance: A response to the COVID-19 pandemic
US20210358611A1 (en) Method for Detecting Epileptic Spike, Method for Training Network Model, and Computer Device
CN112085010B (en) Mask detection and deployment system and method based on image recognition
CN110689043A (en) Vehicle fine granularity identification method and device based on multiple attention mechanism
US20220028230A1 (en) Methods And System For Monitoring An Environment
CN112990313B (en) Hyperspectral image anomaly detection method and device, computer equipment and storage medium
CN111523362A (en) Data analysis method and device based on electronic purse net and electronic equipment
CN116933002A (en) Action detection method and device, terminal equipment and storage medium
CN111476102A (en) Safety protection method, central control equipment and computer storage medium
CN111444926A (en) Radar-based regional people counting method, device, equipment and storage medium
CN113205510A (en) Railway intrusion foreign matter detection method, device and terminal
CN116311081B (en) Medical laboratory monitoring image analysis method and system based on image recognition
CN115147618A (en) Method for generating saliency map, method and device for detecting abnormal object
CN112885014A (en) Early warning method, device, system and computer readable storage medium
Swapna et al. A regression neural network based glaucoma detection system using texture features
CN112861711A (en) Regional intrusion detection method and device, electronic equipment and storage medium
Rondón et al. Real-Time Detection and Clasification System of Biosecurity Elements Using Haar Cascade Classifier with Open Source
CN112232329A (en) Multi-core SVM training and alarming method, device and system for intrusion signal recognition
CN110334671A (en) A kind of violence infringement detection system and detection method based on Expression Recognition
KR20150120805A (en) Method and system for detecting human in range image
CN114155589B (en) Image processing method, device, equipment and storage medium
CN116895286B (en) Printer fault monitoring method and related device
AU2021106278A4 (en) Automated Anterior Cruciate Ligament (ACL) Tear Detection System
CN117764993B (en) Water quality on-line monitoring system and method based on image analysis
CN111449652B (en) Construction safety monitoring method and device based on brain wave analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination