CN112084980B - Pedestrian footstep state identification method and device - Google Patents
Pedestrian footstep state identification method and device
- Publication number
- CN112084980B CN112084980B CN202010965318.9A CN202010965318A CN112084980B CN 112084980 B CN112084980 B CN 112084980B CN 202010965318 A CN202010965318 A CN 202010965318A CN 112084980 B CN112084980 B CN 112084980B
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention provides a pedestrian footstep state identification method and device, comprising the following steps: collecting an N-th frame data matrix; performing image binarization processing on the N-th frame data matrix to obtain a signal amplitude data matrix; performing connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result; performing image feature extraction on the connected domain step detection result to obtain feature data of the N-th frame connected domain; and comparing the feature data of the N-th frame connected domain with the feature data of the (N-1)-th frame connected domain, and determining the footstep state of the object to be detected according to the comparison result. The walking state of a pedestrian can thereby be analyzed, so that the pedestrian's behavior is monitored.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a pedestrian step state identification method and device.
Background
The smart ground senses the positions of pedestrians by means of a lattice sensor network laid on the ground surface. When a pedestrian walks on the ground, each footstep lands at a corresponding lattice position on the smart ground and stimulates the sensors at that position to generate signals; the signals are sampled and transmitted to a back-end signal processing system for detection, thereby yielding the pedestrian's footstep positions. The footstep state of the pedestrian during walking then needs to be analyzed.
However, no method for analyzing the footstep state of a pedestrian exists in the prior art, and an effective solution to this problem has not yet been proposed.
Disclosure of Invention
Accordingly, the present invention is directed to a pedestrian footstep state identification method and device that can analyze the walking state of a pedestrian and thereby monitor the pedestrian's behavior.
In a first aspect, an embodiment of the present invention provides a pedestrian footstep state identification method, the method comprising:
collecting an N-th frame data matrix;
performing image binarization processing on the N-th frame data matrix to obtain a signal amplitude data matrix;
performing connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result;
performing image feature extraction on the connected domain step detection result to obtain feature data of the N-th frame connected domain;
and comparing the feature data of the N-th frame connected domain with the feature data of the (N-1)-th frame connected domain, and determining the footstep state of the object to be detected according to the comparison result.
Further, the N-th frame data matrix includes a plurality of sensor data, and performing image binarization processing on the N-th frame data matrix to obtain a signal amplitude data matrix comprises:
comparing the amplitude value corresponding to each sensor data with a set threshold value;
if the amplitude value is larger than the set threshold value, retaining the amplitude value corresponding to the sensor data;
if the amplitude value is smaller than or equal to the set threshold value, setting the amplitude value corresponding to the sensor data to 0;
and forming the signal amplitude data matrix from the sensor data whose amplitude values are retained and the sensor data set to 0.
Further, performing connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result comprises repeatedly executing the following process until all sensor data in the signal amplitude data matrix have been traversed:
selecting any sensor data from the signal amplitude data matrix as the current sensor data, and marking the current sensor data;
selecting, from the sensor data not yet marked, the sensor data adjacent in position to the current sensor data, and marking the selected sensor data;
and forming a connected domain from the current sensor data and the selected sensor data, and taking the connected domain as the connected domain step detection result.
Further, the connected domain step detection result includes at least one connected domain, and performing image feature extraction on the connected domain step detection result to obtain feature data of the N-th frame connected domain comprises:
determining the number of sensor data in the connected domain;
obtaining the pixel area of the N-th frame connected domain from the number of sensor data;
and obtaining the average signal intensity of the N-th frame connected domain from the amplitude values corresponding to the sensor data.
Further, comparing the feature data of the N-th frame connected domain with the feature data of the (N-1)-th frame connected domain and determining the footstep state of the object to be detected according to the comparison result comprises:
comparing the pixel area and average signal intensity of the N-th frame connected domain with the pixel area and average signal intensity of the connected domain at the same position in the (N-1)-th frame;
if the pixel area of the N-th frame connected domain is not smaller than that of the (N-1)-th frame connected domain and the average signal intensity of the N-th frame connected domain is larger than that of the (N-1)-th frame connected domain, determining that the footstep of the object to be detected is in a falling state;
if the pixel area of the N-th frame connected domain is equal to that of the (N-1)-th frame connected domain and the average signal intensity of the N-th frame connected domain is equal to that of the (N-1)-th frame connected domain, determining that the footstep of the object to be detected is in a staying state;
and if the pixel area of the N-th frame connected domain is not larger than that of the (N-1)-th frame connected domain and the average signal intensity of the N-th frame connected domain is smaller than that of the (N-1)-th frame connected domain, determining that the footstep of the object to be detected is in a lifting state.
In a second aspect, an embodiment of the present invention provides a pedestrian footstep state identification device, comprising:
an acquisition unit for collecting an N-th frame data matrix;
a binarization processing unit for performing image binarization processing on the N-th frame data matrix to obtain a signal amplitude data matrix;
a connected domain processing unit for performing connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result;
an extraction unit for performing image feature extraction on the connected domain step detection result to obtain feature data of the N-th frame connected domain;
and a comparison unit for comparing the feature data of the N-th frame connected domain with the feature data of the (N-1)-th frame connected domain, and determining the footstep state of the object to be detected according to the comparison result.
Further, the N-th frame data matrix includes a plurality of sensor data, and the binarization processing unit is specifically configured to:
compare the amplitude value corresponding to each sensor data with a set threshold value;
if the amplitude value is larger than the set threshold value, retain the amplitude value corresponding to the sensor data;
if the amplitude value is smaller than or equal to the set threshold value, set the amplitude value corresponding to the sensor data to 0;
and form the signal amplitude data matrix from the sensor data whose amplitude values are retained and the sensor data set to 0.
Further, the connected domain processing unit is configured to repeatedly perform the following processing until all sensor data in the signal amplitude data matrix have been traversed:
selecting any sensor data from the signal amplitude data matrix as the current sensor data, and marking the current sensor data;
selecting, from the sensor data not yet marked, the sensor data adjacent in position to the current sensor data, and marking the selected sensor data;
and forming a connected domain from the current sensor data and the selected sensor data, and taking the connected domain as the connected domain step detection result.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the method described above when executing the computer program.
In a fourth aspect, embodiments of the present invention provide a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method as described above.
The embodiment of the invention provides a pedestrian footstep state identification method and device, comprising the following steps: collecting an N-th frame data matrix; performing image binarization processing on the N-th frame data matrix to obtain a signal amplitude data matrix; performing connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result; performing image feature extraction on the connected domain step detection result to obtain feature data of the N-th frame connected domain; and comparing the feature data of the N-th frame connected domain with the feature data of the (N-1)-th frame connected domain, and determining the footstep state of the object to be detected according to the comparison result. The walking state of a pedestrian can thereby be analyzed, so that the pedestrian's behavior is monitored.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention or of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for a person skilled in the art, other drawings can be obtained from them without inventive effort.
FIG. 1 is a flowchart of a method for identifying a pedestrian's footstep status according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a 67th frame data matrix according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of a 16×8 signal amplitude data matrix according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of a connected domain of the 67th frame data matrix according to a second embodiment of the present invention;
FIG. 5 is a schematic diagram of a connected domain of the 66th frame data matrix according to a second embodiment of the present invention;
Fig. 6 is a schematic diagram of a pedestrian step status recognition device according to a third embodiment of the present invention.
Icon:
1-an acquisition unit; a 2-binarization processing unit; a 3-connected domain processing unit; 4-an extraction unit; 5-a comparison unit.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
The smart ground senses the positions of pedestrians by means of a lattice sensor network laid on the ground surface. When a pedestrian walks on the ground, each footstep lands at a corresponding lattice position on the smart ground and stimulates the sensors at that position to generate signals; the signals are sampled and transmitted to a back-end signal processing system for detection, thereby yielding the pedestrian's footstep positions.
The signals of the smart ground's lattice sensors at different moments are equivalent to a camera continuously photographing a specific area: the ground sensing conditions in the area are recorded sequentially over time. When a pedestrian walks on the ground, the footsteps are recorded by the lattice sensors of the smart ground, and the footstep state of the pedestrian is obtained. From this footstep state, the walking state of the pedestrian can be predicted, thereby providing a basis for subsequent pedestrian behavior analysis.
In order to facilitate understanding of the present embodiment, the following describes embodiments of the present invention in detail.
Embodiment one:
Fig. 1 is a flowchart of a method for identifying a pedestrian's step status according to an embodiment of the present invention.
Referring to fig. 1, the method includes the steps of:
Step S101, collecting an N-th frame data matrix;
Here, the N-th frame data matrix is acquired from an M×N smart ground lattice, where M and N are positive integers, M ≥ 16, N ≥ 4 (in this context M and N denote the lattice dimensions, distinct from the frame index N).
Step S102, performing image binarization processing on the N-th frame data matrix to obtain a signal amplitude data matrix;
Here, the image binarization processing is also called the 0-1 detection method: the sensor data in the N-th frame data matrix are compared with a set threshold value to obtain the signal amplitude data matrix.
Step S103, performing connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result;
Here, connected domain analysis is performed on the sensor data in the signal amplitude data matrix; sensor data adjacent to each other in position are classified into the same connected domain, and the formed connected domains are taken as the connected domain step detection result.
Step S104, performing image feature extraction on the connected domain step detection result to obtain feature data of the N-th frame connected domain, where the feature data of the N-th frame connected domain comprise the pixel area of the connected domain and the average signal intensity of the connected domain.
Step S105, comparing the feature data of the N-th frame connected domain with the feature data of the (N-1)-th frame connected domain, and determining the footstep state of the object to be detected according to the comparison result.
By repeating the above steps, the footstep state of the object to be detected as it changes over time can be obtained. The object to be detected includes, but is not limited to, a pedestrian.
Further, the N-th frame data matrix includes a plurality of sensor data, and step S102 includes the following steps:
Step S201, comparing the amplitude value corresponding to each sensor data with a set threshold value;
Step S202, if the amplitude value is larger than the set threshold value, retaining the amplitude value corresponding to the sensor data;
Step S203, if the amplitude value is smaller than or equal to the set threshold value, setting the amplitude value corresponding to the sensor data to 0;
Step S204, forming the signal amplitude data matrix from the sensor data whose amplitude values are retained and the sensor data set to 0.
Specifically, the N-th frame data matrix includes a plurality of sensor data, each corresponding to an amplitude value. The amplitude value corresponding to each sensor data is compared with the set threshold value: if the amplitude value is larger than the set threshold value, the amplitude value is retained; otherwise, the amplitude value is replaced by 0. The sensor data with retained amplitude values and the sensor data whose amplitude values were replaced by 0 then form the signal amplitude data matrix.
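The binarization step above can be sketched in a few lines of Python with NumPy (this sketch is illustrative only and not part of the patent; the function name `binarize` and the choice to zero out values at or below the threshold are assumptions made here):

```python
import numpy as np

def binarize(frame, threshold):
    """Keep sensor amplitudes above the threshold unchanged and
    set all others to 0 (the '0-1 detection' step)."""
    frame = np.asarray(frame, dtype=float)
    return np.where(frame > threshold, frame, 0.0)

# Example: with a threshold of 10, only the amplitudes 20 and 50 survive.
amp_matrix = binarize([[5, 20], [50, 3]], threshold=10)
```

Note that the result is not a strict 0/1 mask: surviving entries keep their original amplitudes, since the later feature-extraction step needs them to compute the average signal intensity.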
Further, step S103 includes repeatedly performing the following processing until all sensor data in the signal amplitude data matrix have been traversed:
selecting any sensor data from the signal amplitude data matrix as the current sensor data, and marking the current sensor data;
selecting, from the sensor data not yet marked, the sensor data adjacent in position to the current sensor data, and marking the selected sensor data;
forming a connected domain from the current sensor data and the selected sensor data, and taking the connected domain as the connected domain step detection result.
Specifically, any sensor data is selected from the signal amplitude data matrix and taken as the current detection point, i.e. the current sensor data. Since the current sensor data has already been selected, it must be marked to avoid being selected again.
Next, sensor data adjacent in position to the current sensor data are selected from the sensor data not yet marked, and the selected sensor data are marked. Positional adjacency means that, taking the current sensor data as the center, sensor data exist above, below, to the left, to the right, or to the upper left, lower left, upper right or lower right of it. The current sensor data and the selected sensor data form a connected domain, which is taken as the connected domain step detection result.
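The marking-and-grouping procedure above is a standard 8-connectivity connected-component labeling, which can be sketched as a breadth-first search (a minimal illustration, not the patent's implementation; the function name and the use of a `visited` array as the "mark" are assumptions):

```python
import numpy as np
from collections import deque

def connected_domains(amp):
    """Group nonzero, 8-adjacent sensor data into connected domains.
    Returns a list of domains, each a list of (row, col) positions."""
    amp = np.asarray(amp, dtype=float)
    rows, cols = amp.shape
    visited = np.zeros(amp.shape, dtype=bool)  # the 'mark' from the patent text
    domains = []
    for r in range(rows):
        for c in range(cols):
            if amp[r, c] > 0 and not visited[r, c]:
                queue, domain = deque([(r, c)]), []
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    domain.append((y, x))
                    # examine the 8 neighbours: up, down, left, right, diagonals
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and amp[ny, nx] > 0 and not visited[ny, nx]):
                                visited[ny, nx] = True
                                queue.append((ny, nx))
                domains.append(domain)
    return domains
```

Traversal stops once every sensor in the matrix has been visited, matching the "repeat until all sensor data are traversed" condition.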
Further, the connected domain step detection result includes at least one connected domain, and step S104 includes the following steps:
Step S301, determining the number of sensor data in the connected domain;
Step S302, obtaining the pixel area of the N-th frame connected domain from the number of sensor data;
Step S303, obtaining the average signal intensity of the N-th frame connected domain from the amplitude values corresponding to the sensor data.
Specifically, the connected domain includes a plurality of sensor data; the number of sensor data is counted and taken as the pixel area of the N-th frame connected domain.
Within the connected domain, each sensor data corresponds to an amplitude value. The amplitude values of all sensor data are summed and divided by the number of sensor data, yielding the average signal intensity of the N-th frame connected domain.
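Steps S301-S303 reduce to a count and a mean, as in this short sketch (illustrative only; the function name `domain_features` and the (row, col) domain representation are assumptions carried over from the labeling sketch style):

```python
def domain_features(amp, domain):
    """Pixel area = number of sensors in the domain; average signal
    intensity = sum of their amplitudes divided by the area."""
    area = len(domain)
    mean_intensity = sum(amp[y][x] for y, x in domain) / area
    return area, mean_intensity
```

For example, a domain of two sensors with amplitudes 134.6 and 30.49 (the values used in embodiment two) has pixel area 2 and average signal intensity (134.6 + 30.49) / 2 = 82.545.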
Further, step S105 includes the following steps:
Step S401, comparing the pixel area and average signal intensity of the N-th frame connected domain with the pixel area and average signal intensity of the connected domain at the same position in the (N-1)-th frame;
Specifically, the connected domain of the (N-1)-th frame is computed in the same way as the connected domain of the N-th frame. After the N-th frame connected domain is obtained, the (N-1)-th frame connected domain at the same position is located, and the pixel area and average signal intensity of the N-th frame connected domain are compared with those of the (N-1)-th frame connected domain.
Step S402, if the pixel area of the N-th frame connected domain is not smaller than that of the (N-1)-th frame connected domain and the average signal intensity of the N-th frame connected domain is larger than that of the (N-1)-th frame connected domain, determining that the footstep of the object to be detected is in a falling state;
Step S403, if the pixel area of the N-th frame connected domain is equal to that of the (N-1)-th frame connected domain and the average signal intensity of the N-th frame connected domain is equal to that of the (N-1)-th frame connected domain, determining that the footstep of the object to be detected is in a staying state;
Step S404, if the pixel area of the N-th frame connected domain is not larger than that of the (N-1)-th frame connected domain and the average signal intensity of the N-th frame connected domain is smaller than that of the (N-1)-th frame connected domain, determining that the footstep of the object to be detected is in a lifting state.
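The three conditions of steps S402-S404 can be written as one decision function (a sketch, not the patent's code; the function name, the string labels, and the fall-through "undetermined" case for inputs outside the three claimed conditions are assumptions):

```python
def step_state(area_n, mean_n, area_prev, mean_prev):
    """Classify the footstep state of frame N against frame N-1 from the
    pixel areas and average signal intensities of the matched domains."""
    if area_n >= area_prev and mean_n > mean_prev:
        return "falling"        # foot pressing down: area grows, intensity rises
    if area_n == area_prev and mean_n == mean_prev:
        return "staying"        # nothing changed between frames
    if area_n <= area_prev and mean_n < mean_prev:
        return "lifting"        # foot being raised: area shrinks, intensity drops
    return "undetermined"       # combination not covered by the three claimed cases
```

The three branches are mutually exclusive because they require the average signal intensity to be respectively greater than, equal to, or less than the previous frame's value.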
The embodiment of the invention provides a pedestrian footstep state identification method comprising the following steps: collecting an N-th frame data matrix; performing image binarization processing on the N-th frame data matrix to obtain a signal amplitude data matrix; performing connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result; performing image feature extraction on the connected domain step detection result to obtain feature data of the N-th frame connected domain; and comparing the feature data of the N-th frame connected domain with the feature data of the (N-1)-th frame connected domain, and determining the footstep state of the object to be detected according to the comparison result. The walking state of a pedestrian can thereby be analyzed, so that the pedestrian's behavior is monitored.
Embodiment two:
Fig. 2 is a schematic diagram of the 67th frame data matrix according to the second embodiment of the present invention.
Referring to fig. 2, the smart ground lattice is 16×8, where 16 is the number of rows and 8 is the number of columns. Image binarization processing is performed on the 67th frame data matrix to obtain a signal amplitude data matrix, and the sensor data with retained amplitude values in the signal amplitude data matrix are taken as the detection result of the 67th frame data matrix; see fig. 3.
Connected domain processing is performed on the detection result of the 67th frame data matrix, and sensor data adjacent to each other are classified into the same connected domain, thereby obtaining a connected domain, which is taken as the connected domain step detection result. Image feature extraction is performed on the connected domain step detection result, giving a 67th frame connected domain with a pixel area of 2 and an average signal intensity of 82.5450. Two sensor data are adjacent to each other and form one connected domain, so the pixel area of the 67th frame connected domain is 2. Referring to fig. 4, one of the sensor data is [X, Y] = [4, 3], where X denotes the row number and Y the column number, i.e. this sensor data is located at row 4, column 3; Index denotes the signal intensity, which is 134.6 for this sensor data. The other sensor data is [X, Y] = [5, 3], located at row 5, column 3, with a signal intensity of 30.49. The average signal intensity of the connected domain, 82.5450, is the sum of 134.6 and 30.49 divided by 2.
Referring to fig. 5, the connected domain step detection result of the 66th frame data matrix is obtained by the same calculation method as for the 67th frame data matrix. Image feature extraction on the connected domain step detection result of the 66th frame data matrix gives a 66th frame connected domain with a pixel area of 2 and an average signal intensity of 135.86.
The position of the connected domain in the connected domain step detection result of the 66th frame data matrix is the same as that in the 67th frame data matrix. The connected domain step detection result of the 66th frame data matrix comprises one connected domain composed of two sensor data. One of the sensor data is [X, Y] = [4, 3], located at row 4, column 3, with a signal intensity (Index) of 191.1; the other is [X, Y] = [5, 3], located at row 5, column 3, with a signal intensity of 80.62. The average signal intensity of the connected domain, 135.86, is the sum of 191.1 and 80.62 divided by 2.
From the above, the pixel area of the 67th frame connected domain is the same as that of the 66th frame connected domain, while the average signal intensity of the 67th frame connected domain is smaller than that of the 66th frame connected domain; it is therefore determined that the footstep state of the object to be detected in the 67th frame data matrix is a lifting state.
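The arithmetic of this embodiment can be checked directly (an illustrative verification using the amplitude values stated above; the variable names are ours):

```python
# Connected domain in frames 67 and 66: sensors at rows 4-5, column 3 (1-indexed)
frame67_amps = [134.6, 30.49]
frame66_amps = [191.1, 80.62]

area67, area66 = len(frame67_amps), len(frame66_amps)   # both 2
mean67 = sum(frame67_amps) / area67                     # 82.545
mean66 = sum(frame66_amps) / area66                     # 135.86

# Equal pixel areas, but a lower average intensity in frame 67 -> lifting state
state = "lifting" if area67 <= area66 and mean67 < mean66 else "other"
```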
Embodiment III:
Fig. 6 is a schematic diagram of a pedestrian step status recognition device according to a third embodiment of the present invention.
Referring to fig. 6, the apparatus includes: the device comprises an acquisition unit 1, a binarization processing unit 2, a connected domain processing unit 3, an extraction unit 4 and a comparison unit 5.
The acquisition unit 1 is used for collecting an N-th frame data matrix;
the binarization processing unit 2 is used for performing image binarization processing on the N-th frame data matrix to obtain a signal amplitude data matrix;
the connected domain processing unit 3 is used for performing connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result;
the extraction unit 4 is used for performing image feature extraction on the connected domain step detection result to obtain feature data of the N-th frame connected domain;
and the comparison unit 5 is used for comparing the feature data of the N-th frame connected domain with the feature data of the (N-1)-th frame connected domain, and determining the footstep state of the object to be detected according to the comparison result.
Further, the N-th frame data matrix includes a plurality of sensor data, and the binarization processing unit 2 is specifically configured to:
compare the amplitude value corresponding to each sensor data with a set threshold value;
if the amplitude value is larger than the set threshold value, retain the amplitude value corresponding to the sensor data;
if the amplitude value is smaller than or equal to the set threshold value, set the amplitude value corresponding to the sensor data to 0;
and form the signal amplitude data matrix from the sensor data whose amplitude values are retained and the sensor data set to 0.
Further, the connected domain processing unit 3 is configured to repeatedly perform the following processing until all sensor data in the signal amplitude data matrix have been traversed:
selecting any sensor data from the signal amplitude data matrix as the current sensor data, and marking the current sensor data;
selecting, from the sensor data not yet marked, the sensor data adjacent in position to the current sensor data, and marking the selected sensor data;
and forming a connected domain from the current sensor data and the selected sensor data, and taking the connected domain as the connected domain step detection result.
Further, the connected domain step detection result includes at least one connected domain, and the extraction unit 4 is specifically configured to:
determine the number of sensor data in the connected domain;
obtain the pixel area of the N-th frame connected domain from the number of sensor data;
and obtain the average signal intensity of the N-th frame connected domain from the amplitude values corresponding to the sensor data.
Further, the comparison unit 5 is specifically configured to:
compare the pixel area and the average signal intensity of the connected domain of the N-th frame with the pixel area and the average signal intensity of the connected domain at the same position in the (N-1)-th frame;
if the pixel area of the N-th frame connected domain is not smaller than that of the (N-1)-th frame connected domain and its average signal intensity is greater, determine that the footstep of the object to be detected is in a falling state;
if the pixel area and the average signal intensity of the N-th frame connected domain are equal to those of the (N-1)-th frame connected domain, determine that the footstep of the object to be detected is in a stay state; and
if the pixel area of the N-th frame connected domain is not larger than that of the (N-1)-th frame connected domain and its average signal intensity is smaller, determine that the footstep of the object to be detected is in a lifted state.
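These three decision rules can be sketched as a small classifier. The "undetermined" fallback is an added assumption for frame pairs that none of the patent's three conditions cover (for example, area grows while intensity drops):

```python
def footstep_state(area_n, mean_n, area_prev, mean_prev):
    """Classify the footstep state from frame-to-frame changes in a
    connected domain's pixel area and average signal intensity."""
    if area_n == area_prev and mean_n == mean_prev:
        return "stay"           # footprint unchanged between frames
    if area_n >= area_prev and mean_n > mean_prev:
        return "falling"        # footprint growing and intensifying
    if area_n <= area_prev and mean_n < mean_prev:
        return "lifted"         # footprint shrinking and fading
    return "undetermined"       # not covered by the three rules above
```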
The embodiment of the invention provides a pedestrian footstep state identification device which: collects an N-th frame data matrix; performs image binarization on the N-th frame data matrix to obtain a signal amplitude data matrix; performs connected domain processing on the signal amplitude data matrix to obtain a connected domain footstep detection result; extracts image features from the connected domain footstep detection result to obtain feature data of the N-th frame connected domain; and compares the feature data of the N-th frame connected domain with the feature data of the (N-1)-th frame connected domain, determining the footstep state of the object to be detected from the comparison result. The device can thereby analyze a pedestrian's walking state and monitor pedestrian behavior.
The embodiment of the invention also provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the steps of the pedestrian footstep state identification method provided by the above embodiment are implemented.
The embodiment of the invention also provides a computer-readable medium storing non-volatile program code executable by a processor; when executed by the processor, the program code performs the steps of the pedestrian footstep state identification method of the above embodiment.
The computer program product provided by the embodiment of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to perform the method described in the foregoing method embodiment, and specific implementations may refer to that method embodiment and are not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted", "connected", and "coupled" are to be construed broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or internal between two elements. The specific meaning of these terms in the present invention will be understood by those of ordinary skill in the art on a case-by-case basis.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the description of the present invention, it should be noted that directions or positional relationships indicated by terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", and "outer" are based on the directions or positional relationships shown in the drawings, are used merely for convenience and simplicity of description, and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present invention. Furthermore, the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate rather than limit its scope. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described therein may still be modified, or some of their technical features replaced by equivalents, while remaining within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
1. A method for identifying a pedestrian's footstep status, the method comprising:
Collecting an N frame data matrix;
performing image binarization processing on the Nth frame data matrix to obtain a signal amplitude data matrix;
carrying out connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result;
Extracting image features from the connected domain step detection result to obtain feature data of an N-th frame connected domain;
Comparing the characteristic data of the connected domain of the N frame with the characteristic data of the connected domain of the N-1 frame, and determining the footstep state of the object to be detected according to the comparison result;
the connected domain step detection result comprises at least one connected domain, the connected domain step detection result is subjected to image feature extraction to obtain feature data of the N-th frame connected domain, and the method comprises the following steps:
Determining the number of sensor data in the connected domain;
Obtaining the pixel area of the N frame connected domain according to the number of the sensor data;
obtaining the average signal intensity of the N frame connected domain according to the amplitude value corresponding to the sensor data;
comparing the feature data of the connected domain of the nth frame with the feature data of the connected domain of the N-1 th frame, and determining the step state of the object to be detected according to the comparison result, wherein the step state comprises the following steps:
Comparing the pixel area and the average signal intensity of the connected domain of the N-th frame with the pixel area and the average signal intensity of the connected domain at the same position of the N-1-th frame;
If the pixel area of the connected domain of the N frame is not smaller than the pixel area of the connected domain of the N-1 frame and the average signal intensity of the connected domain of the N frame is larger than the average signal intensity of the connected domain of the N-1 frame, determining that the footstep of the object to be detected is in a falling state;
If the pixel area of the connected domain of the N frame is equal to the pixel area of the connected domain of the N-1 frame and the average signal intensity of the connected domain of the N frame is equal to the average signal intensity of the connected domain of the N-1 frame, determining that the step of the object to be detected is in a stay state;
And if the pixel area of the connected domain of the N frame is not larger than the pixel area of the connected domain of the N-1 frame and the average signal intensity of the connected domain of the N frame is smaller than the average signal intensity of the connected domain of the N-1 frame, determining that the step of the object to be detected is in a lifting state.
2. The pedestrian footstep status recognition method of claim 1, wherein the nth frame data matrix comprises a plurality of sensor data, the image binarization processing is performed on the nth frame data matrix to obtain a signal amplitude data matrix, comprising:
Comparing the amplitude value corresponding to each sensor data with a set threshold value;
if the amplitude value is larger than the set threshold value, reserving the amplitude value corresponding to the sensor data;
If the amplitude value is smaller than the set threshold value, setting the amplitude value corresponding to the sensor data to be 0;
The sensor data retaining the amplitude value and the sensor data set to 0 are formed into the signal amplitude data matrix.
3. The method for recognizing the footstep state of a pedestrian according to claim 1, wherein the step of performing the connected domain processing on the signal amplitude data matrix to obtain the connected domain footstep detection result comprises the steps of repeatedly performing the following processing until the sensor data in the signal amplitude data matrix are traversed:
selecting any sensor data from the signal amplitude data matrix as current sensor data, and marking the current sensor data;
Selecting sensor data adjacent to the current sensor data position from the sensor data which are not marked yet, and marking the selected sensor data;
And forming the current sensor data and the selected sensor data into the connected domain, and taking the connected domain as a step detection result of the connected domain.
4. A pedestrian footstep status recognition device, the device comprising:
The acquisition unit is used for acquiring an N frame data matrix;
the binarization processing unit is used for performing image binarization processing on the Nth frame data matrix to obtain a signal amplitude data matrix;
the connected domain processing unit is used for carrying out connected domain processing on the signal amplitude data matrix to obtain a connected domain step detection result;
the extraction unit is used for extracting image features of the connected domain step detection result to obtain feature data of an N-th frame connected domain;
a comparison unit, configured to compare the feature data of the connected domain of the nth frame with the feature data of the connected domain of the N-1 th frame, and determine a step state of the object to be detected according to a comparison result;
the connected domain step detection result comprises at least one connected domain, and the extraction unit is specifically used for:
Determining the number of sensor data in the connected domain;
Obtaining the pixel area of the N frame connected domain according to the number of the sensor data;
obtaining the average signal intensity of the N frame connected domain according to the amplitude value corresponding to the sensor data;
the comparison unit is specifically configured to:
Comparing the pixel area and the average signal intensity of the connected domain of the N-th frame with the pixel area and the average signal intensity of the connected domain at the same position of the N-1-th frame;
If the pixel area of the connected domain of the N frame is not smaller than the pixel area of the connected domain of the N-1 frame and the average signal intensity of the connected domain of the N frame is larger than the average signal intensity of the connected domain of the N-1 frame, determining that the footstep of the object to be detected is in a falling state;
If the pixel area of the connected domain of the N frame is equal to the pixel area of the connected domain of the N-1 frame and the average signal intensity of the connected domain of the N frame is equal to the average signal intensity of the connected domain of the N-1 frame, determining that the step of the object to be detected is in a stay state;
And if the pixel area of the connected domain of the N frame is not larger than the pixel area of the connected domain of the N-1 frame and the average signal intensity of the connected domain of the N frame is smaller than the average signal intensity of the connected domain of the N-1 frame, determining that the step of the object to be detected is in a lifting state.
5. The pedestrian footstep status recognition device of claim 4, wherein the nth frame data matrix comprises a plurality of sensor data, the binarization processing unit being specifically configured to:
Comparing the amplitude value corresponding to each sensor data with a set threshold value;
if the amplitude value is larger than the set threshold value, reserving the amplitude value corresponding to the sensor data;
If the amplitude value is smaller than the set threshold value, setting the amplitude value corresponding to the sensor data to be 0;
The sensor data retaining the amplitude value and the sensor data set to 0 are formed into the signal amplitude data matrix.
6. The pedestrian step status recognition device according to claim 4, wherein the connected domain processing unit is configured to repeatedly perform the following processing until all the sensor data in the signal amplitude data matrix have been traversed:
selecting any sensor data from the signal amplitude data matrix as current sensor data, and marking the current sensor data;
Selecting sensor data adjacent to the current sensor data position from the sensor data which are not marked yet, and marking the selected sensor data;
And forming the current sensor data and the selected sensor data into the connected domain, and taking the connected domain as a step detection result of the connected domain.
7. An electronic device comprising a memory, a processor, the memory having stored thereon a computer program executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-3 when executing the computer program.
8. A computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of any of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010965318.9A CN112084980B (en) | 2020-09-14 | 2020-09-14 | Pedestrian footstep state identification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112084980A CN112084980A (en) | 2020-12-15 |
CN112084980B true CN112084980B (en) | 2024-05-28 |
Family
ID=73736282
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010965318.9A Active CN112084980B (en) | 2020-09-14 | 2020-09-14 | Pedestrian footstep state identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112084980B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0636033A (en) * | 1992-07-20 | 1994-02-10 | Fujitsu Ltd | Automatic target detecting method |
JPH09179930A (en) * | 1995-12-25 | 1997-07-11 | Olympus Optical Co Ltd | Information reproduction system, information recording medium, and information recording device |
CN103984040A (en) * | 2014-05-20 | 2014-08-13 | 刘达 | Biological recognition method based on infrared sensor array algorithm |
CN104408718A (en) * | 2014-11-24 | 2015-03-11 | 中国科学院自动化研究所 | Gait data processing method based on binocular vision measuring |
WO2016197297A1 (en) * | 2015-06-08 | 2016-12-15 | 北京旷视科技有限公司 | Living body detection method, living body detection system and computer program product |
CN110661529A (en) * | 2019-11-06 | 2020-01-07 | 杭州姿感科技有限公司 | Method and device for generating step amplitude sequence |
CN110728258A (en) * | 2019-10-22 | 2020-01-24 | 杭州姿感科技有限公司 | Step detection method and system based on matching of connected domains of front frame and rear frame |
CN110837794A (en) * | 2019-11-04 | 2020-02-25 | 杭州姿感科技有限公司 | Pedestrian number statistical method and device |
CN111428653A (en) * | 2020-03-27 | 2020-07-17 | 湘潭大学 | Pedestrian congestion state determination method, device, server and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10395376B2 (en) * | 2017-07-19 | 2019-08-27 | Qualcomm Incorporated | CMOS image sensor on-die motion detection using inter-pixel mesh relationship |
2020-09-14: Application CN202010965318.9A filed (CN); patent CN112084980B granted, status Active.
Non-Patent Citations (1)
Title |
---|
Moving target extraction for video surveillance based on connectivity detection; Zhu Jinyu; Liu Haiyan; Huang Shumei; Video Engineering (电视技术); 2007-10-17 (10); full text * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110084116B (en) | Road surface detection method, road surface detection device, computer equipment and storage medium | |
CN106845890B (en) | Storage monitoring method and device based on video monitoring | |
JP5333080B2 (en) | Image recognition system | |
JP5019375B2 (en) | Object detection apparatus and object detection method | |
CN108053653B (en) | Vehicle behavior prediction method and device based on LSTM | |
CN103870824B (en) | A kind of face method for catching and device during Face datection tracking | |
CN110781844B (en) | Security patrol monitoring method and device | |
CN110458126B (en) | Pantograph state monitoring method and device | |
CN116386090B (en) | Plankton identification method, system and medium based on scanning atlas | |
CN111523416A (en) | Vehicle early warning method and device based on highway ETC portal | |
CN108629310B (en) | Engineering management supervision method and device | |
CN110837794A (en) | Pedestrian number statistical method and device | |
CN112257520A (en) | People flow statistical method, device and system | |
US20170053172A1 (en) | Image processing apparatus, and image processing method | |
CN112084980B (en) | Pedestrian footstep state identification method and device | |
CN111126257A (en) | Behavior detection method and device | |
CN117173647A (en) | Insulator abnormality detection method and device, electronic equipment and storage medium | |
CN113050063B (en) | Obstacle detection method, device, equipment and storage medium based on laser sensor | |
CN115512306A (en) | Method for early warning of violence events in elevator based on image processing | |
CN111401291B (en) | Stranger identification method and device | |
CN114596496A (en) | Wheel state recognition method and device, and water spray control method and device | |
CN118762398A (en) | Step state attribute identification method based on connected domain features | |
CN117312828B (en) | Public facility monitoring method and system | |
CN111783689B (en) | Material line pressing identification method and device | |
CN112102362A (en) | Pedestrian step track determination method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||