CN116052088B - Point cloud-based activity space measurement method, system and computer equipment - Google Patents

Point cloud-based activity space measurement method, system and computer equipment

Info

Publication number
CN116052088B
CN116052088B (application CN202310200539.0A)
Authority
CN
China
Prior art keywords
point cloud
public space
space
cloud data
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310200539.0A
Other languages
Chinese (zh)
Other versions
CN116052088A (en)
Inventor
茹冰倩
李早
金昭
储文峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202310200539.0A priority Critical patent/CN116052088B/en
Publication of CN116052088A publication Critical patent/CN116052088A/en
Application granted granted Critical
Publication of CN116052088B publication Critical patent/CN116052088B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion of extracted features
    • G06V10/82 Arrangements using pattern recognition or machine learning using neural networks
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G06V20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The invention relates to the technical field of spatial vitality evaluation and solves the technical problem that current vitality space evaluation has low accuracy. It provides, in particular, a point cloud-based vitality space measurement method comprising the following steps: acquiring real public space point cloud data in a community to be measured at a measurement time point; preprocessing the public space point cloud data to obtain preprocessed public space simplified point cloud data; inputting the public space simplified point cloud data into an improved deep neural network for feature analysis to obtain a plurality of point cloud blocks of different categories, and screening human body point clouds from the point cloud blocks according to human skeleton features; and counting the number of public space personnel in the community to be measured according to the labeled human body point cloud. The invention extracts the human body point cloud from the real public space point cloud data with a convolutional neural network that embeds a graph attention mechanism, and evaluates spatial vitality comprehensively from both the number of people in the public space and their motion state, thereby improving the accuracy of vitality space evaluation.

Description

Point cloud-based activity space measurement method, system and computer equipment
Technical Field
The invention relates to the technical field of spatial vitality evaluation, in particular to a vitality spatial measurement method, system and computer equipment based on point cloud.
Background
Public space vitality refers to the people and their activities that can be observed in a public space. Evaluating public space vitality therefore intuitively reflects the spatial quality of a given community, helps community planners and managers quantitatively assess the design strategy and renovation approach of a planned or redeveloped area, and is of great significance for the development and construction of communities. At present, existing public space vitality measurement methods mainly fall into the following two categories:
the first uses statistical analysis of multi-source data such as mobile phone signaling data and POI density and directly evaluates the public space vitality index with crowd activity density as the measurement indicator; however, this method evaluates spatial vitality only from the degree of crowd aggregation, neglects the evaluation of human activity states and is therefore somewhat one-sided, which reduces the accuracy of the spatial vitality index;
the second indirectly evaluates the public space vitality index from aspects such as the facility composition, environmental comfort and functional mixing of the public space; however, this method only evaluates spatial vitality theoretically, from the objective angle of the environmental facilities required for human activities, without verification against the facts, so the spatial vitality index is prone to large deviations.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a point cloud-based vitality space measurement method, system and computer equipment, which solve the technical problem that current vitality space evaluation has low accuracy and achieve the aim of improving the accuracy of spatial vitality evaluation by evaluating the vitality of a public space according to real public space point cloud data containing personnel states.
In order to solve the technical problems, the invention provides the following technical scheme: a method for measuring activity space based on point cloud comprises the following steps:
S1, acquiring public space point cloud data in a community to be measured, which are acquired by image acquisition equipment at a measurement time point;
S2, preprocessing the public space point cloud data to obtain preprocessed public space simplified point cloud data;
S3, inputting the public space simplified point cloud data into an improved deep neural network for feature analysis to obtain a plurality of point cloud blocks of different categories, and screening human body point clouds from the point cloud blocks according to human skeleton features and labeling them;
S4, counting the number of public space personnel in the community to be measured according to the labeled human body point cloud, and calculating the ratio R of the number of public space personnel to the number of normally-resident personnel in the community to be measured;
S5, analyzing the gesture of each human body point cloud to count the number of moving people in the public space, and calculating the ratio Y of the number of moving people to the number of people in the public space;
S6, evaluating the vitality space of the community to be measured according to a preset outbound person threshold W and an activity threshold H;
if R is more than W and Y is more than H, marking the public space in the community to be measured at the measurement time point as a high activity space;
if R is more than W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a medium activity space;
and if R is less than or equal to W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a low activity space.
Further, in step S2, the specific process of preprocessing the public space point cloud data to obtain preprocessed public space reduced point cloud data includes the following steps:
s21, performing noise elimination operation on the public space point cloud data to obtain denoised public space point cloud data;
s22, carrying out sparsification operation on the denoised public space point cloud data to obtain the public space simplified point cloud data.
Further, in step S3, inputting the common space reduced point cloud data into an improved deep neural network for feature analysis to obtain a plurality of point cloud blocks of different categories, and then screening out human point clouds from the point cloud blocks according to human skeleton features and labeling the human point clouds, the specific process includes the following steps:
S31, performing multi-scale neighborhood feature analysis by adopting a coding convolution unit according to the common space reduced point cloud data so as to extract global feature information;
s32, analyzing the global feature information by adopting a graph attention unit to obtain local feature information under different scales;
s33, carrying out semantic segmentation by adopting a feature fusion and classification unit according to the global feature information and the local feature information to obtain point cloud blocks of different categories;
and S34, screening out human point clouds from the point cloud blocks by a threshold method according to human skeleton characteristics and marking.
Further, in step S5, the specific process of analyzing the pose of each human point cloud to count the number of moving people in the public space includes the following steps:
extracting human key joint points from each human point cloud according to a human skeleton structure;
calculating the three-dimensional distance between any two adjacent joint points in the key joint points of the human body, and judging the data larger than a preset threshold value as movement characteristic data;
determining a human body movement posture feature vector according to the movement feature data;
and according to the accumulation of the human motion attitude feature vectors, the number of the moving people in the public space is obtained.
Further, after step S6, the method further includes:
and S7, projecting the public space reduced point cloud data to a corresponding geographic information diagram to perform space dimension reduction, and drawing corresponding colors for coordinate points corresponding to the human body point cloud to obtain a vigor heat point diagram for representing space liveness.
Further, in step S7, the specific process of projecting the common spatially reduced point cloud data to a corresponding geographic information map to perform spatial dimension reduction, and drawing a corresponding color for a coordinate point corresponding to the human point cloud to obtain a vigor heat point map for representing spatial liveness includes the following steps:
s71, orthographic projection is carried out on the public space reduced point cloud data to a corresponding geographic information graph along a z-axis to carry out space dimension reduction, and a two-dimensional point cloud graph is obtained;
s72, drawing corresponding colors at corresponding grids in the two-dimensional point cloud image according to the gesture of each human body point cloud;
and S73, repeating the steps to draw corresponding colors on all grids corresponding to the human point cloud in the two-dimensional point cloud image, and obtaining a vigor heat point diagram for representing the space liveness.
Further, the key joint points of the human body comprise 15 joint points corresponding to the head, neck, left shoulder, right shoulder, sacral vertebra, left elbow, left hand, right elbow, right hand, left hip, right hip, left knee, right knee, left foot and right foot respectively.
The invention also provides a system for realizing the method for measuring the activity space based on the point cloud, which comprises the following steps:
the point cloud data acquisition module is used for acquiring the point cloud data of the public space in the community to be measured, which is acquired by the image acquisition equipment at the measurement time point;
the point cloud data preprocessing module is used for preprocessing the point cloud data of the public space to obtain preprocessed simplified point cloud data of the public space;
the human body point cloud extraction module is used for inputting the public space reduced point cloud data into an improved deep neural network for characteristic analysis to obtain a plurality of point cloud blocks of different categories, and then screening human body point clouds from the point cloud blocks according to human body skeleton characteristics and marking;
the first calculation module is used for counting the number of public space personnel in the community to be measured according to the labeled human body point cloud and calculating the ratio R of the number of the public space personnel to the number of the resident personnel in the community to be measured;
the second calculation module is used for analyzing the gesture of each human body point cloud to count the number of moving people in the public space and calculating the ratio Y of the number of the moving people to the number of the people in the public space;
The vitality evaluation module is used for evaluating the vitality space of the community to be measured according to a preset outbound person threshold W and an activity threshold H.
Further, the method further comprises the following steps:
and the activity heat point diagram drawing module is used for projecting the public space reduced point cloud data to a corresponding geographic information diagram to perform space dimension reduction, and drawing corresponding colors for coordinate points corresponding to human body point clouds to obtain an activity heat point diagram for representing space activity.
The invention also provides a computer device, which comprises a processor and a memory, wherein the memory is used for storing a computer program, and the computer program is executed by the processor to realize the method for measuring the activity space based on the point cloud.
By means of the technical scheme, the invention provides a method, a system and computer equipment for measuring activity space based on point cloud, which at least have the following beneficial effects:
1. according to the method, the real public space point cloud data in the community to be measured, which are acquired by the radar, are classified by adopting the convolutional neural network of the attention mechanism of the embedded graph to extract the human body point cloud, so that the number of public space personnel in the community to be measured is obtained, then the gesture characteristics of each point cloud block in the human body point cloud are determined according to the human body skeleton structural characteristics, the number of moving personnel in the public space of the community to be measured is obtained, and finally the vitality of the public space in the community to be measured is objectively evaluated from two aspects according to the ratio of the number of the public space personnel to the number of resident personnel in the community to be measured and the ratio of the number of the moving personnel to the number of the public space personnel.
2. According to the method, the real public space reduced point cloud data in the community to be measured is projected to the corresponding geographic information graph to perform space dimension reduction, and the vigor heat point graph for representing the activity of the public space in the community to be measured is drawn by combining the gesture of each point cloud block in the human body point cloud, so that the display of public space information is realized, and a powerful support is provided for the improvement of the activity of the public space of the subsequent community.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flowchart of a method for spatial measurement of viability according to an embodiment of the present invention;
FIG. 2 is a block diagram of an improved deep neural network according to a first embodiment of the present invention;
FIG. 3 is a flowchart of extracting human point clouds according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram of a key joint structure of a human skeleton according to a first embodiment of the present invention;
FIG. 5 is a schematic block diagram of a spatial measure of activity system according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method for spatial measurement of viability in a second embodiment of the present invention;
FIG. 7 is a schematic block diagram of a spatial measure system of viability in a second embodiment of the present invention;
fig. 8 is a block diagram showing an internal structure of a computer device according to an embodiment of the present invention.
In the figure: 10. a point cloud data acquisition module; 20. the point cloud data preprocessing module; 30. a human body point cloud extraction module; 40. a first computing module; 50. a second computing module; 60. the activity evaluation module; 70. and a vigor heat point diagram drawing module.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. Therefore, the implementation process of how to apply the technical means to solve the technical problems and achieve the technical effects can be fully understood and implemented.
Scene overview
With urbanization, new residential communities are continuously being built while the number of old communities keeps growing. Old communities were built long ago and their infrastructure lags behind, so residents' living conditions are poor and out of keeping with the surrounding new communities; yet demolishing and rebuilding them would not conform to the principle of sustainable development, so they should be renovated in a reasonable way. The key targets of old community renovation are public space areas such as roads, landscape squares and fitness squares, and the vitality index of a public space can intuitively reflect the spatial quality of an old community, thereby providing decision-making guidance for community renovation operators and managers.
When measuring the vitality of the public space of an old community, in order to evaluate its vitality index accurately, attention is paid not only to the number of people gathered in the public space but also to their motion state.
It should be noted that the above public space vitality evaluation of old communities is only one exemplary application scenario of the vitality space measurement method provided in the embodiments of the present application; the embodiments do not limit the specific application scenario of the method, which may also be applied, for example, to evaluating the spatial vitality of tourist attractions.
Example 1
Referring to fig. 1-4, a specific implementation manner of the embodiment is shown, in the embodiment, the number of public space personnel is extracted according to real public space point cloud data, the number of moving personnel is determined according to the point cloud attitude characteristics, and objective evaluation is performed on the public space vitality from two aspects of a ratio of the number of the public space personnel to the number of normally-resident personnel in a community to be measured and a ratio of the number of the moving personnel to the number of the public space personnel, so that accuracy of vitality space evaluation is improved.
As shown in fig. 1, a method for measuring activity space based on point cloud comprises the following steps:
s1, acquiring public space point cloud data in a community to be measured, which is acquired by image acquisition equipment at a measurement time point.
Specifically, an unmanned aerial vehicle carrying a laser scanner scans the public space in the community to be measured at the measurement time point, so that three-dimensional point cloud data P containing the surface feature information of all objects in the public space are obtained, and the three-dimensional point cloud data P are transmitted to the computer equipment in real time to provide data support for the subsequent spatial vitality evaluation. The three-dimensional point cloud data can be expressed as P = {p_1, p_2, …, p_n}, where n denotes the total number of information points contained in P.
It is worth noting that vitality space evaluation mainly rests on three indexes, namely the number of people, their activity state and their residence time; in other words, the vitality index of a public space is closely related to the people in it. For this purpose, this embodiment selects every whole hour from six in the morning to ten in the evening as a measurement time point t, in accordance with people's daily work-and-rest pattern.
S2, preprocessing the public space point cloud data to obtain preprocessed public space simplified point cloud data.
The scanned three-dimensional point cloud data P contain some worthless noise points and redundant points and are highly dense; if semantic segmentation were performed on them directly, it would consume a great deal of time and the spatial vitality evaluation would be inefficient. Therefore, after receiving the three-dimensional point cloud data P, the computer equipment performs a series of preprocessing operations, so that the amount of point cloud data is reduced while the point cloud accuracy is preserved, which improves the efficiency and accuracy of subsequently obtaining the human body point cloud data in the public space.
In this embodiment, the specific process of preprocessing the public space point cloud data includes:
s21, performing noise elimination operation on the public space point cloud data to obtain denoised public space point cloud data.
A Gaussian filter is applied: for every point in the public space point cloud data, the average elevation distance d to its k nearest neighboring points is calculated in turn, and the mean value μ and standard deviation σ of these average distances over the whole public space point cloud are then computed. Next, every point in the public space point cloud data is traversed and its d is checked against the standard range determined by μ and σ; if it falls outside that range, the point is judged to be a noise point and removed from the data set, finally yielding the denoised public space point cloud data.
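The embodiment gives this filtering step only in prose. Purely as an illustration, the following Python sketch shows one plausible reading of it using NumPy and SciPy, assuming the point cloud is an (n, 3) array whose third column is the elevation; the function name remove_noise, the neighbor count k and the multiplier n_std are assumptions, not values from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_noise(points: np.ndarray, k: int = 8, n_std: float = 1.0) -> np.ndarray:
    """Drop points whose average elevation distance to their k nearest
    neighbors falls outside the mean +/- n_std * std range."""
    tree = cKDTree(points[:, :3])
    # query k+1 neighbors because the nearest neighbor of a point is the point itself
    _, idx = tree.query(points[:, :3], k=k + 1)
    neighbor_z = points[idx[:, 1:], 2]                      # elevations of the k neighbors
    d = np.abs(neighbor_z - points[:, 2:3]).mean(axis=1)    # average elevation distance per point
    mu, sigma = d.mean(), d.std()
    keep = np.abs(d - mu) <= n_std * sigma                  # inside the standard range -> not noise
    return points[keep]
```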
S22, performing sparsification operation on the denoised public space point cloud data to obtain the public space simplified point cloud data.
The denoised public space point cloud is divided into a number of grid cells of equal size, and all points falling within a cell form a point cloud grid. When the average density of a point cloud grid is larger than a preset threshold, the center of gravity of all points in that grid is calculated and the whole grid is approximately replaced by the single point closest to the center of gravity. The denoised public space point cloud data are traversed in the same way, finally yielding the public space simplified point cloud data, whose point count is markedly sparser while the topological structure and accuracy of the point cloud remain unchanged.
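Similarly, a minimal sketch of the grid-based thinning described above, assuming cubic cells of side cell and a simple per-cell point-count test as the density criterion (the actual cell size, density measure and threshold are not specified in the text):

```python
import numpy as np

def sparsify(points: np.ndarray, cell: float = 0.2, density_threshold: int = 10) -> np.ndarray:
    """Collapse over-dense grid cells to the single point closest to the cell's center of gravity."""
    keys = np.floor(points[:, :3] / cell).astype(np.int64)        # integer cell index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    kept = []
    for cell_id in np.unique(inverse):
        members = points[inverse == cell_id]
        if len(members) <= density_threshold:
            kept.append(members)                                   # sparse cell: keep all points
        else:
            centroid = members[:, :3].mean(axis=0)
            nearest = np.argmin(np.linalg.norm(members[:, :3] - centroid, axis=1))
            kept.append(members[nearest:nearest + 1])              # dense cell: keep one representative
    return np.vstack(kept)
```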
Because the original public space point cloud data contains a plurality of point clouds of different categories such as building points, plant points, entertainment facility points, human body points and the like, the preprocessed public space simplified point cloud data needs to be classified so as to accurately extract the human body point cloud for evaluating the activity of the public space.
And S3, inputting the public space simplified point cloud data into an improved deep neural network for characteristic analysis to obtain a plurality of point cloud blocks of different categories, and screening out human point clouds from the point cloud blocks according to human skeleton characteristics and marking.
The following describes in detail a specific process of extracting human body point cloud blocks from the common space reduced point cloud data with reference to the structural block diagram of the improved deep neural network shown in fig. 2, and as shown in fig. 3, specifically includes the following steps:
s31, performing multi-scale neighborhood feature analysis by adopting a coding convolution unit according to the common space reduced point cloud data so as to extract global feature information.
As shown in FIG. 2, the input to the improved deep neural network is the public space simplified point cloud data containing m points, and the feature of the i-th point in the public space simplified point cloud of the community to be measured can be recorded as f_i, which consists of the coordinate information (x_i, y_i, z_i) of the point and the position of the point relative to the scene it belongs to. In this embodiment, a multi-layer perceptron MLP (64, 64) comprising three convolution layers Conv and three activation layers Concat extracts features from the public space point cloud data, so that a 64-dimensional feature is obtained for each point. Feature semantic coding then raises the feature dimension to a 1024-dimensional feature space, with pooling layers arranged between the layers to reduce the data size and accelerate computation; this aggregates local features well and prevents overfitting. Finally, a max-pooling layer performs the pooling operation to obtain the global feature information.
It should be noted that, because neighboring local regions of the input are correlated, the multi-layer perceptron MLP adopts a weight-sharing mechanism, which reduces the number of network parameters, lowers network complexity and improves the efficiency of global feature extraction.
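FIG. 2 is only a block diagram; as a hypothetical PyTorch sketch of such a shared-weight encoder, the fragment below maps each point through pointwise (1x1) convolutions of widths 64, 64 and 1024 and then max-pools over the points to form the global feature. The class name and the exact layer composition are assumptions.

```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Shared per-point MLP (pointwise 1x1 convolutions) followed by max pooling."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(                      # weights shared across all m points
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )

    def forward(self, xyz):                            # xyz: (batch, 3, m) point coordinates
        per_point = self.mlp(xyz)                      # (batch, 1024, m) per-point features
        global_feat = per_point.max(dim=2).values      # max pooling over points -> (batch, 1024)
        return per_point, global_feat
```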
S32, analyzing the global characteristic information by adopting a graph attention unit to obtain local characteristic information under different scales.
With continued reference to FIG. 2, four attention layers, each comprising a self-attention module and a bias attention module, analyze the local geometric information of the global feature information to obtain the local feature information at different scales.
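The text names the attention modules but not their equations. The sketch below follows the offset-attention formulation that is common in point cloud transformers, purely as an assumed stand-in for the bias attention module described here; the channel width and names are illustrative.

```python
import torch
import torch.nn as nn

class OffsetAttention(nn.Module):
    """One attention layer: self-attention plus an offset (bias) branch with a residual connection."""
    def __init__(self, channels: int = 256):
        super().__init__()
        self.q = nn.Conv1d(channels, channels // 4, 1)
        self.k = nn.Conv1d(channels, channels // 4, 1)
        self.v = nn.Conv1d(channels, channels, 1)
        self.ff = nn.Sequential(nn.Conv1d(channels, channels, 1),
                                nn.BatchNorm1d(channels), nn.ReLU())

    def forward(self, x):                                    # x: (batch, channels, m)
        attn = torch.softmax(torch.bmm(self.q(x).transpose(1, 2), self.k(x)), dim=-1)
        agg = torch.bmm(self.v(x), attn.transpose(1, 2))     # attention-weighted aggregation
        return x + self.ff(x - agg)                          # offset branch + residual
```

Stacking four such layers, as FIG. 2 indicates, would yield local feature information at different scales.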
And S33, carrying out semantic segmentation by adopting a feature fusion and classification unit according to the global feature information and the local feature information to obtain point cloud blocks of different categories.
With continued reference to FIG. 2, the feature fusion and classification unit, formed by three sequentially stacked full convolution layers, a random inactivation (dropout) layer and a further full convolution layer, performs semantic segmentation on the global feature information and the local feature information, finally yielding a number of point cloud blocks of different categories, the categories mainly comprising buildings, plants, human bodies and the like.
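A correspondingly minimal sketch of the feature fusion and classification unit: the global feature is broadcast to every point, concatenated with the per-point features, and passed through three stacked full convolution layers, a dropout (random inactivation) layer and a final full convolution layer that outputs per-point class scores. The number of classes and the layer widths are assumptions.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Fuse per-point and global features, then predict a category score for every point."""
    def __init__(self, num_classes: int = 4):           # e.g. building / plant / human body / other
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv1d(2048, 512, 1), nn.ReLU(),          # three stacked full convolution layers
            nn.Conv1d(512, 256, 1), nn.ReLU(),
            nn.Conv1d(256, 128, 1), nn.ReLU(),
            nn.Dropout(p=0.5),                           # random inactivation layer
            nn.Conv1d(128, num_classes, 1),              # final full convolution layer
        )

    def forward(self, per_point, global_feat):           # (batch, 1024, m), (batch, 1024)
        g = global_feat.unsqueeze(-1).expand(-1, -1, per_point.shape[-1])
        return self.head(torch.cat([per_point, g], dim=1))   # (batch, num_classes, m) logits
```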
S34, screening out human point clouds from the point cloud blocks by a threshold method according to human skeleton characteristics and marking.
With continued reference to fig. 2, the label labeling unit extracts the useful information in the human body point cloud data and calculates a human body feature descriptor; at the same time the feature descriptor of each point cloud block is calculated. The feature descriptor of each point cloud block is then compared in turn with the human body feature descriptor to obtain a similarity S, where S measures how close the shape features of the sample point cloud block are to the human point cloud shape features. Every point cloud block whose similarity S is greater than a preset similarity threshold T is labeled '1', and the others are labeled '0', finally yielding the labeled point cloud blocks.
The value of the similarity threshold T may be set according to practical classification experience and accuracy requirements, and is not specifically limited here.
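The exact similarity expression appears only as a formula image in the original filing. The snippet below therefore uses cosine similarity between shape descriptors merely as a stand-in; how the descriptors themselves are computed, the threshold value and all names are hypothetical.

```python
import numpy as np

def label_human_blocks(block_descriptors, human_descriptor, threshold: float = 0.8):
    """Return a label per point cloud block: 1 if its shape descriptor is similar
    enough to the reference human body descriptor, otherwise 0."""
    h = human_descriptor / np.linalg.norm(human_descriptor)
    labels = []
    for desc in block_descriptors:
        similarity = float(np.dot(desc / np.linalg.norm(desc), h))   # stand-in similarity measure
        labels.append(1 if similarity > threshold else 0)
    return labels
```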
S4, counting the number of public space personnel in the community to be measured according to the labeled human body point cloud, and calculating the ratio R of the number of public space personnel to the number of normally-resident personnel in the community to be measured.
In step S34, the human body point cloud blocks are labeled '1', so counting the point cloud blocks labeled '1' gives the number of public space personnel in the community to be measured. The number of normally-resident personnel in the community to be measured can be obtained from the population census or the resident population register, and the ratio R of the number of public space personnel to the number of normally-resident personnel in the community to be measured is then calculated.
S5, analyzing the gesture of each human body point cloud to count the number of the moving people in the public space, and calculating the ratio Y of the number of the moving people to the number of the people in the public space.
In this embodiment, the specific step of counting the number of the moving people in the public space includes:
first, extracting key human body joint points from each point cloud block of the human body point cloud according to the human body skeleton structure.
According to the human skeleton structure, the relative positions of the joint points define the posture of the human body; that is, the more joint points are used to define the skeleton, the better the human posture can be estimated, but this also lengthens the posture estimation time. For this reason, 15 key joint points are selected in this embodiment to estimate the posture of each point cloud block in the human body point cloud. As shown in fig. 4, the 15 key joint points are the head joint point, neck joint point, left shoulder joint point, right shoulder joint point, sacral joint point, left elbow joint point, left hand joint point, right elbow joint point, right hand joint point, left hip joint point, right hip joint point, left knee joint point, right knee joint point, left foot joint point and right foot joint point.
Secondly, the three-dimensional distance between any two adjacent joint points among the key human body joint points is calculated, and any value larger than a preset threshold is judged to be movement feature data. For example, by calculating the distances between the left shoulder joint point and the left elbow joint point and between the left elbow joint point and the left hand joint point, the motion posture of the left arm of the human body can be judged.
Thirdly, determining a human body movement posture feature vector according to the movement feature data; that is, the motion gesture of the human body, such as squat, run, stand still and bend, can be comprehensively analyzed according to the motion characteristics of each trunk part of the human body.
And fourthly, according to the accumulation of the characteristic vectors of the motion postures of the human body, the number of the motion personnel in the public space can be obtained.
And counting the number of the moving personnel in the public space according to the steps, and calculating to obtain the ratio Y of the number of the moving personnel to the number of the personnel in the public space.
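Taking the four steps above literally, a person is counted as moving when at least one entry of the motion posture feature vector is flagged; the adjacency list over the 15 key joints, the threshold and the function names in the sketch below are illustrative assumptions.

```python
import numpy as np

# assumed adjacency between the 15 key joints, indexed in the order listed for FIG. 4:
# head, neck, L shoulder, R shoulder, sacrum, L elbow, L hand, R elbow, R hand,
# L hip, R hip, L knee, R knee, L foot, R foot
ADJACENT_PAIRS = [(0, 1), (1, 2), (1, 3), (1, 4), (2, 5), (5, 6), (3, 7), (7, 8),
                  (4, 9), (4, 10), (9, 11), (10, 12), (11, 13), (12, 14)]

def motion_feature_vector(joints: np.ndarray, threshold: float) -> np.ndarray:
    """joints: (15, 3) key joint coordinates of one person; returns a 0/1 vector
    flagging adjacent joint pairs whose three-dimensional distance exceeds the threshold."""
    d = np.array([np.linalg.norm(joints[a] - joints[b]) for a, b in ADJACENT_PAIRS])
    return (d > threshold).astype(int)

def count_moving_people(all_joints, threshold: float) -> int:
    """Accumulate the motion posture feature vectors; any flagged pair marks the person as moving."""
    return sum(int(motion_feature_vector(j, threshold).any()) for j in all_joints)
```

The ratio Y then follows directly by dividing this count by the number of public space personnel obtained in step S4.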
S6, evaluating the vitality space of the community to be measured according to a preset outbound person threshold W and a preset liveness threshold H;
if R is more than W and Y is more than H, marking the public space in the community to be measured at the measurement time point as a high activity space;
if R is more than W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a medium activity space;
and if R is less than or equal to W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a low-activity space.
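The three rules above can be written down in a few lines; note that the combination R ≤ W with Y > H is not addressed in the text, so the sketch below simply reports it as unclassified.

```python
def classify_vitality(R: float, Y: float, W: float, H: float) -> str:
    """Apply the threshold rules of step S6 to the ratios R and Y."""
    if R > W and Y > H:
        return "high activity space"
    if R > W and Y <= H:
        return "medium activity space"
    if R <= W and Y <= H:
        return "low activity space"
    return "unclassified"   # R <= W and Y > H is not covered by the stated rules
```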
Referring to fig. 5, the present embodiment further provides a system for implementing the above-mentioned method for measuring activity space based on point cloud, which includes:
the point cloud data acquisition module 10 is used for acquiring the public space point cloud data in the community to be measured, which is acquired by the image acquisition equipment at the measurement time point;
the point cloud data preprocessing module 20 is configured to preprocess the public space point cloud data to obtain preprocessed public space simplified point cloud data;
the human body point cloud extraction module 30 is used for inputting the public space simplified point cloud data into the improved deep neural network for characteristic analysis to obtain a plurality of point cloud blocks with different categories, and then screening out human body point clouds from the point cloud blocks according to human body skeleton characteristics and labeling;
the first calculation module 40 is configured to count the number of people in the public space in the community to be measured according to the labeled human body point cloud, and calculate a ratio R of the number of people in the public space to the number of people in the community to be measured;
the second calculating module 50 is configured to analyze the pose of each point cloud block in the human body point cloud to count the number of moving people in the public space, and calculate the ratio Y of the number of moving people to the number of people in the public space;
The activity evaluation module 60 is configured to evaluate an activity space of the community to be measured according to a preset egress person threshold W and an activity threshold H;
if R is more than W and Y is more than H, marking the public space in the community to be measured at the measurement time point as a high activity space;
if R is more than W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a medium activity space;
and if R is less than or equal to W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a low-activity space.
According to the embodiment, the human body point cloud is extracted by classifying the public space point cloud data by adopting a convolutional neural network of an embedded graph attention mechanism according to real public space point cloud data in a community to be measured, which is acquired by a radar, so as to obtain the number of public space personnel at a measurement time point, then, the gesture characteristic of each point cloud block in the human body point cloud is determined according to the human body skeleton structural characteristics, so as to obtain the number of moving personnel in the public space of the community to be measured, and finally, the vitality of the public space in the community to be measured is objectively evaluated from two aspects according to the ratio of the number of the public space personnel to the number of normally living personnel in the community to be measured and the ratio of the number of the moving personnel to the number of the public space personnel, so that the vitality space evaluation accuracy is improved.
Example two
The implementation manner provided in this embodiment is made on the basis of the first embodiment, and the same parts can solve the same technical problems and have the same beneficial effects, so that the same technical problems can be referred to each other, and detailed description will not be expanded in this embodiment.
Referring to fig. 6, a specific implementation manner of the embodiment is shown, in the embodiment, spatial dimension reduction is performed on real public space reduced point cloud data according to two-dimensional coordinates of public space in a community to be measured in a geographic information diagram, and a vigor heat map is drawn according to real human body states in the public space, so that powerful data support is provided for subsequent improvement of public space vigor.
Fig. 6 shows a method for measuring activity space based on point cloud, comprising the following steps:
s1, acquiring public space point cloud data in a community to be measured, which is acquired by image acquisition equipment at a measurement time point.
S2, preprocessing the public space point cloud data to obtain preprocessed public space simplified point cloud data.
And S3, inputting the public space simplified point cloud data into an improved deep neural network for characteristic analysis to obtain a plurality of point cloud blocks of different categories, and screening out human point clouds from the point cloud blocks according to human skeleton characteristics and marking.
S4, counting the number of public space personnel in the community to be measured according to the labeled human body point cloud, and calculating the ratio R of the number of public space personnel to the number of normally-resident personnel in the community to be measured.
S5, analyzing the gesture of each human body point cloud to count the number of the moving people in the public space, and calculating the ratio Y of the number of the moving people to the number of the people in the public space.
S6, evaluating the vitality space of the community to be measured according to a preset outbound person threshold W and a preset liveness threshold H;
if R is more than W and Y is more than H, marking the public space in the community to be measured at the measurement time point as a high activity space;
if R is more than W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a medium activity space;
and if R is less than or equal to W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a low-activity space.
And S7, projecting the public space simplified point cloud data to a corresponding geographic information diagram to perform space dimension reduction, and drawing corresponding colors for coordinate points corresponding to the human point cloud to obtain a vigor heat point diagram for representing space liveness.
And S71, orthographic projection is carried out on the public space simplified point cloud data to a corresponding geographic information graph along the z axis to carry out space dimension reduction, and a two-dimensional point cloud graph is obtained.
Specifically, two-dimensional coordinates of a public space in the community to be measured in the geographic information diagram are found, registration is carried out on the public space reduced point cloud data and the position of the public space in the community to be measured in the geographic information diagram, then the public space reduced point cloud data are orthographically projected to the corresponding geographic information diagram along the z-axis to carry out space dimension reduction, and finally the two-dimensional point cloud diagram containing the real human motion state is obtained.
S72, drawing corresponding colors at corresponding grids in the two-dimensional point cloud picture according to the gesture of each point cloud block in the human body point cloud.
According to the current gesture features of the human body point cloud data, the corresponding grid in the two-dimensional point cloud image is marked with a corresponding color: the darker the color, the more pronounced the motion characteristics of the people in that region; the lighter the color, the more pronounced their static characteristics; and a colorless region indicates that no people are gathered there.
And S73, repeating the steps to draw corresponding colors on all grids corresponding to the human point cloud in the two-dimensional point cloud picture, and obtaining a vigor heat point diagram for representing the space liveness.
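As an assumed illustration of steps S71 to S73, the sketch below drops the z coordinate of every human body point registered to the geographic map, bins the resulting two-dimensional points into a regular grid, and colors each occupied cell according to whether the person there is moving (darker) or static (lighter); the grid size, color map and names are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def draw_heat_map(human_points: np.ndarray, moving_flags: np.ndarray,
                  extent, cell: float = 1.0):
    """human_points: (n, 3) human body points registered to the geographic map;
    moving_flags: (n,) booleans, True for moving; extent: (xmin, xmax, ymin, ymax)."""
    xmin, xmax, ymin, ymax = extent
    nx = int(np.ceil((xmax - xmin) / cell))
    ny = int(np.ceil((ymax - ymin) / cell))
    heat = np.full((ny, nx), np.nan)                                  # colorless cells = no people
    ix = ((human_points[:, 0] - xmin) / cell).astype(int).clip(0, nx - 1)
    iy = ((human_points[:, 1] - ymin) / cell).astype(int).clip(0, ny - 1)
    for x, y, moving in zip(ix, iy, moving_flags):
        score = 1.0 if moving else 0.3                                # darker color for motion
        heat[y, x] = score if np.isnan(heat[y, x]) else max(heat[y, x], score)
    plt.imshow(heat, origin="lower", extent=extent, cmap="Reds")      # orthographic (z-axis) projection
    plt.title("Vitality heat point map")
    plt.show()
```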
By drawing and outputting the activity heat point diagram representing the activity of the public space in the community to be measured, decision support can be provided for community reformers, so that the working efficiency of improving the activity of the public space of the community is improved, and the method has higher social value and application prospect.
Referring to fig. 7, the present embodiment further provides a system for implementing the above-mentioned method for measuring activity space based on point cloud, which includes:
the point cloud data acquisition module 10 is used for acquiring the public space point cloud data in the community to be measured, which is acquired by the image acquisition equipment at the measurement time point;
the point cloud data preprocessing module 20 is configured to preprocess the public space point cloud data to obtain preprocessed public space simplified point cloud data;
the human body point cloud extraction module 30 is used for inputting the public space simplified point cloud data into the improved deep neural network for characteristic analysis to obtain a plurality of point cloud blocks with different categories, and then screening out human body point clouds from the point cloud blocks according to human body skeleton characteristics and labeling;
the first calculation module 40 is configured to count the number of people in the public space in the community to be measured according to the labeled human body point cloud, and calculate a ratio R of the number of people in the public space to the number of people in the community to be measured;
the second calculating module 50 is configured to analyze the pose of each point cloud block in the human body point cloud to count the number of moving people in the public space, and calculate the ratio Y of the number of moving people to the number of people in the public space;
The activity evaluation module 60 is configured to evaluate an activity space of the community to be measured according to a preset egress person threshold W and an activity threshold H;
if R is more than W and Y is more than H, marking the public space in the community to be measured at the measurement time point as a high activity space;
if R is more than W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a medium activity space;
if R is less than or equal to W and Y is less than or equal to H, marking the public space in the community to be measured at the measurement time point as a low-activity space;
and the activity heat point diagram drawing module 70 is used for projecting the public space reduced point cloud data to the corresponding geographic information diagram to perform space dimension reduction, and drawing corresponding colors for coordinate points corresponding to the human body point cloud to obtain an activity heat point diagram for representing space activity.
According to the embodiment, the human body point cloud is extracted by classifying the public space point cloud data through a convolutional neural network of an embedding diagram attention mechanism according to real public space point cloud data in a community to be measured, which is acquired by a radar, so that the number of public space personnel at a measurement time point is obtained, then the gesture feature of each point cloud block in the human body point cloud is determined according to the human body skeleton structural feature, the number of moving personnel in the public space of the community to be measured is obtained, then the real public space reduced point cloud data in the community to be measured is projected to a corresponding geographic information diagram for space dimension reduction, the human body state is determined by combining with the motion gesture feature vector corresponding to each point cloud block in the human body point cloud, and a vitality heat map for representing the liveness of the public space in the community to be measured is drawn, so that the display of public space information is realized, and powerful support is provided for the follow-up promotion of the public space liveness of the community.
The present embodiment also provides a computer apparatus, and an internal structure of the computer apparatus will be described below with reference to fig. 8, where the computer apparatus includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus.
The processor is a control center of the computer device, utilizes various interfaces and lines to connect various parts of the whole device, and realizes various functions of the computer device by running or executing computer readable instructions and/or modules stored in the memory and calling data stored in the memory; the processor referred to herein may be a central processing unit, CPU, or other general purpose processor, digital signal processor, DSP, application specific integrated circuit, ASIC, off-the-shelf programmable gate array, FPGA, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like; wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory is used for storing computer readable instructions and/or modules, and mainly comprises a storage medium and an internal memory, wherein the storage medium can be a nonvolatile storage medium or a volatile storage medium, the storage medium stores an operating system, and can also store computer readable instructions, and the computer readable instructions can cause the processor to realize a point cloud-based activity space measurement method when being executed by the processor, for example, the steps S1 to S7 shown in fig. 1 and 6 and other extensions of the method and extensions of related steps; alternatively, the processor executes the computer readable instructions to implement the functions of each module/unit of the point cloud based activity spatial measurement system in the above embodiment, as shown in fig. 7, and the functions of the modules 10 to 70 are not repeated here.
The network interface is used for communicating with an external server through network connection; the display screen can be a liquid crystal display screen or an electronic ink display screen; the input device can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse, etc.
It should be noted that the memory may be integrated into the processor or may be separate from the processor, and the structure shown in fig. 8 is merely a schematic block diagram of a part of the structure related to the present application, and does not constitute a limitation of the computer device to which the present application is applied, and a specific computer device may include more or less components than those shown in the drawings, or may combine some components, or use a different arrangement of components.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments. From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, or may be implemented by hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above, including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
The above embodiments describe the invention in detail; specific examples are used herein to explain its principles and implementations, and the description of the above embodiments is intended only to help understand the method of the invention and its core ideas. Meanwhile, those skilled in the art will make changes to the specific embodiments and the scope of application in accordance with the ideas of the invention; in view of the above, the contents of this specification should not be construed as limiting the invention.

Claims (8)

1. A point cloud-based activity space measurement method, characterized by comprising the following steps:
S1, acquiring public space point cloud data of a community to be measured, collected by an image acquisition device at a measurement time point;
S2, preprocessing the public space point cloud data to obtain preprocessed public space reduced point cloud data;
S3, inputting the public space reduced point cloud data into an improved deep neural network for feature analysis to obtain a plurality of point cloud blocks of different categories, and then screening out and labeling human point clouds from the point cloud blocks according to human skeleton features, which specifically comprises the following steps:
S31, performing multi-scale neighborhood feature analysis on the public space reduced point cloud data with an encoding convolution unit to extract global feature information;
S32, analyzing the global feature information with a graph attention unit to obtain local feature information at different scales;
S33, performing semantic segmentation with a feature fusion and classification unit according to the global feature information and the local feature information to obtain point cloud blocks of different categories;
S34, screening out human point clouds from the point cloud blocks by a thresholding method according to human skeleton features and labeling them;
S4, counting the number of persons in the public space of the community to be measured according to the labeled human point clouds, and calculating the ratio R of the number of persons in the public space to the number of resident persons of the community to be measured;
S5, analyzing the posture of each human point cloud to count the number of moving persons in the public space, and calculating the ratio Y of the number of moving persons to the number of persons in the public space;
wherein in step S5, the specific process of analyzing the posture of each human point cloud to count the number of moving persons in the public space comprises the following steps:
extracting human key joint points from each human point cloud according to the human skeleton structure;
calculating the three-dimensional distance between any two adjacent joint points among the human key joint points, and judging distances larger than a preset threshold as movement feature data;
determining a human movement posture feature vector from the movement feature data;
accumulating the human movement posture feature vectors to obtain the number of moving persons in the public space;
S6, performing vitality space evaluation on the community to be measured according to a preset outing-ratio threshold W and an activity-ratio threshold H:
if R > W and Y > H, marking the public space of the community to be measured at the measurement time point as a high-vitality space;
if R > W and Y ≤ H, marking the public space of the community to be measured at the measurement time point as a medium-vitality space;
and if R ≤ W and Y ≤ H, marking the public space of the community to be measured at the measurement time point as a low-vitality space.
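As a concrete illustration of steps S5 and S6 above, the following Python sketch flags each person as moving from adjacent-joint distances and grades the ratios R and Y against the thresholds W and H. It is a minimal sketch under stated assumptions: the joint indexing, the adjacency list, and the handling of threshold combinations not recited in the claim are illustrative choices, not the claimed implementation.

```python
import numpy as np

# Hypothetical adjacency between the 15 key joints of claim 5, using an assumed
# index order (0 head, 1 neck, 2/3 shoulders, 4 sacral vertebra, 5-8 arms,
# 9/10 hips, 11/12 knees, 13/14 feet).
ADJACENT_JOINTS = [(0, 1), (1, 2), (1, 3), (1, 4), (2, 5), (5, 6), (3, 7), (7, 8),
                   (4, 9), (4, 10), (9, 11), (11, 13), (10, 12), (12, 14)]

def is_moving(joints: np.ndarray, move_threshold: float) -> bool:
    """Step S5: a person counts as moving if the movement posture feature vector
    (adjacent-joint distances above the preset threshold) is non-empty."""
    feature_vector = [np.linalg.norm(joints[i] - joints[j])
                      for i, j in ADJACENT_JOINTS
                      if np.linalg.norm(joints[i] - joints[j]) > move_threshold]
    return len(feature_vector) > 0

def classify_vitality(n_public: int, n_resident: int, n_moving: int,
                      W: float, H: float) -> str:
    """Step S6: grade the public space from the ratios R and Y."""
    R = n_public / n_resident          # persons in public space / resident persons
    Y = n_moving / max(n_public, 1)    # moving persons / persons in public space
    if R > W and Y > H:
        return "high-vitality space"
    if R > W and Y <= H:
        return "medium-vitality space"
    # R <= W and Y <= H per the claim; combinations not recited default to low here.
    return "low-vitality space"
```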
2. The activity space measurement method according to claim 1, wherein in step S2, the specific process of preprocessing the public space point cloud data to obtain the preprocessed public space reduced point cloud data comprises the following steps:
S21, performing a noise elimination operation on the public space point cloud data to obtain denoised public space point cloud data;
S22, performing a sparsification operation on the denoised public space point cloud data to obtain the public space reduced point cloud data.
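A minimal sketch of the preprocessing of claim 2, assuming the open-source Open3D library: statistical outlier removal stands in for the noise elimination of step S21 and voxel down-sampling for the sparsification of step S22; the file path and parameter values are illustrative, not part of the claim.

```python
import open3d as o3d

def preprocess(path: str) -> o3d.geometry.PointCloud:
    # Public space point cloud data acquired at the measurement time point.
    pcd = o3d.io.read_point_cloud(path)
    # S21: noise elimination (here, statistical outlier removal).
    denoised, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # S22: sparsification (here, voxel down-sampling) -> reduced point cloud data.
    return denoised.voxel_down_sample(voxel_size=0.05)
```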
3. The activity space measurement method according to claim 1, further comprising, after step S6:
S7, projecting the public space reduced point cloud data onto a corresponding geographic information map for spatial dimension reduction, and drawing corresponding colors at the coordinate points corresponding to the human point clouds to obtain a vitality heat point map representing the spatial activeness.
4. The activity space measurement method according to claim 3, wherein in step S7, the specific process of projecting the public space reduced point cloud data onto the corresponding geographic information map for spatial dimension reduction and drawing corresponding colors at the coordinate points corresponding to the human point clouds to obtain the vitality heat point map representing the spatial activeness comprises the following steps:
S71, orthographically projecting the public space reduced point cloud data along the z-axis onto the corresponding geographic information map for spatial dimension reduction to obtain a two-dimensional point cloud map;
S72, drawing a corresponding color at the corresponding grid cell of the two-dimensional point cloud map according to the posture of each human point cloud;
and S73, repeating the above step until corresponding colors are drawn at all grid cells of the two-dimensional point cloud map corresponding to human point clouds, thereby obtaining the vitality heat point map representing the spatial activeness.
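A minimal sketch of steps S71-S73 of claim 4, assuming NumPy and Matplotlib; the grid cell size and the static-versus-moving color mapping are illustrative assumptions, not the claimed drawing scheme.

```python
import numpy as np
import matplotlib.pyplot as plt

def draw_heat_point_map(human_points: np.ndarray, moving: np.ndarray, cell: float = 0.5):
    """human_points: (N, 3) coordinates belonging to human point clouds;
    moving: (N,) boolean flags from the posture analysis."""
    xy = human_points[:, :2]                     # S71: orthographic projection along z
    ij = np.floor(xy / cell).astype(int)         # S72: map each point to a grid cell
    ij -= ij.min(axis=0)                         # shift indices so they start at zero
    grid = np.zeros(tuple(ij.max(axis=0) + 1))
    for (i, j), m in zip(ij, moving):            # S73: color every occupied grid cell
        grid[i, j] = max(grid[i, j], 2.0 if m else 1.0)
    plt.imshow(grid.T, origin="lower", cmap="hot")   # warmer cells = more active
    plt.title("Vitality heat point map")
    plt.show()
```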
5. The activity space measurement method according to claim 1, wherein the human key joint points comprise 15 joint points corresponding to the head, neck, left shoulder, right shoulder, sacral vertebra, left elbow, left hand, right elbow, right hand, left hip, right hip, left knee, right knee, left foot and right foot, respectively.
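For reference, the 15 key joint points of claim 5 written out as a Python enum; the index order is an assumption (the same one used in the sketch after claim 1), not something the claim specifies.

```python
from enum import IntEnum

class KeyJoint(IntEnum):
    # Assumed index order for the 15 joints recited in claim 5.
    HEAD = 0
    NECK = 1
    LEFT_SHOULDER = 2
    RIGHT_SHOULDER = 3
    SACRAL_VERTEBRA = 4
    LEFT_ELBOW = 5
    LEFT_HAND = 6
    RIGHT_ELBOW = 7
    RIGHT_HAND = 8
    LEFT_HIP = 9
    RIGHT_HIP = 10
    LEFT_KNEE = 11
    RIGHT_KNEE = 12
    LEFT_FOOT = 13
    RIGHT_FOOT = 14
```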
6. A system for implementing the point cloud-based activity space measurement method according to any one of claims 1 to 5, comprising:
a point cloud data acquisition module (10) for acquiring public space point cloud data of the community to be measured, collected by the image acquisition device at the measurement time point;
a point cloud data preprocessing module (20) for preprocessing the public space point cloud data to obtain preprocessed public space reduced point cloud data;
a human point cloud extraction module (30) for inputting the public space reduced point cloud data into the improved deep neural network for feature analysis to obtain a plurality of point cloud blocks of different categories, and then screening out and labeling human point clouds from the point cloud blocks according to human skeleton features, which specifically comprises:
performing multi-scale neighborhood feature analysis on the public space reduced point cloud data with the encoding convolution unit to extract global feature information;
analyzing the global feature information with the graph attention unit to obtain local feature information at different scales;
performing semantic segmentation with the feature fusion and classification unit according to the global feature information and the local feature information to obtain point cloud blocks of different categories;
screening out human point clouds from the point cloud blocks by the thresholding method according to human skeleton features and labeling them;
a first calculation module (40) for counting the number of persons in the public space of the community to be measured according to the labeled human point clouds, and calculating the ratio R of the number of persons in the public space to the number of resident persons of the community to be measured;
a second calculation module (50) for analyzing the posture of each human point cloud to count the number of moving persons in the public space, and calculating the ratio Y of the number of moving persons to the number of persons in the public space, wherein the specific process of analyzing the posture of each human point cloud to count the number of moving persons in the public space comprises:
extracting human key joint points from each human point cloud according to the human skeleton structure;
calculating the three-dimensional distance between any two adjacent joint points among the human key joint points, and judging distances larger than the preset threshold as movement feature data;
determining a human movement posture feature vector from the movement feature data;
accumulating the human movement posture feature vectors to obtain the number of moving persons in the public space; and
a vitality evaluation module (60) for performing vitality space evaluation on the community to be measured according to the preset outing-ratio threshold W and activity-ratio threshold H.
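A minimal wiring sketch of the claim 6 modules, with each module modelled as an injected callable; every name, signature, and value below is an illustrative assumption, not the patented implementation.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class VitalityMeasurementSystem:
    acquire: Callable[[], np.ndarray]                          # module 10: acquisition
    preprocess: Callable[[np.ndarray], np.ndarray]             # module 20: denoise + sparsify
    extract_humans: Callable[[np.ndarray], List[np.ndarray]]   # module 30: segmentation + labeling
    count_moving: Callable[[List[np.ndarray]], int]            # module 50: posture analysis
    n_resident: int
    W: float
    H: float

    def evaluate(self) -> str:
        humans = self.extract_humans(self.preprocess(self.acquire()))
        n_public = len(humans)                       # module 40: persons in public space
        R = n_public / self.n_resident
        Y = self.count_moving(humans) / max(n_public, 1)
        # Module 60: grading as in claim 1; combinations not recited default to low.
        if R > self.W and Y > self.H:
            return "high-vitality space"
        if R > self.W:
            return "medium-vitality space"
        return "low-vitality space"
```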
7. The system according to claim 6, further comprising:
a vitality heat point map drawing module (70) for projecting the public space reduced point cloud data onto the corresponding geographic information map for spatial dimension reduction, and drawing corresponding colors at the coordinate points corresponding to the human point clouds to obtain a vitality heat point map representing the spatial activeness.
8. A computer device comprising a processor and a memory for storing a computer program which, when executed by the processor, implements the point cloud-based activity space measurement method according to any one of claims 1 to 5.
CN202310200539.0A 2023-03-06 2023-03-06 Point cloud-based activity space measurement method, system and computer equipment Active CN116052088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310200539.0A CN116052088B (en) 2023-03-06 2023-03-06 Point cloud-based activity space measurement method, system and computer equipment


Publications (2)

Publication Number Publication Date
CN116052088A (en) 2023-05-02
CN116052088B (en) 2023-06-16

Family

ID=86127490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310200539.0A Active CN116052088B (en) 2023-03-06 2023-03-06 Point cloud-based activity space measurement method, system and computer equipment

Country Status (1)

Country Link
CN (1) CN116052088B (en)


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090319330A1 (en) * 2008-06-18 2009-12-24 Microsoft Corporation Techniques for evaluating recommendation systems
BR112019007000A2 (en) * 2016-10-07 2019-06-25 Cmte Development Limited system and method for object shape and position point cloud diagnostic testing
JP2021530033A (en) * 2018-09-24 2021-11-04 パナソニックIpマネジメント株式会社 Community-defined space
US20200402116A1 (en) * 2019-06-19 2020-12-24 Reali Inc. System, method, computer program product or platform for efficient real estate value estimation and/or optimization
CN110377679B (en) * 2019-07-10 2021-03-26 南京大学 Public space activity measuring method and system based on track positioning data
CN112819319A (en) * 2021-01-29 2021-05-18 华南理工大学 Method for measuring correlation between city vitality and spatial social characteristics and application
CN114663976A (en) * 2022-03-18 2022-06-24 合肥工业大学 Landscape space activity monitoring method, system and device based on human body action recognition
CN114863272A (en) * 2022-04-20 2022-08-05 岭南师范学院 Method and system for determining influence strength of urban vegetation on urban comprehensive vitality
CN115238584B (en) * 2022-07-29 2023-07-11 湖南大学 Population distribution identification method based on multi-source big data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114885133A (en) * 2022-07-04 2022-08-09 中科航迈数控软件(深圳)有限公司 Depth image-based equipment safety real-time monitoring method and system and related equipment
CN114973422A (en) * 2022-07-19 2022-08-30 南京应用数学中心 Gait recognition method based on three-dimensional human body modeling point cloud feature coding
CN115482504A (en) * 2022-09-02 2022-12-16 同炎数智科技(重庆)有限公司 AI algorithm-based city park vitality calculation method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Small public space vitality analysis and evaluation based on human trajectory modeling using video data. Building and Environment, 2022, 1-14. *
Zhang Songshan. Research on human point cloud skeleton extraction method based on deep learning. China Masters' Theses Full-text Database, Information Science and Technology; I138-1360. *

Also Published As

Publication number Publication date
CN116052088A (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN109344736B (en) Static image crowd counting method based on joint learning
CN109410168B (en) Modeling method of convolutional neural network for determining sub-tile classes in an image
CN109241871A (en) A kind of public domain stream of people's tracking based on video data
CN112529015A (en) Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping
CN109920055A (en) Construction method, device and the electronic equipment of 3D vision map
CN110827304B (en) Traditional Chinese medicine tongue image positioning method and system based on deep convolution network and level set method
CN113537180B (en) Tree obstacle identification method and device, computer equipment and storage medium
CN110991532A (en) Scene graph generation method based on relational visual attention mechanism
CN112991534B (en) Indoor semantic map construction method and system based on multi-granularity object model
CN114219855A (en) Point cloud normal vector estimation method and device, computer equipment and storage medium
CN110942110A (en) Feature extraction method and device of three-dimensional model
CN108182218A (en) A kind of video character recognition method, system and electronic equipment based on GIS-Geographic Information System
Tao et al. Indoor 3D semantic robot VSLAM based on mask regional convolutional neural network
CN109684910A (en) A kind of method and system of network detection transmission line of electricity ground surface environment variation
Gao et al. Road extraction using a dual attention dilated-linknet based on satellite images and floating vehicle trajectory data
CN112668675B (en) Image processing method and device, computer equipment and storage medium
CN110569926A (en) point cloud classification method based on local edge feature enhancement
Yadav et al. An improved deep learning-based optimal object detection system from images
CN116052088B (en) Point cloud-based activity space measurement method, system and computer equipment
CN115953330B (en) Texture optimization method, device, equipment and storage medium for virtual scene image
CN114565753A (en) Unmanned aerial vehicle small target identification method based on improved YOLOv4 network
CN114663835A (en) Pedestrian tracking method, system, equipment and storage medium
CN114332473A (en) Object detection method, object detection device, computer equipment, storage medium and program product
CN113706448B (en) Method, device and equipment for determining image and storage medium
CN113723468B (en) Object detection method of three-dimensional point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant