CN113112544A - Personnel positioning abnormity detection system based on intelligent Internet of things and big data - Google Patents

Personnel positioning abnormity detection system based on intelligent Internet of things and big data

Info

Publication number
CN113112544A
CN113112544A (application CN202110381158.8A)
Authority
CN
China
Prior art keywords
building
pixel
point
angle
buildings
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110381158.8A
Other languages
Chinese (zh)
Other versions
CN113112544B (en)
Inventor
徐乙馨
徐致远
沈昀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoneng Smart Technology Development Jiangsu Co ltd
Original Assignee
Guoneng Smart Technology Development Jiangsu Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoneng Smart Technology Development Jiangsu Co ltd filed Critical Guoneng Smart Technology Development Jiangsu Co ltd
Priority to CN202110381158.8A priority Critical patent/CN113112544B/en
Publication of CN113112544A publication Critical patent/CN113112544A/en
Application granted granted Critical
Publication of CN113112544B publication Critical patent/CN113112544B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/40 Correcting position, velocity or attitude
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y10/00 Economic sectors
    • G16Y10/40 Transportation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y20/00 Information sensed or collected by the things
    • G16Y20/10 Information sensed or collected by the things relating to the environment, e.g. temperature; relating to location
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00 IoT characterised by the purpose of the information processing
    • G16Y40/60 Positioning; Navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Environmental & Geological Engineering (AREA)
  • Toxicology (AREA)
  • Operations Research (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a personnel positioning anomaly detection system based on an intelligent Internet of Things and big data. The system comprises: a building detection module that detects the building pixels in a panoramic image to obtain a building segmentation map and segments out each building to obtain per-building segmentation images; a building attitude acquisition module that restores each building segmentation image, detects the building's top edges and corner points to obtain the attitude angle corresponding to the building's top edge, and connects the corner point to the shooting point to obtain the building's deflection angle; a building hierarchy acquisition module that analyzes the building segmentation images to obtain an ID sequence, derives each building's hierarchy value and relative inclination angle, and forms a first annular space description vector from the buildings' relative inclination angles, hierarchy values, deflection angles and attitude angles; and a positioning comparison module that obtains a second annular space description vector from the CIM, compares the first and second annular space description vectors, and judges whether the positioning is anomalous.

Description

Personnel positioning abnormity detection system based on intelligent Internet of things and big data
Technical Field
The application relates to the technical field of positioning, and in particular to a personnel positioning anomaly detection system based on an intelligent Internet of Things and big data.
Background
Ride-hailing has become a popular mode of travel. When a driver picks up a passenger, the driver relies on the GPS position reported by the passenger's mobile phone. Commercial GPS service provides positioning with roughly 10-meter accuracy, and enhanced GPS techniques can reach centimeter-level accuracy. In urban areas, however, tall buildings often block GPS signals so that too few satellites are received, or multipath effects corrupt the signal so that no position, or an erroneous one, is produced. If a passenger is in a place where GPS does not work properly and, being unfamiliar with the surroundings, cannot describe their location from the environment, the driver cannot find the passenger in time and the passenger's time is wasted.
In the prior art, an image of the surroundings is captured with a mobile phone, and the photographer's location is obtained by matching features in the captured image against an existing street-view database, the features being interest points or salient traffic signs. Interest points are selected arbitrarily, and traffic signs are not present everywhere; the chosen features therefore lack generality, and the resulting positioning system is not robust.
Disclosure of Invention
To address these problems, the invention provides a personnel positioning anomaly detection system based on an intelligent Internet of Things and big data. The system comprises: a building detection module that detects the building pixels in a panoramic image to obtain a building segmentation map and segments out each building to obtain per-building segmentation images; a building attitude acquisition module that restores each building segmentation image, detects the building's top edges and corner points to obtain the attitude angle corresponding to the building's top edge, and connects the corner point to the shooting point to obtain the building's deflection angle; a building hierarchy acquisition module that analyzes the building segmentation images to obtain an ID sequence, derives each building's hierarchy value and relative inclination angle, and forms a first annular space description vector from the buildings' relative inclination angles, hierarchy values, deflection angles and attitude angles; and a positioning comparison module that obtains a second annular space description vector from the CIM, compares the first and second annular space description vectors, and judges whether the positioning is anomalous.
A personnel positioning anomaly detection system based on an intelligent Internet of Things and big data comprises:
a building detection module for acquiring a panoramic image, detecting the pixels of each building to obtain a building segmentation map, and segmenting each building out of the segmentation map to obtain per-building segmentation images;
a building attitude acquisition module for restoring each building segmentation image into a front-view image, detecting the building's top edges and corner points in the front-view image, deriving the attitude angle of the building's top edge under a top-down view, and obtaining the building's deflection angle from the line connecting the building corner point and the shooting point;
a building hierarchy acquisition module for counting the IDs of each row of pixels in a building segmentation image to obtain a pixel ID sequence, and analyzing the pixel ID sequence to obtain each building's hierarchy value and relative inclination angle;
a positioning comparison module for forming a first annular space description vector from the buildings' relative inclination angles, hierarchy values, deflection angles and attitude angles, obtaining a second annular space description vector from the CIM at the GPS positioning point, and comparing the first and second annular space description vectors to judge whether the positioning is anomalous.
The method for segmenting each building from the panoramic image to obtain the segmentation image of the building specifically comprises the following steps: the height of the building segmentation graph is g, the abscissa of each building pixel in the building segmentation graph is obtained, the difference value between the maximum value and the minimum value of the abscissa is k, and for a building, all the columns of pixels including the building pixel are segmented to obtain a building segmentation image with the width of k and the height of g.
Detecting the building's top edges and corner points in the front-view image and deriving the attitude angle of the building's top edge under the top-down view specifically comprises:
if the building has two visible top edges, their intersection is a corner point; a top view of the roof is restored from the top edges, and the projection of the corner point onto the ground plane is taken as the reference point. Under the top-down view, connect the reference point and the shooting point to obtain a straight line l; establish a reference-point polar coordinate system with the reference point as pole and the ray from the reference point through the shooting point as polar axis; take the projection of the top edge with the smallest polar angle in this system as the reference top-edge projection, and the attitude angle is the angle formed by the reference top-edge projection and the polar axis;
if only one top edge is detected, its two endpoints are the corner points; the projection of the top edge's midpoint onto the ground plane is taken as the reference point; under the top-down view, connect the reference point and the shooting point to obtain a straight line l, and the building's attitude angle is taken to be 90 degrees.
Obtaining each building's deflection angle from the line connecting its corner point and the shooting point specifically comprises: merge the lines l of all buildings into a global top view; establish a shooting-point polar coordinate system with the shooting point as pole and the ray from the shooting point through an arbitrarily chosen building's reference point as polar axis; the deflection angle of each building then follows from the position of its reference point in the panoramic image.
Counting the IDs of each row of pixels in a building segmentation image to obtain the pixel ID sequence specifically comprises: different buildings have different pixel IDs, pixels belonging to no building have ID 0, and each building's segmentation image corresponds to one building's pixel ID; the mode of each row's pixel IDs is taken as that row's ID, and the row IDs, ordered by pixel row, form the pixel ID sequence.
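A minimal sketch of the row-mode computation (the function name is illustrative):

```python
from collections import Counter

def row_id_sequence(building_img):
    """Take the mode (most frequent ID) of every pixel row as that
    row's ID; the rows, in order, form the 1 x H pixel ID sequence."""
    return [Counter(row).most_common(1)[0][0] for row in building_img]
```

Taking the per-row mode makes the sequence robust to a few stray pixels of a neighboring building inside a row.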
Analyzing the pixel ID sequence to obtain the buildings' hierarchy values specifically comprises: scan the non-zero IDs of the pixel ID sequence from bottom to top; the first distinct pixel ID encountered gets hierarchy value 1, and each further distinct pixel ID that appears gets the next larger value, yielding a hierarchy value for every pixel ID.
The calculation of a building's relative inclination angle specifically comprises:
count the number of rows H occupied by each building's pixel ID; when building m's hierarchy value is not 1, the relative inclination angle γ_m is given by two formulas that appear only as images in the original publication and cannot be recovered here. In them, H_m and H_ml are the row counts of the adjacent buildings, m is the building's pixel ID in its segmentation image's pixel ID sequence, ml is the pixel ID of the building one hierarchy level below building m, and f is the camera focal length. When building m's hierarchy value is 1, the relative inclination angle γ_m = 0°.
Obtaining the second annular space description vector from the CIM at the GPS positioning point specifically comprises:
acquire the GPS positioning point and the CIM around it, and obtain each building's hierarchy value under the view from the positioning point; obtain each building's corner point closest to the GPS positioning point, connect the positioning point to the highest point of each building's three-dimensional model, and compute each building's relative inclination angle with respect to the building one hierarchy level below it.
Under the top-down view, connect the ground-plane projection of each building's closest corner point to the positioning point, giving a set of line segments l'; establish a positioning-point polar coordinate system with the GPS positioning point as pole and the ray from the positioning point through the shortest l' as polar axis; the polar angle of each l' is the corresponding building's deflection angle. For each building, establish a corner-point polar coordinate system with the projection point as pole and the ray from the projection point through the GPS positioning point as polar axis, and take the attitude angle as the angle formed by the top edge with the smallest polar angle in that system and the ray.
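The deflection-angle part of this polar-coordinate construction can be sketched as follows; the anchor point, the corner-point dictionary and all names are illustrative assumptions, and angles grow counter-clockwise as the patent specifies:

```python
import math

def cim_deflection_angles(anchor, nearest_corners):
    """anchor: (x, y) GPS positioning point on the ground plane.
    nearest_corners: {building_id: (x, y)} ground-plane projection of
    each building's corner closest to the positioning point.
    The ray through the corner with the shortest segment l' is the
    polar axis; each building's deflection angle is the polar angle
    of its corner in that positioning-point polar coordinate system."""
    def dist(p):
        return math.hypot(p[0] - anchor[0], p[1] - anchor[1])

    def raw(p):  # counter-clockwise angle of the ray anchor -> p
        return math.degrees(math.atan2(p[1] - anchor[1], p[0] - anchor[0]))

    axis_building = min(nearest_corners, key=lambda b: dist(nearest_corners[b]))
    axis = raw(nearest_corners[axis_building])
    return {b: (raw(p) - axis) % 360.0 for b, p in nearest_corners.items()}
```

By construction the building defining the polar axis gets deflection angle 0, mirroring the rotation normalization used later during matching.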
Computing the similarity of the first and second annular space description vectors to judge whether the positioning is anomalous specifically comprises:
let μ be the number of buildings in the first annular space description vector, and perform μ matching rounds, each round doing the following:
select one building in the first annular space description vector, set its deflection angle to 0° and shift the other buildings' deflection angles accordingly.
Select one building from each of the two vectors and compute their similarity. The deflection-angle similarity f(θ) is given by a formula that appears only as an image in the original publication; in it, θ_1 and θ_2 are the deflection angles of the two selected buildings, and CK_θ is the minimum pairwise difference between deflection angles in the second annular space description vector.
The attitude-angle similarity f(ω) is likewise given by a formula that appears only as an image; in it, ω_1 and ω_2 are the attitude angles of the two selected buildings, and CK_ω is the minimum pairwise difference between attitude angles in the second annular space description vector.
The relative-inclination-angle similarity is f(γ) = DS × (1 - |γ_1 - γ_2|), where γ_1 and γ_2 are the relative inclination angles of the two selected buildings, and DS is given by a formula that appears only as an image, with d_1 and d_2 the hierarchy values of the two selected buildings.
The similarity index XS of the two buildings, combining the above similarities, is given by a formula that appears only as an image in the original publication.
Set a similarity index threshold α: when XS > α, the two buildings are judged similar; when XS ≤ α, they are judged dissimilar.
Let ms be the number of similar building pairs and me the number of buildings in the second annular space description vector; the vector matching degree XP is given by a formula that appears only as an image in the original publication.
Set a matching-degree threshold β: if any round yields XP > β, the match succeeds and the GPS positioning is judged accurate; if every round yields XP ≤ β, the match fails and the GPS positioning is judged anomalous.
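The μ-round rotation matching can be sketched as follows. The patent's XS and XP formulas appear only as images, so this sketch takes the per-pair similarity index as a pluggable function and assumes XP = ms / me; both are assumptions of this sketch, not the patent's actual formulas:

```python
def match_vectors(first_vec, second_vec, similarity, alpha, beta):
    """first_vec / second_vec: annular description vectors, lists of
    per-building dicts with at least a 'deflection' key (degrees).
    similarity(b1, b2): similarity index XS of two buildings (assumed
    pluggable here, since the patent's formula is an image).
    Each of the mu rounds zeroes one building's deflection angle in
    the first vector and shifts the others, then counts similar pairs
    ms against threshold alpha; XP = ms / me is assumed. Returns True
    if any round exceeds beta (GPS accurate), else False (anomaly)."""
    me = len(second_vec)
    for pivot in first_vec:
        shift = pivot['deflection']
        rotated = [dict(b, deflection=(b['deflection'] - shift) % 360.0)
                   for b in first_vec]
        ms = sum(1 for b1 in rotated for b2 in second_vec
                 if similarity(b1, b2) > alpha)
        if me and ms / me > beta:
            return True
    return False
```

The rotation step is what makes the description vector "annular": the comparison is invariant to which building happens to sit on the polar axis of each vector.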
Compared with the prior art, the invention has the following beneficial effects:
(1) Hierarchy information of the buildings in the panoramic image is compared with hierarchy information of the buildings in the CIM to decide whether the GPS positioning is in error; because information from all directions around the passenger and the relative positions of the buildings are considered, the decision is more accurate.
(2) The relative inclination angle of each building with respect to the building one hierarchy level below it captures both the buildings' hierarchy and their position relative to the shooting point, improving the accuracy of the positioning-error decision.
(3) The deflection angle θ and attitude angle ω of the buildings in the panorama are compared with those in the CIM, taking into account both building attitude and the relative positions between buildings, so GPS positioning errors can be judged more accurately.
Drawings
Fig. 1 is a system configuration diagram.
Fig. 2 is a top view of a building.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The first embodiment is as follows:
The main aim of the invention is to judge, from a panorama shot with a mobile phone, whether the GPS positioning is in error.
In order to realize the content of the invention, the invention designs a personnel positioning abnormity detection system based on the intelligent Internet of things and big data, and the system structure diagram is shown in figure 1.
The invention targets the case of a passenger whose GPS positioning is in error in an unfamiliar environment; to verify whether the GPS positioning is accurate, environmental information around the passenger must be acquired. This information is collected with the passenger's mobile phone: the passenger shoots a panorama of the surroundings. Since the sky and the ground at the passenger's feet contribute little to determining the passenger's position, a cylindrical panorama is used. After some preprocessing, the panorama is fed into a building detection network to detect buildings, in preparation for the subsequent extraction of building features.
The building detection network is trained as follows: a number of collected panoramic images form the dataset; the dataset is annotated manually, labeling each pixel with an ID such that pixels of different buildings get different non-zero IDs and non-building pixels get ID 0, producing the label data; training uses a mean-square-error loss function.
The panoramic image is fed into the trained building detection network, which classifies the pixels, labels them with IDs in the same way (different non-zero IDs for different buildings, 0 for non-building pixels), and outputs the building segmentation map.
The segmentation map has height g; take the abscissas of a building's pixels in the map, let k be the difference between the maximum and minimum abscissa, and cut out every full-height column of pixels containing one of that building's pixels, giving a building segmentation image of width k and height g. Note that the segmentation image of one building may also contain pixels of other buildings.
And simultaneously inputting the building segmentation image into the building posture acquisition module and the building layering acquisition module.
The building attitude acquisition module acquires a building's attitude angle and deflection angle. The attitude angle and deflection angle in a top view of a building are shown in figure 2.
The distance from a building to the shooting point cannot be measured accurately from the segmentation map, so other descriptive information must be extracted from the image to characterize the building. The invention characterizes a building's pose and the angular relations between buildings through the building attitude angle and deflection angle.
Each building segmentation image is restored into an undistorted front-view image using the homography matrices from the stitching of the panorama.
The top edges of the building in the front-view image are detected with a top-edge detection network, and the building's attitude angle and deflection angle under the top-down view are derived from the top edges. An ordinary building can be abstracted as a cuboid whose roof is visible in a top view and whose four roof corners are right angles, so an approximate top view of the roof can be recovered from the angles of the two top edges that intersect in the segmentation image. A building's deflection angle is the polar angle, in the shooting-point polar coordinate system of the top-down view (with the shooting point as pole), of the ground-plane projection of its corner point.
The top-edge detection network is trained as follows: a number of building images form the dataset; the dataset is annotated by labeling the edges of each building's top surface, producing the label data; training uses a mean-square-error loss function.
The building segmentation images are fed into the trained top-edge detection network to detect each building's top edges.
The building's attitude angle and deflection angle are then derived from the top edges. If two top edges are detected, their intersection is a corner point; a top view of the roof is restored from the top edges and the projection of the corner point onto the ground plane is taken as the reference point. Under the top-down view, connect the reference point and the shooting point to obtain a straight line l; establish a reference-point polar coordinate system with the reference point as pole and the ray from the reference point through the shooting point as polar axis; project the two top edges onto the ground plane, take the projection with the smallest polar angle in this system as the reference top-edge projection, and the attitude angle is the angle formed by the reference top-edge projection and the polar axis. Note that the positive direction of every polar coordinate system established in the invention is counter-clockwise.
If the building is detected to have only one top edge, the two ends of the top edge are angular points, the projection of the middle point of the top edge on the ground plane is taken as a reference point, the reference point and the shooting point are connected under the overlooking visual angle to obtain a straight line l, and the attitude angle of the building is judged to be 90 degrees.
Merge the lines l of all buildings into a global top view; establish a shooting-point polar coordinate system with the shooting point as pole and the ray from the shooting point through an arbitrarily chosen building's reference point as polar axis; each building's deflection angle then follows from the position of its reference point in the panoramic image. The reference point's pixel in a building segmentation image is easy to obtain, and from it the corresponding pixel column in the panorama; the panorama covers a full circle of 360 degrees, so the column corresponding to the chosen reference building is taken as 0 degrees. With a positive direction fixed, the angle of any pixel, and hence the deflection angle of each reference point, follows from its column index in the panorama.
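The column-to-angle mapping amounts to a linear scaling over the 360-degree panorama; a minimal sketch (names illustrative):

```python
def deflection_from_column(col, ref_col, panorama_width):
    """Map a pixel column of a 360-degree cylindrical panorama to a
    deflection angle: the reference building's column is 0 degrees,
    angles grow in the chosen positive direction and wrap at 360."""
    return ((col - ref_col) % panorama_width) / panorama_width * 360.0
```

The modulo handles buildings whose columns fall on the other side of the reference column, since the panorama is cyclic.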
The building hierarchy acquisition module acquires the hierarchy information of buildings, i.e., the front-to-back occlusion order of a building relative to the buildings around it. For the same building, the hierarchy observed from different positions differs, so a building's hierarchy effectively characterizes its position relative to the observer. The invention obtains this information through the building hierarchy acquisition module.
The building segmentation images are processed to obtain each building's hierarchy value and relative inclination angle, as follows:
Count the IDs of each row of pixels in the building segmentation image and take the mode of each row's IDs as that row's ID. The result is a 1 × H ID sequence. For example, the ID sequence obtained in this embodiment is [2 2 2 3 3 3 1 1 0 0 0]^T.
Scan the non-zero IDs of the pixel ID sequence from bottom to top; the first distinct pixel ID encountered gets hierarchy value 1, and each further distinct pixel ID that appears gets the next larger value, yielding a hierarchy value for every pixel ID.
For the ID sequence of this embodiment, count the distinct non-zero IDs from bottom to top: they appear in the order 1, 3, 2, which means that in the building segmentation image part of building 3 is occluded by building 1 and part of building 2 is occluded by building 3. From the ID sequence it follows that building 1 has hierarchy value 1, building 3 has hierarchy value 2, and building 2 has hierarchy value 3.
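The bottom-to-top hierarchy assignment of this embodiment can be sketched as (function name illustrative):

```python
def hierarchy_values(id_sequence):
    """id_sequence is stored top to bottom, so scan it reversed:
    the first non-zero ID seen gets hierarchy value 1, and every
    newly encountered non-zero ID gets the next larger value."""
    levels, next_level = {}, 1
    for pid in reversed(id_sequence):
        if pid != 0 and pid not in levels:
            levels[pid] = next_level
            next_level += 1
    return levels
```

Applied to the embodiment's sequence [2 2 2 3 3 3 1 1 0 0 0]^T, it assigns building 1 value 1, building 3 value 2, and building 2 value 3, matching the occlusion order described above.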
At the same time, the relative inclination angle of each building is obtained from the ID sequence; the inclination angle is the angle between the ground and the line connecting the building's corner point to the shooting point. The relative inclination angle of a building is calculated as follows:
The number of rows H occupied by the pixel ID of each building is counted. When the hierarchy value of building m is not 1, the relative inclination angle is
[formula rendered as an image in the original publication; not reproduced here]
where Hm and Hml are the numbers of rows that the pixel IDs of the adjacent buildings m and ml occupy in the pixel ID sequence of the building segmentation image, ml is the pixel ID of the building one hierarchy level below building m, and f is the focal length of the camera;
when the hierarchy value of building m is 1, the relative inclination angle γm = 0°.
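The closed-form expression for the relative inclination angle appears only as an image in the published text. The sketch below is one plausible pinhole-camera reading, in which the row counts Hm and Hml act as image-plane heights and f as the focal length; the arctan combination is an assumption, not the published formula, and only the level-1 case (angle 0) is stated explicitly in the text.

```python
import math

def relative_inclination(h_m, h_ml, f, level_m):
    """Hedged sketch of the relative inclination angle, in degrees.

    Per the text, a building at hierarchy level 1 has angle 0. Otherwise
    the angle subtended by building m's rows, stacked above those of the
    occluding lower-level building ml, is estimated under a pinhole model
    with focal length f. The arctan form is an assumed reconstruction.
    """
    if level_m == 1:
        return 0.0
    return math.degrees(math.atan2(h_m + h_ml, f) - math.atan2(h_ml, f))
```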
And a positioning comparison module.
A first annular space description vector is obtained from the relative inclination angle, hierarchy value, deflection angle and attitude angle of each building. The annular space description vector is a ring-shaped vector in which the pixel ID of each building carries that building's relative inclination angle, hierarchy value, deflection angle and attitude angle.
A building feature number string, a building window number string and a window proportion feature number string are obtained from the CIM. The CIM (City Information Model) is built on technologies such as Building Information Modeling (BIM), Geographic Information Systems (GIS) and the Internet of Things (IoT); it integrates the multi-dimensional information models of the city above ground, underground, indoors and outdoors with historical and current city perception data to construct a three-dimensional digital organic complex of city information.
The invention uses the three-dimensional building model data in the CIM to acquire environmental information around the GPS-located place. The specific method is as follows: the GPS positioning point and the CIM around it are acquired, and the hierarchical relation between each building and its surrounding buildings is obtained from the viewpoint of the positioning point.
The corner point of each building closest to the GPS positioning point is acquired, the positioning point is connected with the highest point of each building's three-dimensional model, and the relative inclination angle of each building with respect to the building one hierarchy level below it is calculated.
From a top-down view, the projection on the ground plane of each building's corner point closest to the positioning point is selected and connected to the positioning point, yielding a set of line segments l'. Taking the ray from the positioning point through the shortest l' as the polar axis and the GPS positioning point as the pole, a positioning-point polar coordinate system is established; the polar angle of each l' is the deflection angle of the corresponding building. For each building, a corner-point polar coordinate system is established with the projection point as the pole and the ray from the projection point through the GPS positioning point as the polar axis, and the attitude angle formed, in that coordinate system, by the ray and the top edge with the smallest polar angle is obtained.
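The polar-coordinate construction above (pole at the GPS point, polar axis through the nearest corner projection) can be sketched as below. The helper name and coordinate handling are illustrative, not from the patent.

```python
import math

def deflection_angles(pole, corners):
    """Deflection angle of each building in a pole-centred polar system.

    pole    : (x, y) ground-plane position of the GPS positioning point.
    corners : (x, y) ground-plane projections of each building's nearest
              corner point (the far endpoints of the segments l').
    The ray through the corner with the shortest l' is the polar axis;
    each building's deflection angle is its corner's polar angle in
    degrees, normalized to [0, 360).
    """
    px, py = pole
    # (length of l', absolute bearing of the corner) for every building
    polar = [(math.hypot(x - px, y - py), math.atan2(y - py, x - px))
             for x, y in corners]
    axis = min(polar)[1]  # bearing of the shortest segment l'
    return [math.degrees(bearing - axis) % 360.0 for _, bearing in polar]

angles = deflection_angles((0, 0), [(1, 0), (0, 2), (-3, 0)])
# the nearest corner defines the polar axis, so its deflection angle is 0
```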
This yields a second annular space description vector with the same form as the first: each building carries its corresponding relative inclination angle, hierarchy value, deflection angle and attitude angle.
The second annular space description vector is compared with the first and their similarity is calculated; the deflection angle values in an annular space description vector depend on the choice of polar axis. The number of buildings in the first annular space description vector is μ, and μ matching rounds are performed, each round carrying out the following operations:
One building in the first annular space description vector is selected, its deflection angle is set to 0°, and the deflection angles of the other buildings are shifted accordingly.
One building is selected from each of the first and second annular space description vectors and their similarity is calculated. The deflection angle similarity is
[formula rendered as an image in the original publication; not reproduced here]
where θ1 and θ2 are the deflection angles of the two selected buildings, and CKθ is the minimum pairwise difference between the deflection angles of the second annular space description vector.
The attitude angle similarity is
[formula rendered as an image in the original publication; not reproduced here]
where ω1 and ω2 are the attitude angles of the two selected buildings, and CKω is the minimum pairwise difference between the attitude angles of the second annular space description vector;
the relative inclination angle similarity is f(γ) = DS × (1 − |γ1 − γ2|), where γ1 and γ2 are the relative inclination angles of the two selected buildings and DS is
[formula rendered as an image in the original publication; not reproduced here]
with d1 and d2 the hierarchy values of the two selected buildings.
The similarity index XS of the two buildings is
[formula rendered as an image in the original publication; not reproduced here]
A similarity index threshold α is set: when XS > α, the two buildings are judged similar; when XS ≤ α, they are judged not similar.
The number ms of similar building pairs and the number me of buildings in the second annular space description vector are obtained; the vector matching degree XP is
[formula rendered as an image in the original publication; not reproduced here]
A vector matching degree threshold β is set. If any matching round yields XP > β, the match is judged successful and the GPS positioning is accurate; if every round yields XP ≤ β, the match is judged failed and the GPS positioning is abnormal.
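To make the matching flow concrete, the fragment below sketches the per-building similarity index XS and the decision on the vector matching degree XP. Only f(γ) = DS × (1 − |γ1 − γ2|) is given explicitly in the text; the deflection and attitude terms (one minus a normalized absolute difference), the indicator form of DS, the averaging into XS and XP = ms/me are all assumptions standing in for formulas published only as images.

```python
def similarity_index(b1, b2, ck_theta, ck_omega):
    """Assumed similarity index XS between two building descriptors.

    b1, b2 are dicts with keys 'defl' (deflection angle), 'att' (attitude
    angle), 'incl' (relative inclination) and 'level' (hierarchy value).
    ck_theta / ck_omega are the minimum pairwise deflection / attitude
    angle differences of the second annular space description vector.
    """
    f_theta = 1.0 - abs(b1['defl'] - b2['defl']) / ck_theta   # assumed form
    f_omega = 1.0 - abs(b1['att'] - b2['att']) / ck_omega     # assumed form
    ds = 1.0 if b1['level'] == b2['level'] else 0.0           # assumed DS
    f_gamma = ds * (1.0 - abs(b1['incl'] - b2['incl']))       # from the text
    return (f_theta + f_omega + f_gamma) / 3.0                # assumed XS

def positioning_ok(ms, me, beta):
    """Assumed XP = ms / me; positioning is judged accurate when XP > beta."""
    return (ms / me) > beta
```

Identical descriptors score XS = 1 under these assumed forms, and a round with 3 of 4 buildings matched passes a threshold of β = 0.6.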
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention shall fall within its scope of protection.

Claims (9)

1. A personnel positioning anomaly detection system based on an intelligent Internet of Things and big data, characterized in that the system comprises:
a building detection module, used for acquiring a panoramic image, detecting the pixels of each building to obtain a building segmentation map, and segmenting each building from the building segmentation map to obtain building segmentation images;
a building attitude acquisition module, used for restoring each building segmentation image to a front-view image, detecting the top edges and corner points of the building in the front-view image, analyzing the top edges to obtain the attitude angle of the building from a top-down view, and obtaining the deflection angle of each building from the line connecting its corner point to the shooting point;
a building hierarchy acquisition module, used for counting the pixel IDs of each row in the building segmentation image to obtain a pixel ID sequence, and analyzing the pixel ID sequence to obtain the hierarchy value and relative inclination angle of the building;
a positioning comparison module, used for obtaining a first annular space description vector from the relative inclination angle, hierarchy value, deflection angle and attitude angle of each building, obtaining a second annular space description vector in the CIM from the GPS positioning point, and comparing the first annular space description vector with the second annular space description vector to judge whether the positioning is abnormal.
2. The system of claim 1, wherein segmenting each building to obtain a building segmentation image comprises:
the height of the building segmentation map is g; the abscissas of each building's pixels in the map are obtained, and the difference between their maximum and minimum is k; for each building, all pixel columns containing that building's pixels are segmented out, yielding a building segmentation image of width k and height g.
3. The system of claim 1, wherein detecting the top edges and corner points of the building in the front-view image and analyzing the top edges to obtain the attitude angle from a top-down view comprises:
if the building has two top edges, their intersection is the corner point; a top view of the roof is restored from the top edges, the projection of the corner point on the ground plane is taken as the reference point, the reference point and the shooting point are connected from a top-down view to obtain a straight line l, a reference-point polar coordinate system is established with the reference point as the pole and the ray from the reference point through the shooting point as the polar axis, and the projection of the top edge with the smallest polar angle in that coordinate system is taken as the reference top edge projection, giving the attitude angle formed by that projection and the line l;
if the building is detected to have only one top edge, its two endpoints are the corner points; the projection of the midpoint of the top edge on the ground plane is taken as the reference point, the reference point and the shooting point are connected from a top-down view to obtain a straight line l, and the attitude angle of the building is judged to be 90°.
4. The system of claim 3, wherein obtaining the deflection angle of each building from the line connecting its corner point to the shooting point comprises:
the lines l of all buildings are combined to obtain a global top view; a shooting-point polar coordinate system is established with the shooting point as the pole and the ray from the shooting point through the reference point of an arbitrarily chosen building as the polar axis; and the deflection angle of each building is obtained from the position of its reference point in the panoramic image.
5. The system of claim 1, wherein counting the pixel IDs of each row in the building segmentation image to obtain a pixel ID sequence specifically comprises:
the pixel IDs of different buildings differ, pixels belonging to no building have pixel ID 0, and each building segmentation image corresponds to the pixel ID of one building;
the mode of the pixel IDs in each row is taken as the ID of that row, and the row IDs are arranged in row order to obtain the pixel ID sequence.
6. The system of claim 1, wherein analyzing the pixel ID sequence to obtain the hierarchy value of the building comprises:
the non-zero IDs of the pixel ID sequence are analyzed from bottom to top: the first pixel ID to appear is assigned hierarchy value 1, and the hierarchy values of each subsequently appearing pixel ID increase by one, yielding the hierarchy value of every pixel ID.
7. The system of claim 1, wherein the relative inclination angle of a building is calculated as follows:
the number of rows H occupied by the pixel ID of each building is counted; when the hierarchy value of building m is not 1, the relative inclination angle is
[formula rendered as an image in the original publication; not reproduced here]
where Hm and Hml are the numbers of rows that the pixel IDs of the adjacent buildings m and ml occupy in the pixel ID sequence of the building segmentation image, ml is the pixel ID of the building one hierarchy level below building m, and f is the focal length of the camera;
when the hierarchy value of building m is 1, the relative inclination angle γm = 0°.
8. The system of claim 1, wherein obtaining the second annular space description vector in the CIM from the GPS positioning point comprises:
acquiring the GPS positioning point and the CIM around it, and obtaining the hierarchy value of each building from the viewpoint of the positioning point;
acquiring the corner point of each building closest to the GPS positioning point, connecting the positioning point with the highest point of each building's three-dimensional model, and calculating the relative inclination angle of each building with respect to the building one hierarchy level below it;
from a top-down view, selecting the projection on the ground plane of each building's corner point closest to the positioning point and connecting it to the positioning point to obtain a set of line segments l'; taking the ray from the positioning point through the shortest l' as the polar axis and the GPS positioning point as the pole, establishing a positioning-point polar coordinate system, the polar angle of each l' being the deflection angle of the corresponding building; and, for each building, establishing a corner-point polar coordinate system with the projection point as the pole and the ray from the projection point through the GPS positioning point as the polar axis, and obtaining the attitude angle formed, in that coordinate system, by the ray and the top edge with the smallest polar angle.
9. The system of claim 1, wherein comparing the first annular space description vector with the second annular space description vector and calculating their similarity to judge whether the positioning is abnormal comprises:
the number of buildings in the first annular space description vector is μ, and μ matching rounds are performed, each round carrying out the following operations:
selecting one building in the first annular space description vector, setting its deflection angle to 0° and shifting the deflection angles of the other buildings accordingly;
selecting one building from each of the first and second annular space description vectors and calculating their similarity: the deflection angle similarity is
[formula rendered as an image in the original publication; not reproduced here]
where θ1 and θ2 are the deflection angles of the two selected buildings, and CKθ is the minimum pairwise difference between the deflection angles of the second annular space description vector;
the attitude angle similarity is
[formula rendered as an image in the original publication; not reproduced here]
where ω1 and ω2 are the attitude angles of the two selected buildings, and CKω is the minimum pairwise difference between the attitude angles of the second annular space description vector;
the relative inclination angle similarity is f(γ) = DS × (1 − |γ1 − γ2|), where γ1 and γ2 are the relative inclination angles of the two selected buildings and DS is
[formula rendered as an image in the original publication; not reproduced here]
with d1 and d2 the hierarchy values of the two selected buildings;
the similarity index XS of the two buildings is
[formula rendered as an image in the original publication; not reproduced here]
a similarity index threshold α is set: when XS > α, the two buildings are judged similar; when XS ≤ α, they are judged not similar;
the number ms of similar building pairs and the number me of buildings in the second annular space description vector are obtained; the vector matching degree XP is
[formula rendered as an image in the original publication; not reproduced here]
a vector matching degree threshold β is set; if any matching round yields XP > β, the match is judged successful and the GPS positioning is accurate; if every round yields XP ≤ β, the match is judged failed and the GPS positioning is abnormal.
CN202110381158.8A 2021-04-09 2021-04-09 Personnel positioning abnormity detection system based on intelligent Internet of things and big data Active CN113112544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110381158.8A CN113112544B (en) 2021-04-09 2021-04-09 Personnel positioning abnormity detection system based on intelligent Internet of things and big data


Publications (2)

Publication Number Publication Date
CN113112544A true CN113112544A (en) 2021-07-13
CN113112544B CN113112544B (en) 2022-07-19

Family

ID=76715179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110381158.8A Active CN113112544B (en) 2021-04-09 2021-04-09 Personnel positioning abnormity detection system based on intelligent Internet of things and big data

Country Status (1)

Country Link
CN (1) CN113112544B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107449434A (en) * 2016-05-31 2017-12-08 法拉第未来公司 Safety vehicle navigation is carried out using Positioning estimation error boundary
CN110675034A (en) * 2019-09-06 2020-01-10 河北省水利水电勘测设计研究院 Ecological dredging intelligent management and control system and method based on CIM and GIS
CN110749308A (en) * 2019-09-30 2020-02-04 浙江工业大学 SLAM-oriented outdoor positioning method using consumer-grade GPS and 2.5D building models
CN110926474A (en) * 2019-11-28 2020-03-27 南京航空航天大学 Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method
CN111189440A (en) * 2019-12-31 2020-05-22 中国电建集团华东勘测设计研究院有限公司 Positioning navigation method based on comparison of spatial information model and real-time image
CN111612895A (en) * 2020-05-27 2020-09-01 魏寸新 Leaf-shielding-resistant CIM real-time imaging method for detecting abnormal parking of shared bicycle
CN111694879A (en) * 2020-05-22 2020-09-22 北京科技大学 Multivariate time series abnormal mode prediction method and data acquisition monitoring device
CN111783676A (en) * 2020-07-03 2020-10-16 郑州迈拓信息技术有限公司 Intelligent urban road video continuous covering method based on key area perception
CN112348887A (en) * 2019-08-09 2021-02-09 华为技术有限公司 Terminal pose determining method and related device
WO2021027676A1 (en) * 2019-08-09 2021-02-18 华为技术有限公司 Visual positioning method, terminal, and server




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Personnel Localization Anomaly Detection System Based on Intelligent Internet of Things and Big Data

Granted publication date: 20220719

Pledgee: Bank of China Limited Yixing branch

Pledgor: Guoneng smart technology development (Jiangsu) Co.,Ltd.

Registration number: Y2024980012078