CN111028344A - Fusion method and system for personnel positioning and three-dimensional visualization - Google Patents

Fusion method and system for personnel positioning and three-dimensional visualization

Info

Publication number
CN111028344A
CN111028344A (application number CN201911302148.XA)
Authority
CN
China
Prior art keywords
positioning
data
dimensional
coordinate
personnel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911302148.XA
Other languages
Chinese (zh)
Inventor
杨佩星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weijing Shikong Information Technology Co ltd
Original Assignee
Nanjing Weijing Shikong Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weijing Shikong Information Technology Co ltd filed Critical Nanjing Weijing Shikong Information Technology Co ltd
Priority to CN201911302148.XA priority Critical patent/CN111028344A/en
Publication of CN111028344A publication Critical patent/CN111028344A/en
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigational instruments specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/36: Input/output arrangements for on-board computers
    • G01C21/3626: Details of the output of route guidance instructions
    • G01C21/3635: Guidance using 3D or perspective road maps
    • G01C21/3638: Guidance using 3D or perspective road maps including 3D objects and buildings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/52: Network services specially adapted for the location of the user terminal
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/55: Push-based network services

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Automation & Control Theory (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a fusion method and system for personnel positioning and three-dimensional visualization, belonging to the field of internet technology. The method comprises the following steps: step one, establishing a three-dimensional model; step two, navigation grid preprocessing; step three, coordinate system preprocessing; step four, receiving personnel positioning data; step five, reading personnel positioning data; and step six, processing personnel positioning data. By processing positioning data against the navigation grid, the method compensates for UWB positioning's inability to provide height information and can dynamically avoid obstacles; because the three-dimensional character model's position is refreshed less frequently than the personnel positioning data arrive, performance overhead is reduced; and the accompanying positioning-data processing algorithm ensures that the displayed positions are reasonable, real-time and continuous.

Description

Fusion method and system for personnel positioning and three-dimensional visualization
Technical Field
The invention belongs to the technical field of internet, and particularly relates to a fusion method and a fusion system of personnel positioning and three-dimensional visualization.
Background
A map, as a symbolic model of the objective world, can represent the spatial structure of the mapped objects as well as their change over time; by building mathematical models, digitizing graphics and processing them by computer, maps can support evaluation, prediction, planning and decision-making as needed.
The map recognition function includes acquiring knowledge of the spatial structure and temporal evolution of mapped objects through graphic analysis, and deriving shaded renderings of mapped objects through quantitative map analysis.
Exercising this recognition function lets maps play their full role in pattern analysis, comprehensive evaluation, prediction, decision-making, planning, design and command management.
However, a conventional UWB positioning system paired with a two-dimensional map lacks height information and cannot reflect height changes in the positioning data, such as going up or down stairs and slopes; multiple two-dimensional maps lack relevance to one another, so personnel positioning data have no smooth transition when switching between maps; and a two-dimensional map has no obstacle-avoidance capability, so it cannot correct anomalous personnel positions, such as apparent wall penetration, caused by insufficient positioning accuracy.
Disclosure of Invention
To solve the technical problems described in the background section, the invention replaces the traditional two-dimensional personnel-positioning map and person icons with a three-dimensional scene model and three-dimensional character models, and processes the personnel positioning data in combination with the three-dimensional scene, thereby optimizing the display of personnel positions.
The invention adopts the following technical scheme: a fusion method of personnel positioning and three-dimensional visualization specifically comprises the following steps:
step one, establishing a three-dimensional model: the system comprises a scene model and a character model; establishing a three-dimensional scene model, and establishing a three-dimensional character model containing walking animation;
step two, navigation grid preprocessing;
step three, coordinate system preprocessing: in the three-dimensional scene model, set a coordinate origin and an area number for each positioning area, so that the origin position and the X/Y axis directions are consistent with the corresponding two-dimensional map;
step four, receiving personnel positioning data: acquiring by adopting a communication protocol and logic and storing the personnel positioning data in a buffer pool;
step five, reading personnel positioning data: reading the latest personnel positioning data of all the positioning cards from the buffer pool every 0.5 second, and sequentially carrying out personnel positioning processing;
and step six, processing personnel positioning data.
In a further embodiment, the navigation grid preprocessing specifically comprises: importing the three-dimensional scene model and the three-dimensional character model from step one into the 3D software, and baking the three-dimensional scene model with the 3D software's navigation grid function to generate navigation grid data.
In a further embodiment, the personnel location data specifically includes: personnel basic information, a positioning card number, an area number, an original positioning coordinate and a timestamp; the positioning card number is used as a unique identifier and represents a positioning person.
In a further embodiment, the communication protocol is WebSocket and the personnel positioning data are updated every 0.1 second; after the client and the server establish a communication connection, the server actively pushes personnel positioning data to the client; on receipt, the client distinguishes records by positioning card number, and for each positioning card number only the latest record is kept in the buffer pool.
In a further embodiment, the step six specifically includes the following steps:
step 601, real-time processing
When a person's positioning data is processed, it is first judged whether the data is valid. Taking a limit of 60 seconds as an example: if the timestamp is more than 60 seconds behind the current time, no data has been received from that positioning card within the last minute, so the data is considered expired and invalid, the corresponding three-dimensional character model is hidden, and processing of this record ends; if the timestamp is within 60 seconds of the current time, the data is considered valid and processing continues with step 602;
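The expiry check of step 601 can be sketched as follows. This is a minimal illustration: the 60-second limit comes from the text, while the function name and record fields are our own assumptions.

```python
import time

EXPIRY_SECONDS = 60  # records older than this are treated as expired


def is_record_valid(record, now=None):
    """Return True if the positioning record's timestamp is within the limit."""
    now = time.time() if now is None else now
    return (now - record["timestamp"]) <= EXPIRY_SECONDS


# A record 30 s old is still valid; one 91 s old is expired, so its
# three-dimensional character model would be hidden.
fresh = {"card_no": "A001", "timestamp": 1000.0}
stale = {"card_no": "A002", "timestamp": 939.0}
print(is_record_valid(fresh, now=1030.0))  # True
print(is_record_valid(stale, now=1030.0))  # False
```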
step 602, mapping the original positioning coordinates to world positioning coordinates
The area number and the original positioning coordinates of the personnel positioning data jointly determine the position. The coordinate origin configured for that area number is looked up in the three-dimensional scene model, and the original positioning coordinates are converted from the area's origin-based coordinate system into 3D world coordinates; the converted coordinates are the world positioning coordinates. Processing continues with step 603;
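Step 602 amounts to translating an area-local 2D coordinate by that area's configured origin. A minimal sketch under stated assumptions: the origin table, the mapping of the 2D map's X/Y onto world X/Z, and all names are illustrative, and a real scene may also require a rotation if area axes are not aligned with world axes.

```python
# Per-area origins in world space, as configured during coordinate-system
# preprocessing (step three). These values are illustrative.
AREA_ORIGINS = {
    1: (0.0, 0.0, 0.0),    # area 1: ground floor
    2: (50.0, 4.0, 10.0),  # area 2: first floor, offset and raised
}


def to_world(area_no, x, y):
    """Map an area-local (x, y) positioning coordinate to 3D world coordinates.

    The original coordinates carry no height, so the world height comes
    entirely from the area origin; the 2D map's X/Y are laid onto world
    X/Z here (an assumed axis convention).
    """
    ox, oy, oz = AREA_ORIGINS[area_no]
    return (ox + x, oy, oz + y)


print(to_world(2, 3.0, 7.0))  # (53.0, 4.0, 17.0)
```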
step 603, mapping the world positioning coordinates to navigation grid positioning coordinates
Search the navigation grid within a sphere centred on the world positioning coordinates, with a radius in the range of 2-3 meters. If no navigation grid is found, the data is considered invalid and processing ends; otherwise, the coordinates of the nearest navigation grid point found are taken as the navigation grid positioning coordinates, and processing continues with step 604;
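Step 603 snaps a possibly erroneous world coordinate onto the nearest point of the navigation grid within a search radius; in Unity this is what `NavMesh.SamplePosition` does against the baked mesh. The engine-agnostic sketch below searches a sampled set of walkable points instead; the point set and names are illustrative assumptions.

```python
import math


def nearest_navmesh_point(world_pos, navmesh_points, radius=2.5):
    """Return the nearest walkable point within `radius`, or None.

    `navmesh_points` stands in for the baked navigation grid; a real
    engine queries the mesh itself (e.g. Unity's NavMesh.SamplePosition).
    """
    best, best_d = None, radius
    for p in navmesh_points:
        d = math.dist(world_pos, p)
        if d <= best_d:
            best, best_d = p, d
    return best


walkable = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
# A point hovering 0.5 m above the floor snaps down onto the grid.
print(nearest_navmesh_point((1.2, 0.5, 0.0), walkable))  # (1.0, 0.0, 0.0)
# A point far from any walkable area yields None, so the record is discarded.
print(nearest_navmesh_point((40.0, 0.0, 0.0), walkable))  # None
```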
step 604, positioning data small-range fluctuation filtering
Calculate the distance between the three-dimensional character model's current coordinates and the navigation grid positioning coordinates. If the distance does not exceed the movement threshold, the record is treated as invalid data caused by small-range fluctuation and discarded, and processing ends; if the distance exceeds the movement threshold, the data is valid and the model needs to move, so processing continues with step 605;
step 605, determine if the position of the three-dimensional character model needs to be synchronized immediately
If the navigation grid positioning coordinates have been far from the three-dimensional character model's current position for several consecutive updates, the model's position should be synchronized immediately. Calculate the distance between the model's current coordinates and the navigation grid positioning coordinates. If the distance does not exceed the synchronization distance threshold, reset the synchronization tolerance count to zero and continue with step 606. If the distance exceeds the synchronization distance threshold, add 1 to the synchronization tolerance count; if the count now exceeds the synchronization tolerance threshold, reset the count, immediately synchronize the model's position to the navigation grid positioning coordinates, and end processing; if it does not, leave the count unchanged and continue with step 606;
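Steps 604 and 605 together form a small per-card state machine: small displacements are ignored as jitter, moderate ones are walked, and repeatedly large ones eventually teleport the model. A sketch using the threshold values the text gives (movement threshold 1 m, synchronization distance threshold 10 m, synchronization tolerance 5 times); the class and method names are our own, and walking itself is left to the engine's navigation agent, so here only a teleport updates the stored position.

```python
import math

MOVE_THRESHOLD = 1.0             # metres; below this, treat as jitter (step 604)
SYNC_DISTANCE_THRESHOLD = 10.0   # metres (step 605)
SYNC_TOLERANCE = 5               # consecutive far updates allowed before teleporting


class CharacterSync:
    def __init__(self, position):
        self.position = position
        self.far_count = 0

    def update(self, target):
        """Return the action for this update: 'ignore', 'walk' or 'teleport'."""
        d = math.dist(self.position, target)
        if d <= MOVE_THRESHOLD:              # step 604: small fluctuation, discard
            return "ignore"
        if d <= SYNC_DISTANCE_THRESHOLD:     # step 605: near enough to walk there
            self.far_count = 0
            return "walk"
        self.far_count += 1                  # far away: accumulate tolerance
        if self.far_count > SYNC_TOLERANCE:  # too many far updates in a row
            self.far_count = 0
            self.position = target           # immediate synchronization
            return "teleport"
        return "walk"


c = CharacterSync((0.0, 0.0, 0.0))
print(c.update((0.5, 0.0, 0.0)))   # 'ignore'  (within the 1 m movement threshold)
print(c.update((5.0, 0.0, 0.0)))   # 'walk'    (within the 10 m sync threshold)
for _ in range(5):
    c.update((30.0, 0.0, 0.0))     # five far updates: tolerance accumulates
print(c.update((30.0, 0.0, 0.0)))  # 'teleport' on the sixth far update
```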
step 606, calculate the set of waypoints
Set the three-dimensional character model's target point to the navigation grid positioning coordinates, then calculate the set of path points from the model's current position to the target point; the set comprises the start point, the end point and all inflection points, ordered from start to end;
step 607, calculating the limited speed value of the three-dimensional character model
Calculate and accumulate the straight-line distances between adjacent points in the path point set to obtain the actual path length, and divide the actual path length by the system's personnel-positioning refresh interval to obtain the average speed value.
Clamp the calculated average speed: if the average speed is below the lower speed limit, use the lower limit; if it is above the upper speed limit, use the upper limit.
The limited speed value is used as the actual speed of the three-dimensional character model, which completes the processing flow for one positioning card's personnel positioning data.
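Steps 606 and 607 can be sketched as follows: accumulate straight-line segment lengths over the path points, divide by the refresh interval, and clamp. The 0.5-second interval comes from step five; the specific speed limits are illustrative assumptions, since the text does not give their values.

```python
import math

REFRESH_INTERVAL = 0.5  # seconds between positioning reads (step five)
SPEED_MIN = 1.0         # m/s, illustrative lower speed limit
SPEED_MAX = 6.0         # m/s, illustrative upper speed limit


def path_length(points):
    """Actual path length: sum of straight-line distances between neighbours."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))


def clamped_speed(points, interval=REFRESH_INTERVAL):
    """Average speed over the path, clamped to [SPEED_MIN, SPEED_MAX]."""
    v = path_length(points) / interval
    return min(max(v, SPEED_MIN), SPEED_MAX)


# Start, one inflection point, end: 3 m + 4 m = 7 m of path.
waypoints = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (3.0, 0.0, 4.0)]
print(path_length(waypoints))    # 7.0
print(clamped_speed(waypoints))  # 6.0  (14 m/s raw, clamped to the upper limit)
```

Clamping keeps the walking animation plausible: a burst of positioning error cannot make the character sprint unrealistically, and tiny residual moves never stall below the minimum pace.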
In a further embodiment, the distance D1 between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates in steps 604 and 605 is calculated as
$$D_1 = \lVert V_1 - V_0 \rVert = \sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2}$$
where $V_0$ is the current position of the three-dimensional character model, in three-dimensional coordinate form, and $V_1$ is the navigation grid positioning coordinate obtained in the preceding step;
the movement threshold is 1 meter, the synchronization distance threshold is 10 meters, and the synchronization tolerance threshold is 5 times.
In a further embodiment, the average speed value in step 607 is calculated as follows:
all path point coordinates are denoted $P_0, P_1, P_2, \dots, P_n$;
the distance between two adjacent points is denoted
$$D_{i,i+1} = \lVert P_{i+1} - P_i \rVert ;$$
the actual path length is
$$L = D_{0,1} + D_{1,2} + \dots + D_{n-1,n} ;$$
the interval at which the system refreshes the personnel positioning data is denoted $T$;
the average speed value is
$$\bar{v} = \frac{L}{T}.$$
A fusion system of personnel positioning and three-dimensional visualization comprises the following modules:
a first module for building a three-dimensional model;
a second module for pre-processing the navigation grid;
a third module for preprocessing the coordinate system;
a fourth module for receiving personnel positioning data;
a fifth module for reading personnel positioning data;
a sixth module for processing the personnel positioning data.
In a further embodiment, the first module is further configured to create a three-dimensional scene model and a three-dimensional character model;
the second module is further used, during navigation grid preprocessing, to import the three-dimensional scene model and the three-dimensional character model into the 3D software and bake the three-dimensional scene model with its navigation grid function to generate navigation grid data;
the third module is further used, during coordinate system preprocessing, to handle the personnel positioning data, which comprise: personnel basic information, positioning card number, area number, original positioning coordinates and timestamp; the positioning card number is set as a unique identifier representing a positioned person; the area number represents the area the person is in, each area corresponds to a two-dimensional map, any point in which can be represented by X/Y coordinates, and the original positioning coordinates represent the person's specific position within the area; in the three-dimensional scene model, a coordinate origin is set for each positioning area so that the origin position and coordinate-axis directions are consistent with the two-dimensional map;
the fourth module is further used to receive personnel positioning data: the data are acquired using a communication protocol and logic and stored in a buffer pool located in the fourth module; the communication protocol is WebSocket and the personnel positioning data are updated every 0.1 second; after the client and the server establish a communication connection, the server actively pushes personnel positioning data to the client; on receipt, the client distinguishes records by positioning card number, and for each positioning card number only the latest record is kept in the buffer pool.
In a further embodiment, the fifth module is further configured to read personnel positioning data: reading the latest personnel positioning data of all the positioning cards from the buffer pool every 0.5 second, and sequentially carrying out personnel positioning processing;
the sixth module is further used for processing personnel positioning data, and specifically comprises the following steps:
step A, real-time processing
When a person's positioning data is processed, it is first judged whether the data is valid. Taking a limit of 60 seconds as an example: if the timestamp is more than 60 seconds behind the current time, no data has been received from that positioning card within the last minute, so the data is considered expired and invalid, the corresponding three-dimensional character model is hidden, and processing of this record ends; if the timestamp is within 60 seconds of the current time, the data is considered valid and processing continues with step B;
step B, mapping the original positioning coordinates to world positioning coordinates
The area number and the original positioning coordinates of the personnel positioning data jointly determine the position. The coordinate origin configured for that area number is looked up in the three-dimensional scene model, and the original positioning coordinates are converted from the area's origin-based coordinate system into 3D world coordinates; the converted coordinates are the world positioning coordinates. Processing continues with step C;
step C, mapping the world positioning coordinates to navigation grid positioning coordinates
Search the navigation grid within a sphere centred on the world positioning coordinates, with a radius in the range of 2-3 meters. If no navigation grid is found, the data is considered invalid and processing ends; otherwise, the coordinates of the nearest navigation grid point found are taken as the navigation grid positioning coordinates, and processing continues with step D;
step D, positioning data small-range fluctuation filtering
Calculate the distance between the three-dimensional character model's current coordinates and the navigation grid positioning coordinates. If the distance does not exceed the movement threshold, the record is treated as invalid data caused by small-range fluctuation and discarded, and processing ends; if the distance exceeds the movement threshold, the data is valid and the model needs to move, so processing continues with step E;
step E, judging whether the position of the three-dimensional character model needs to be synchronized immediately
If the navigation grid positioning coordinates have been far from the three-dimensional character model's current position for several consecutive updates, the model's position should be synchronized immediately. Calculate the distance between the model's current coordinates and the navigation grid positioning coordinates. If the distance does not exceed the synchronization distance threshold, reset the synchronization tolerance count to zero and continue with step F. If the distance exceeds the synchronization distance threshold, add 1 to the synchronization tolerance count; if the count now exceeds the synchronization tolerance threshold, reset the count, immediately synchronize the model's position to the navigation grid positioning coordinates, and end processing; if it does not, leave the count unchanged and continue with step F;
step F, calculating a path point set
Set the three-dimensional character model's target point to the navigation grid positioning coordinates, then calculate the set of path points from the model's current position to the target point; the set comprises the start point, the end point and all inflection points, ordered from start to end;
step G, calculating the limited speed value of the three-dimensional character model
Calculate and accumulate the straight-line distances between adjacent points in the path point set to obtain the actual path length, and divide the actual path length by the system's personnel-positioning refresh interval to obtain the average speed value.
The average speed value is calculated as follows:
all path point coordinates are denoted $P_0, P_1, P_2, \dots, P_n$;
the distance between two adjacent points is denoted
$$D_{i,i+1} = \lVert P_{i+1} - P_i \rVert ;$$
the actual path length is
$$L = D_{0,1} + D_{1,2} + \dots + D_{n-1,n} ;$$
the interval at which the system refreshes the personnel positioning data is denoted $T$;
the average speed value is
$$\bar{v} = \frac{L}{T}.$$
Clamp the calculated average speed: if the average speed is below the lower speed limit, use the lower limit; if it is above the upper speed limit, use the upper limit.
The limited speed value is used as the actual speed of the three-dimensional character model, which completes the processing flow for one positioning card's personnel positioning data;
the distance D1 between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates is calculated as
$$D_1 = \lVert V_1 - V_0 \rVert = \sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2}$$
where $V_0$ is the current position of the three-dimensional character model, in three-dimensional coordinate form, and $V_1$ is the navigation grid positioning coordinate obtained in the preceding step;
the movement threshold is 1 meter, the synchronization distance threshold is 10 meters, and the synchronization tolerance threshold is 5 times.
The invention has the following beneficial effects. First, by processing positioning data against the navigation grid, the invention compensates for UWB positioning's inability to provide height information and can dynamically avoid obstacles.
Second, because the three-dimensional character model's position is refreshed less frequently than the personnel positioning data arrive, performance overhead is reduced; combined with the personnel-positioning-data processing algorithm, the reasonableness, real-time behaviour and continuity of the displayed positions are guaranteed.
Finally, the invention maps the original positioning coordinates to world positioning coordinates, bringing every positioning area into one unified coordinate system, so personnel positioning data transition seamlessly when switching between positioning areas.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flowchart of the method for reading and processing the personnel location data.
FIG. 3 is a flow chart of the present invention for processing locator card data.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention.
The applicant notes that a conventional UWB personnel positioning system merely shows personnel positions on a two-dimensional map, but a two-dimensional map lacks height information and cannot reflect height changes in the positioning data, such as going up or down stairs and slopes. Meanwhile, multiple two-dimensional maps lack relevance to one another, so personnel positioning data have no smooth transition when switching between maps. In addition, a two-dimensional map has no obstacle-avoidance capability, so anomalous personnel positions caused by insufficient positioning accuracy, such as apparent wall penetration, cannot be corrected.
To solve these technical problems, the applicant combines personnel positioning data with three-dimensional visualization technology, addressing the lack of height information, the absence of transitions when switching maps, and the lack of collision detection when personnel positioning information is displayed on a two-dimensional map.
The technical scheme of the invention is further explained below through an embodiment implemented with 3dsmax modeling software, CAD drawings, on-site photography, Unity3D software, and the like.
As shown in fig. 1, a method for fusing human localization and three-dimensional visualization specifically includes the following steps:
step one, establishing a three-dimensional model: the system comprises a scene model and a character model;
and establishing a three-dimensional scene model, and manufacturing the three-dimensional scene model by adopting modeling software such as 3dsmax and the like and combining cad drawing of a real scene and data information of field photographing measurement.
And establishing a three-dimensional character model, and manufacturing the three-dimensional character model including walking animation by adopting modeling software such as 3dsmax and the like.
Step two, navigation grid preprocessing: import the three-dimensional scene model and the three-dimensional character model into Unity3D. Bake the three-dimensional scene model with Unity3D's navigation grid (NavMesh) function to generate navigation grid data. Areas with navigation grid data represent walkable areas such as roads, room interiors, corridors and stairways; areas without navigation grid data represent non-walkable areas such as walls, trees and pools.
Step three, coordinate system preprocessing: the personnel positioning data comprise personnel basic information, positioning card number, area number, original positioning coordinates and timestamp, where the personnel basic information includes name, gender, job title and the like. The positioning card number is set as a unique identifier representing a positioned person; the area number represents the area the person is in; each area corresponds to a two-dimensional map, any point in which can be represented by X/Y coordinates; and the original positioning coordinates represent the person's specific position within the area. In the three-dimensional scene model, a coordinate origin is set for each positioning area so that the origin position and coordinate-axis directions are consistent with the two-dimensional map. The positioning card is carried on the positioned person's body, and several base stations are installed in the positioning area. The positioning card continuously transmits signals, and the surrounding base stations that receive them calculate the card's position. The positioning card may also be a mobile phone, with satellites serving as the base stations.
Step four, receiving personnel positioning data: the data are acquired using a communication protocol and logic and stored in a buffer pool. The communication protocol is WebSocket and the personnel positioning data are updated every 0.1 second; after the client and the server establish a communication connection, the server actively pushes personnel positioning data to the client; on receipt, the client distinguishes records by positioning card number, and for each positioning card number only the latest record is kept in the buffer pool. Conventional UWB positioning data has its own refresh frequency (e.g. every 0.1 second) and the three-dimensional system has another (e.g. every 0.02 second); because the two differ, a dedicated thread in the three-dimensional application receives the UWB data and stores it in the buffer pool, from which the three-dimensional system takes data when it needs it (e.g. every 0.5 second).
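The buffer-pool decoupling described above, where a producer thread writes at the UWB rate and the 3D system reads at its own rate while only the latest record per card is kept, can be sketched with a lock-protected dictionary. WebSocket transport details are omitted, and the class and field names are illustrative assumptions.

```python
import threading


class LatestRecordPool:
    """Keeps only the newest positioning record per positioning card number."""

    def __init__(self):
        self._lock = threading.Lock()
        self._latest = {}

    def push(self, record):
        """Called by the receive thread for every pushed record (about every 0.1 s);
        a newer record for the same card number supersedes the older one."""
        with self._lock:
            self._latest[record["card_no"]] = record

    def snapshot(self):
        """Called by the 3D system (about every 0.5 s): the latest record of every card."""
        with self._lock:
            return list(self._latest.values())


pool = LatestRecordPool()
pool.push({"card_no": "A001", "x": 1.0, "y": 2.0, "timestamp": 10.0})
pool.push({"card_no": "A001", "x": 1.1, "y": 2.0, "timestamp": 10.1})  # supersedes
pool.push({"card_no": "B007", "x": 5.0, "y": 5.0, "timestamp": 10.1})
print(len(pool.snapshot()))  # 2 cards, one latest record each
```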
Step five, reading personnel positioning data: the frequency at which the system reads and processes positioning data is lower than or equal to the acquisition frequency, balancing computational cost against display quality. Every 0.5 second, the latest positioning data of all positioning cards are read from the buffer pool and processed in turn.
Step six, processing personnel positioning data, and specifically comprising the following steps:
step 601, real-time processing
The timestamp of a personnel positioning record represents the time the data corresponds to. When a record is processed, it is first judged whether the data is valid. Taking a limit of 60 seconds as an example: if the timestamp is more than 60 seconds behind the current time, no data has been received from that positioning card within the last minute, so the data is considered expired and invalid, the corresponding three-dimensional character model is hidden, and processing of this record ends; if the timestamp is within 60 seconds of the current time, the data is considered valid and processing continues with step 602;
step 602, mapping the original positioning coordinates to world positioning coordinates
The area number and the original positioning coordinates of the personnel positioning data jointly determine the position. In the three-dimensional scene model, the coordinate origin configured for that area number is looked up, and the original positioning coordinates are converted from the area's origin-based coordinate system into Unity3D world coordinates; the converted coordinates are the world positioning coordinates. Processing continues with step 603;
step 603, mapping the world positioning coordinates to navigation grid positioning coordinates
The accuracy of personnel positioning data is affected by the number of base stations, occlusion by obstacles, the engine algorithm, and other factors, so it carries a certain error; moreover, conventional UWB positioning provides no effective height information. The three-dimensional scene model is likewise limited by drawing precision and by the precision of survey photography, so it too carries a certain error. The world positioning coordinates may therefore be wrong in ways that produce abnormal display effects. For example, a person actually standing on a pond shore may be placed on the water surface beside the pond; a person on a slope may appear suspended above it or sunk into it, because the height of the world positioning coordinates simply equals that of the area's coordinate origin (the original positioning coordinates carry no height). To correct this, the navigation grid is searched within a radius of 2-3 meters, taking the world positioning coordinates as the sphere center. If no navigation grid is found, the data is considered invalid and data processing ends; otherwise, the coordinates of the nearest navigation grid point found are the navigation grid positioning coordinates, and processing proceeds to step 604.
A radius of 2-3 meters is usually suitable. If the personnel positioning data is highly accurate (more base stations, at higher cost), the converted world positioning coordinates lie close to the true position and the search radius can be smaller; if the accuracy is low, the converted coordinates may deviate substantially from the true position and the search radius must be enlarged.
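In Unity3D this mapping would typically use the engine's navigation-mesh sampling facility; as a language-neutral illustration, a brute-force nearest-point search over a hypothetical list of navigation-mesh sample points might look like:

```python
import math

def nearest_navmesh_point(world_pos, navmesh_points, radius=2.5):
    """Stand-in for an engine call such as Unity's NavMesh.SamplePosition:
    find the closest navigation-mesh point within `radius` of the world
    positioning coordinate, or return None (data treated as invalid)."""
    best, best_d = None, radius
    for p in navmesh_points:
        d = math.dist(world_pos, p)
        if d <= best_d:
            best, best_d = p, d
    return best
```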
Step 604, positioning data small-range fluctuation filtering
The distance between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates is calculated. If the distance does not exceed the movement threshold, the record is considered invalid, being mere small-range fluctuation of the data, and data processing ends; if the distance exceeds the movement threshold, the record is considered valid, the model needs to move, and processing proceeds to step 605.
Step 605, determine if the position of the three-dimensional character model needs to be synchronized immediately
This step tracks whether the navigation grid positioning coordinates have been far from the three-dimensional character model's current position several times in succession. The distance between the model's current coordinates and the navigation grid positioning coordinates is calculated. If the distance does not exceed the synchronization distance threshold, the synchronization tolerance counter is reset and processing proceeds to step 606. If the distance exceeds the threshold, the counter is incremented by 1 and compared with the synchronization tolerance threshold: if the counter exceeds the threshold, it is reset, the position of the three-dimensional character model is immediately synchronized to the navigation grid positioning coordinates, and data processing ends; otherwise the counter keeps its new value and processing proceeds to step 606;
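A minimal Python sketch of this tolerance-counter decision, for illustration (the class name is hypothetical; the constants mirror the 10-meter and 5-times values stated in the text):

```python
SYNC_DISTANCE = 10.0   # metres; beyond this the model may be snapped
SYNC_TOLERANCE = 5     # consecutive far samples tolerated before snapping

class SyncGate:
    """Decides whether to walk the model toward the target or snap it there."""

    def __init__(self):
        self.far_count = 0

    def should_teleport(self, distance):
        if distance <= SYNC_DISTANCE:
            self.far_count = 0       # back in range: reset the tolerance
            return False             # walk normally (continue to step 606)
        self.far_count += 1
        if self.far_count > SYNC_TOLERANCE:
            self.far_count = 0       # snap, then start counting afresh
            return True              # synchronize position immediately
        return False                 # still tolerated: keep walking
```

The gate only teleports after more than five consecutive out-of-range samples, so a single noisy reading never causes a visible jump.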
step 606, calculate the set of waypoints
Setting a target point of the three-dimensional character model as a navigation grid positioning coordinate, calculating a path point set from the current position of the three-dimensional character model to the target point after setting, wherein the set comprises a starting point, an end point and all inflection points, and is arranged according to the sequence from the starting point to the end point;
step 607, calculating the defined velocity values of the three-dimensional character model
The straight-line distances between adjacent points in the path point set are calculated and accumulated to give the actual path length, and this length is divided by the time in which the system refreshes the personnel positioning data to give an average speed value.
The speed is then limited based on the calculated average: if the average speed value is below the lower speed limit, the lower limit is used; if it is above the upper speed limit, the upper limit is used.
The limited speed value is taken as the actual speed of the three-dimensional character model, completing the processing flow for one positioning card's personnel positioning data.
The average velocity value is calculated as follows:
all waypoint coordinates are noted as: P0, P1, P2, ..., Pn;
the distance between two adjacent points is noted as: D(i,i+1) = |P(i+1) - Pi|, the straight-line distance between Pi and P(i+1);
the actual path length is noted as: L = D(0,1) + D(1,2) + ... + D(n-1,n);
the time in which the system refreshes the personnel positioning data is noted as: T;
the average velocity value is then: v = L / T.
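Steps 606-607 above (path length, average speed, and speed limiting) can be sketched as follows; the speed limit values and the function name are hypothetical placeholders:

```python
import math

SPEED_MIN = 0.5   # hypothetical lower speed limit, m/s
SPEED_MAX = 3.0   # hypothetical upper speed limit, m/s
REFRESH_T = 0.5   # seconds between positioning refreshes (T)

def clamped_speed(waypoints, refresh_t=REFRESH_T):
    """Actual path length divided by refresh time, clamped to the limits.

    `waypoints` is the ordered path point set of step 606: start point,
    all inflection points, and end point.
    """
    # L = sum of straight-line distances between adjacent points
    length = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    v = length / refresh_t          # average speed v = L / T
    return min(max(v, SPEED_MIN), SPEED_MAX)
```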
The distance D1 between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates in steps 604 and 605 is calculated as:
D1 = sqrt((x1 - x0)^2 + (y1 - y0)^2 + (z1 - z0)^2)
where V0 = (x0, y0, z0) is the current position of the three-dimensional character model, obtained first, in three-dimensional coordinate form, and V1 = (x1, y1, z1) is the navigation grid positioning coordinate obtained in the preceding step.
The movement threshold is 1 meter, the synchronization distance threshold is 10 meters, and the synchronization tolerance threshold is 5 times.
A fusion system for personnel positioning and three-dimensional visualization comprises the following modules: a first module for building a three-dimensional model; a second module for preprocessing the navigation grid; a third module for preprocessing the coordinate system; a fourth module for receiving personnel positioning data; a fifth module for reading personnel positioning data; and a sixth module for processing personnel positioning data.
The first module is further used for establishing a three-dimensional scene model and a three-dimensional character model. The second module is further used, when preprocessing the navigation grid, for importing the three-dimensional scene model and the three-dimensional character model into the 3D software and baking the three-dimensional scene model with the navigation grid function of the 3D software to generate navigation grid data. The third module is further used, when preprocessing the coordinate system, for setting up the personnel positioning data, comprising: personnel basic information, a positioning card number, an area number, original positioning coordinates, and a timestamp, with the positioning card number set as the unique identifier representing a located person. The area number represents the area where the person is located; each area corresponds to a two-dimensional map in which any point can be represented by coordinates in X and Y form, and the original positioning coordinates represent the person's specific position within the area. In the three-dimensional scene model, a coordinate origin is set for each positioning area, with the origin position and coordinate-axis directions kept consistent with the two-dimensional map. The fourth module is further used for receiving personnel positioning data: the data is acquired through a communication protocol and accompanying logic and stored in a buffer pool located in the fourth module. The communication protocol is WebSocket, and the personnel positioning data is updated every 0.1 second; after the client establishes a communication connection with the server, the server actively pushes the personnel positioning data to the client, the client distinguishes the data by positioning card number upon receipt, and each positioning card number keeps only the latest record in the buffer pool.
The fifth module is further configured to read personnel positioning data: reading the latest personnel positioning data of all the positioning cards from the buffer pool every 0.5 second, and sequentially carrying out personnel positioning processing;
the sixth module is further used for processing personnel positioning data, and specifically comprises the following steps:
step A, real-time processing
When processing personnel positioning data, whether the data is valid is judged first. Taking a limit of 60 seconds as an example: if the timestamp is more than 60 seconds before the current time, no data has been received from that positioning card within one minute, so the data is considered expired and invalid, the corresponding three-dimensional character model is not displayed, and data processing ends; if the timestamp is within 60 seconds of the current time, the data is considered valid and processing proceeds to step B;
Step B, mapping the original positioning coordinates to world positioning coordinates
The area number and the original positioning coordinate of the personnel positioning data jointly determine a positioning position, a set coordinate origin is found in the three-dimensional scene model according to the area number, the original positioning coordinate is converted into a 3D world coordinate from a coordinate origin coordinate system, the converted coordinate is the world positioning coordinate, and the next step C is carried out;
step C, mapping the world positioning coordinates to navigation grid positioning coordinates
Taking the world positioning coordinates as the sphere center, the navigation grid is searched within a radius of 2-3 meters. If no navigation grid is found, the data is considered invalid and data processing ends; otherwise, the coordinates of the nearest navigation grid point found are the navigation grid positioning coordinates, and processing proceeds to step D;
step D, positioning data small-range fluctuation filtering
Calculating the distance between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates; if the distance does not exceed the movement threshold, the data is considered invalid, caused by small-range fluctuation, and data processing ends; if the distance exceeds the movement threshold, the data is considered valid, movement is required, and processing proceeds to step E;
step E, judging whether the position of the three-dimensional character model needs to be synchronized immediately
This step tracks whether the navigation grid positioning coordinates have been far from the three-dimensional character model's current position several times in succession. The distance between the model's current coordinates and the navigation grid positioning coordinates is calculated. If the distance does not exceed the synchronization distance threshold, the synchronization tolerance counter is reset and processing proceeds to step F. If the distance exceeds the threshold, the counter is incremented by 1 and compared with the synchronization tolerance threshold: if the counter exceeds the threshold, it is reset, the position of the three-dimensional character model is immediately synchronized to the navigation grid positioning coordinates, and data processing ends; otherwise the counter keeps its new value and processing proceeds to step F;
step F, calculating a path point set
Setting a target point of the three-dimensional character model as a navigation grid positioning coordinate, calculating a path point set from the current position of the three-dimensional character model to the target point after setting, wherein the set comprises a starting point, an end point and all inflection points, and is arranged according to the sequence from the starting point to the end point;
step G, calculating the limited speed value of the three-dimensional character model
Calculating and accumulating the straight-line distance between two adjacent points in the path point set to obtain the actual path length, and dividing the actual path length by the time for refreshing the personnel positioning data by the system to obtain an average speed value;
the average velocity value is calculated as follows:
all waypoint coordinates are noted as: P0, P1, P2, ..., Pn;
the distance between two adjacent points is noted as: D(i,i+1) = |P(i+1) - Pi|;
the actual path length is noted as: L = D(0,1) + D(1,2) + ... + D(n-1,n);
the time in which the system refreshes the personnel positioning data is noted as: T;
the average velocity value is then: v = L / T.
And taking the limiting speed based on the calculated average speed: if the average speed value is smaller than the lower speed limit value, taking the lower speed limit value; if the average speed value is greater than the upper speed limit value, taking the upper speed limit value;
taking the limited speed value as the actual speed of the three-dimensional character model, and ending the complete processing flow of the personnel positioning data of one positioning card;
the calculation formula of the distance D1 between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates is:
D1 = sqrt((x1 - x0)^2 + (y1 - y0)^2 + (z1 - z0)^2)
where V0 = (x0, y0, z0) is the current position of the three-dimensional character model, obtained first, in three-dimensional coordinate form, and V1 = (x1, y1, z1) is the navigation grid positioning coordinate obtained in the preceding step;
the movement threshold is 1 meter, the synchronization distance threshold is 10 meters, and the synchronization tolerance threshold is 5 times.

Claims (10)

1. A fusion method of personnel positioning and three-dimensional visualization is characterized by comprising the following steps:
step one, establishing a three-dimensional model: the system comprises a scene model and a character model; establishing a three-dimensional scene model, and establishing a three-dimensional character model containing walking animation;
step two, navigation grid preprocessing;
step three, coordinate system preprocessing: in the three-dimensional scene model, a coordinate origin and an area number are set for each positioning area, so that the origin position and the x and y coordinate-axis directions are consistent with the two-dimensional map;
step four, receiving personnel positioning data: acquiring by adopting a communication protocol and logic and storing the personnel positioning data in a buffer pool;
step five, reading personnel positioning data: reading the latest personnel positioning data of all the positioning cards from the buffer pool every 0.5 second, and sequentially carrying out personnel positioning processing;
and step six, processing personnel positioning data.
2. The fusion method of personnel positioning and three-dimensional visualization according to claim 1, wherein the navigation grid preprocessing specifically comprises the following steps: importing the three-dimensional scene model and the three-dimensional character model of step one into 3D software, and baking the three-dimensional scene model by using the navigation grid function of the 3D software to generate navigation grid data.
3. The fusion method of personnel positioning and three-dimensional visualization according to claim 1, wherein the personnel positioning data specifically comprises: personnel basic information, a positioning card number, an area number, original positioning coordinates, and a timestamp; the positioning card number serves as a unique identifier representing a located person.
4. The fusion method of personnel positioning and three-dimensional visualization according to claim 1, wherein the communication protocol is WebSocket, the personnel positioning data is updated every 0.1 second, and after the client establishes a communication connection with the server, the server actively pushes the personnel positioning data to the client; upon receiving the data, the client distinguishes it by positioning card number, and each positioning card number retains only the latest data in the buffer pool.
5. The fusion method of personnel positioning and three-dimensional visualization according to claim 1, wherein step six specifically comprises the following steps:
step 601, real-time processing
When processing personnel positioning data, whether the data is valid is judged first. Taking a limit of 60 seconds as an example: if the timestamp is more than 60 seconds before the current time, no data has been received from that positioning card within one minute, so the data is considered expired and invalid, the corresponding three-dimensional character model is not displayed, and data processing ends; if the timestamp is within 60 seconds of the current time, the data is considered valid and processing proceeds to step 602;
step 602, mapping the original positioning coordinates to world positioning coordinates
The area number and the original positioning coordinate of the personnel positioning data jointly determine a positioning position, a set coordinate origin is found in the three-dimensional scene model according to the area number, the original positioning coordinate is converted into a 3D world coordinate from a coordinate origin coordinate system, the converted coordinate is the world positioning coordinate, and the next step 603 is carried out;
step 603, mapping the world positioning coordinates to navigation grid positioning coordinates
Taking the world positioning coordinates as the sphere center, the navigation grid is searched within a radius of 2-3 meters. If no navigation grid is found, the data is considered invalid and data processing ends; otherwise, the coordinates of the nearest navigation grid point found are the navigation grid positioning coordinates, and processing proceeds to step 604;
step 604, positioning data small-range fluctuation filtering
Calculating the distance between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates; if the distance does not exceed the movement threshold, the data is considered invalid, caused by small-range fluctuation, and data processing ends; if the distance exceeds the movement threshold, the data is considered valid, movement is required, and processing proceeds to step 605;
step 605, determine if the position of the three-dimensional character model needs to be synchronized immediately
This step tracks whether the navigation grid positioning coordinates have been far from the three-dimensional character model's current position several times in succession. The distance between the model's current coordinates and the navigation grid positioning coordinates is calculated. If the distance does not exceed the synchronization distance threshold, the synchronization tolerance counter is reset and processing proceeds to step 606. If the distance exceeds the threshold, the counter is incremented by 1 and compared with the synchronization tolerance threshold: if the counter exceeds the threshold, it is reset, the position of the three-dimensional character model is immediately synchronized to the navigation grid positioning coordinates, and data processing ends; otherwise the counter keeps its new value and processing proceeds to step 606;
step 606, calculate the set of waypoints
Setting a target point of the three-dimensional character model as a navigation grid positioning coordinate, calculating a path point set from the current position of the three-dimensional character model to the target point after setting, wherein the set comprises a starting point, an end point and all inflection points, and is arranged according to the sequence from the starting point to the end point;
step 607, calculating the defined velocity values of the three-dimensional character model
Calculating and accumulating the straight-line distance between two adjacent points in the path point set to obtain the actual path length, and dividing the actual path length by the time for refreshing the personnel positioning data by the system to obtain an average speed value;
and taking the limiting speed based on the calculated average speed: if the average speed value is smaller than the lower speed limit value, taking the lower speed limit value; if the average speed value is greater than the upper speed limit value, taking the upper speed limit value;
and taking the limited speed value as the actual speed of the three-dimensional character model, and ending the complete processing flow of the personnel positioning data of one positioning card.
6. The fusion method of personnel positioning and three-dimensional visualization according to claim 5, wherein the distance D1 between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates in steps 604 and 605 is calculated as follows:
D1 = sqrt((x1 - x0)^2 + (y1 - y0)^2 + (z1 - z0)^2)
wherein V0 = (x0, y0, z0) is the current position of the three-dimensional character model, obtained first, in three-dimensional coordinate form, and V1 = (x1, y1, z1) is the navigation grid positioning coordinate obtained in the preceding step;
the movement threshold is 1 meter, the synchronization distance threshold is 10 meters, and the synchronization tolerance threshold is 5 times.
7. The fusion method of personnel positioning and three-dimensional visualization according to claim 6, wherein the average velocity value in step 607 is calculated as follows:
all waypoint coordinates are noted as: P0, P1, P2, ..., Pn;
the distance between two adjacent points is noted as: D(i,i+1) = |P(i+1) - Pi|;
the actual path length is noted as: L = D(0,1) + D(1,2) + ... + D(n-1,n);
the time in which the system refreshes the personnel positioning data is noted as: T;
the average velocity value is then: v = L / T.
8. A fusion system of personnel positioning and three-dimensional visualization, characterized by comprising the following modules:
a first module for building a three-dimensional model;
a second module for pre-processing the navigation grid;
a third module for preprocessing the coordinate system;
a fourth module for receiving people positioning data;
a fifth module for reading personnel positioning data;
a sixth module for processing the personnel positioning data.
9. The fusion system of personnel positioning and three-dimensional visualization according to claim 8, wherein
the first module is further used for establishing a three-dimensional scene model and a three-dimensional character model;
the second module is further used for guiding the three-dimensional scene model and the three-dimensional character model into the 3D software when the navigation grid is preprocessed, and baking the three-dimensional scene model by using the navigation grid function of the 3D software to generate navigation grid data;
the third module is further used for representing areas where the personnel are located by the area numbers when the coordinate system is preprocessed, each area corresponds to a two-dimensional map, any point in the two-dimensional map can be represented by coordinates in an X and Y form, and the original positioning coordinates represent the coordinates of the specific position where the personnel are located in the area; in the three-dimensional scene model, setting a coordinate origin for each positioning area, and keeping the position of the origin and the direction of a coordinate axis consistent with that of the two-dimensional map;
the fourth module is further used for receiving personnel positioning data: the data is acquired through a communication protocol and accompanying logic and stored in a buffer pool located in the fourth module; the communication protocol is WebSocket, the personnel positioning data is updated every 0.1 second, after the client establishes a communication connection with the server, the server actively pushes the personnel positioning data to the client, the client distinguishes the data by positioning card number upon receipt, and each positioning card number keeps only the latest data in the buffer pool.
10. The fusion system of personnel positioning and three-dimensional visualization according to claim 8, wherein the fifth module is further configured to read personnel positioning data: reading the latest personnel positioning data of all the positioning cards from the buffer pool every 0.5 second, and sequentially carrying out personnel positioning processing;
the sixth module is further used for processing personnel positioning data, and specifically comprises the following steps:
step A, real-time processing
When processing personnel positioning data, whether the data is valid is judged first. Taking a limit of 60 seconds as an example: if the timestamp is more than 60 seconds before the current time, no data has been received from that positioning card within one minute, so the data is considered expired and invalid, the corresponding three-dimensional character model is not displayed, and data processing ends; if the timestamp is within 60 seconds of the current time, the data is considered valid and processing proceeds to step B;
Step B, mapping the original positioning coordinates to world positioning coordinates
The area number and the original positioning coordinate of the personnel positioning data jointly determine a positioning position, a set coordinate origin is found in the three-dimensional scene model according to the area number, the original positioning coordinate is converted into a 3D world coordinate from a coordinate origin coordinate system, the converted coordinate is the world positioning coordinate, and the next step C is carried out;
step C, mapping the world positioning coordinates to navigation grid positioning coordinates
Taking the world positioning coordinates as the sphere center, the navigation grid is searched within a radius of 2-3 meters. If no navigation grid is found, the data is considered invalid and data processing ends; otherwise, the coordinates of the nearest navigation grid point found are the navigation grid positioning coordinates, and processing proceeds to step D;
step D, positioning data small-range fluctuation filtering
Calculating the distance between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates; if the distance does not exceed the movement threshold, the data is considered invalid, caused by small-range fluctuation, and data processing ends; if the distance exceeds the movement threshold, the data is considered valid, movement is required, and processing proceeds to step E;
step E, judging whether the position of the three-dimensional character model needs to be synchronized immediately
This step tracks whether the navigation grid positioning coordinates have been far from the three-dimensional character model's current position several times in succession. The distance between the model's current coordinates and the navigation grid positioning coordinates is calculated. If the distance does not exceed the synchronization distance threshold, the synchronization tolerance counter is reset and processing proceeds to step F. If the distance exceeds the threshold, the counter is incremented by 1 and compared with the synchronization tolerance threshold: if the counter exceeds the threshold, it is reset, the position of the three-dimensional character model is immediately synchronized to the navigation grid positioning coordinates, and data processing ends; otherwise the counter keeps its new value and processing proceeds to step F;
step F, calculating a path point set
Setting a target point of the three-dimensional character model as a navigation grid positioning coordinate, calculating a path point set from the current position of the three-dimensional character model to the target point after setting, wherein the set comprises a starting point, an end point and all inflection points, and is arranged according to the sequence from the starting point to the end point;
step G, calculating the limited speed value of the three-dimensional character model
Calculating and accumulating the straight-line distance between two adjacent points in the path point set to obtain the actual path length, and dividing the actual path length by the time for refreshing the personnel positioning data by the system to obtain an average speed value;
the average velocity value is calculated as follows:
all waypoint coordinates are noted as: P0, P1, P2, ..., Pn;
the distance between two adjacent points is noted as: D(i,i+1) = |P(i+1) - Pi|;
the actual path length is noted as: L = D(0,1) + D(1,2) + ... + D(n-1,n);
the time in which the system refreshes the personnel positioning data is noted as: T;
the average velocity value is then: v = L / T;
And taking the limiting speed based on the calculated average speed: if the average speed value is smaller than the lower speed limit value, taking the lower speed limit value; if the average speed value is greater than the upper speed limit value, taking the upper speed limit value;
taking the limited speed value as the actual speed of the three-dimensional character model, and ending the complete processing flow of the personnel positioning data of one positioning card;
the calculation formula of the distance D1 between the current coordinates of the three-dimensional character model and the navigation grid positioning coordinates is:
D1 = sqrt((x1 - x0)^2 + (y1 - y0)^2 + (z1 - z0)^2)
wherein V0 = (x0, y0, z0) is the current position of the three-dimensional character model, obtained first, in three-dimensional coordinate form, and V1 = (x1, y1, z1) is the navigation grid positioning coordinate obtained in the preceding step;
the movement threshold is 1 meter, the synchronization distance threshold is 10 meters, and the synchronization tolerance threshold is 5 times.
CN201911302148.XA 2019-12-17 2019-12-17 Fusion method and system for personnel positioning and three-dimensional visualization Withdrawn CN111028344A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911302148.XA CN111028344A (en) 2019-12-17 2019-12-17 Fusion method and system for personnel positioning and three-dimensional visualization


Publications (1)

Publication Number Publication Date
CN111028344A true CN111028344A (en) 2020-04-17

Family

ID=70209359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911302148.XA Withdrawn CN111028344A (en) 2019-12-17 2019-12-17 Fusion method and system for personnel positioning and three-dimensional visualization

Country Status (1)

Country Link
CN (1) CN111028344A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114339601A (en) * 2020-10-09 2022-04-12 美的集团股份有限公司 UWB-based automatic network distribution method and device
CN114339601B (en) * 2020-10-09 2023-12-26 美的集团股份有限公司 Automatic network distribution method and device based on UWB

Similar Documents

Publication Publication Date Title
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
CN113065000B (en) Multisource heterogeneous data fusion method based on geographic entity
Nothegger et al. Selection of salient features for route directions
CN103472823B (en) A kind of grating map creating method of intelligent robot
CN110617821B (en) Positioning method, positioning device and storage medium
CN110361027A (en) Robot path planning method based on single line laser radar Yu binocular camera data fusion
EP2769183A2 (en) Three dimensional routing
CN109883418A (en) A kind of indoor orientation method and device
CN115388902B (en) Indoor positioning method and system, AR indoor positioning navigation method and system
CN111862214B (en) Computer equipment positioning method, device, computer equipment and storage medium
Mueller et al. GIS-based topological robot localization through LIDAR crossroad detection
CN116518960B (en) Road network updating method, device, electronic equipment and storage medium
CN112799096A (en) Map construction method based on low-cost vehicle-mounted two-dimensional laser radar
CN110136174A (en) A kind of target object tracking and device
CN114547866A (en) Intelligent detection method for prefabricated part based on BIM-unmanned aerial vehicle-mechanical dog
WO2023060632A1 (en) Street view ground object multi-dimensional extraction method and system based on point cloud data
EP3825804A1 (en) Map construction method, apparatus, storage medium and electronic device
CN111028344A (en) Fusion method and system for personnel positioning and three-dimensional visualization
CN116698014A (en) Map fusion and splicing method based on multi-robot laser SLAM and visual SLAM
CN115493596A (en) Semantic map construction and navigation method for mobile robot
CN103335645A (en) Landscape-planning-oriented image acquisition method for accurate space positioning
CN116758269B (en) Position verification method
CN112651991B (en) Visual positioning method, device and computer system
CN116226298B (en) Automatic assessment method for map quality
CN115909183B (en) Monitoring system and monitoring method for external environment of fuel gas delivery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200417