CN111612904A - Position sensing system based on three-dimensional model image machine learning - Google Patents

Position sensing system based on three-dimensional model image machine learning

Info

Publication number
CN111612904A
Authority
CN
China
Prior art keywords
dimensional, image, server, positioning, wireless
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010380587.9A
Other languages
Chinese (zh)
Other versions
CN111612904B (en)
Inventor
刘毅 (Liu Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Mining and Technology Beijing CUMTB
Original Assignee
China University of Mining and Technology Beijing CUMTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Mining and Technology Beijing (CUMTB)
Priority to CN202010380587.9A
Publication of CN111612904A
Application granted
Publication of CN111612904B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/025: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP], for remote control or remote monitoring of applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/52: Network services specially adapted for the location of the user terminal
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 64/00: Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Geometry (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a position sensing system based on machine learning over three-dimensional model images, which achieves accurate three-dimensional positioning, including target orientation, in closed spaces such as coal mines. The position sensing system comprises a three-dimensional GIS server, an image training server, a positioning server, a wireless camera and wireless access equipment. The image training server performs machine learning using sets of two-dimensional images rendered by the three-dimensional GIS server as classification training samples, obtains an image classification and recognition model, and sends the model to the positioning server. The positioning server senses position by classifying the images collected by the wireless camera. The target to be positioned does not need to carry a dedicated wireless positioning card; it need only carry a camera with a wireless communication function to achieve accurate three-dimensional positioning, including target orientation. The invention can be widely applied to accurate positioning in closed spaces such as schools, hospitals, factories, coal mines and subways, and has broad scope for application and popularization.

Description

Position sensing system based on three-dimensional model image machine learning
Technical Field
The invention relates to a position sensing system based on machine learning over three-dimensional model images, and concerns the fields of geographic information systems, three-dimensional image rendering, machine learning and communication.
Background
At present, positioning of moving targets in open space relies mainly on satellite positioning, including the GPS, GLONASS, Galileo and BeiDou systems; in addition, positioning technologies based on cellular mobile communication systems, Wi-Fi networks and wireless sensor networks are also applied. However, cellular positioning is generally used only for the service management of communication operators and does not directly provide personal positioning services to users, while positioning based on Wi-Fi networks and wireless sensor networks can only cover a local area. Satellite positioning therefore remains the technology people mainly rely on in daily life, but it requires an unobstructed view of the sky, which greatly limits its conditions of application; moreover, when a satellite fails or the satellite positioning service is switched off, positioning becomes impossible, with serious consequences.
Satellite signals cannot be received in closed spaces such as building interiors, tunnels, subway stations and coal mines, so satellite positioning cannot be used there. Early closed-space positioning mostly adopted RFID card identification and radio-signal positioning technologies. RFID card identification uses radio frequency for contactless bidirectional communication, so that the radio-frequency card and the card reader can identify and monitor the position of a moving target without contact. However, RFID identification and positioning is an area-positioning technology: it can only determine whether an underground moving target has passed through a certain area, and cannot accurately position the target within that area. Radio-signal positioning is based either on the attenuation (RSSI) of the transmitted signal or on its transmission time through space. Because radio signals are easily affected during propagation by factors such as the size and shape of the underground space, the roughness of its walls, and obstacles, the radio-signal attenuation model is extremely complex and positioning errors exceed 10 m. Positioning systems based on radio-signal transmission time are more accurate than RSSI systems, but transmission time is affected by multipath effects, non-line-of-sight propagation delay, clock synchronization and clock timing errors, so positioning accuracy is hard to guarantee in environments with many obstacles. In addition, existing positioning methods such as RFID, RSSI, time of arrival (TOA) and time difference of arrival (TDOA) are all based on the ranging principle and cannot determine the orientation of a stationary target.
Therefore, what is needed is a positioning system for closed-space environments such as coal mines that is simple and effective, low in construction cost and high in positioning accuracy, and that can achieve accurate three-dimensional positioning, including the orientation of a stationary target.
Disclosure of Invention
With the development of GIS systems and of three-dimensional modeling and rendering technologies, the application of three-dimensional GIS technologies and systems has been greatly promoted. Combining wireless positioning, three-dimensional GIS and digital image processing technologies, the invention provides a position sensing system based on machine learning over three-dimensional model images, which achieves accurate three-dimensional positioning and target-orientation positioning in a local space. The position sensing system comprises a three-dimensional GIS server, an image training server, a positioning server, a wireless camera and wireless access equipment. The wireless camera is carried by the mobile target and accesses the communication network through fixedly installed wireless access equipment. The three-dimensional GIS server stores the three-dimensional geographic information of a set space and can render two-dimensional images of the space from these data. The image training server performs machine learning region by region, using the sets of two-dimensional images rendered by the three-dimensional GIS server as classification training samples, and transmits the image classification and recognition models of all regions obtained by learning to the positioning server. The positioning server classifies and recognizes the images collected by the wireless camera using these models and senses position from the recognition result.
The machine learning steps of the image training server are as follows:
(1) retrieve the three-dimensional geographic information of the set space stored by the three-dimensional GIS server, divide the set space into several spatial regions, determine several viewpoints in each region, and determine several view angles at each viewpoint;
(2) through the three-dimensional GIS server, render several spatial two-dimensional images in each view-angle direction of each viewpoint determined in step (1);
(3) taking a region determined in step (1) as the unit, treat each view angle of each viewpoint in the region as a class, use the spatial two-dimensional images obtained in step (2) for that view angle as the machine learning samples of the corresponding class, feed them into a machine learning network for training, and obtain the image classification and recognition model of the region;
(4) repeat the training process of step (3) until the image classification and recognition models of all regions are obtained;
(5) transmit the image classification and recognition models of all regions, together with the viewpoint position data and view-angle direction data corresponding to every class, to the positioning server.
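Step (1) amounts to enumerating a (viewpoint, view angle) pair per class. The following is a minimal sketch of that enumeration for one rectangular region; all names and the region extents are illustrative, not taken from the patent:

```python
from itertools import product

def enumerate_classes(region_x, region_y, m, theta):
    """Enumerate the (viewpoint, view-angle) classes of one region.

    region_x, region_y: (min, max) extents of the region
    m:     grid spacing, i.e. the set positioning accuracy
    theta: angular step, i.e. the set positioning-angle accuracy (degrees)
    Returns a dict mapping class id -> ((x, y), alpha), so that each class
    corresponds to one rendering pose for the three-dimensional GIS server.
    """
    xs = [region_x[0] + i * m for i in range(int((region_x[1] - region_x[0]) / m) + 1)]
    ys = [region_y[0] + j * m for j in range(int((region_y[1] - region_y[0]) / m) + 1)]
    alphas = [i * theta for i in range(int(360 / theta))]  # 0 deg = true north
    classes = {}
    for cid, (x, y, a) in enumerate(product(xs, ys, alphas)):
        classes[cid] = ((x, y), a)
    return classes

# 10 m x 10 m region, 5 m accuracy, 90-degree angular step:
classes = enumerate_classes((0.0, 10.0), (0.0, 10.0), m=5.0, theta=90.0)
```

With these parameters the region yields a 3 x 3 viewpoint grid and 4 headings per viewpoint, i.e. 36 classes, each of which step (2) would populate with rendered training images.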
The position sensing steps of the position sensing system are specifically as follows:
(1) the wireless camera and the wireless access equipment perform handshake communication; the positioning server determines the distance d between them from the signal strength or signal flight time of their wireless communication, and determines the two-dimensional position area S of the wireless camera with reference to the known position of the fixed wireless access equipment;
(2) the wireless camera collects an environment image and transmits it to the positioning server;
(3) the positioning server processes the image received in step (2);
(4) the positioning server determines, from the two-dimensional position area S obtained in step (1), the machine-learning unit region in which the wireless camera is located, and sends the image obtained in step (3) to the image classification and recognition model of that region for classification, obtaining the class and its confidence; if the viewpoint position of the class lies in the two-dimensional position area S and the confidence exceeds a set threshold, classification is judged successful and step (5) is executed; otherwise classification is judged to have failed and the process returns to step (1);
(5) the positioning server takes the viewpoint position and view-angle direction data corresponding to the class obtained in step (4) as the position and orientation of the wireless camera.
1. The position sensing system is further characterized in that: each spatial two-dimensional image rendered in step (2) of the image training server's machine learning has a viewpoint position deviation or a view-direction deviation within a set range.
2. The position sensing system is further characterized in that: the three-dimensional geographic information stored by the three-dimensional GIS server comprises model data of the appearance shape and size, surface material, surface color and surface identification of all fixed objects in the system's position sensing area.
3. The position sensing system is further characterized in that: the two-dimensional position area S of the wireless camera in step (1) of the system's position sensing is, on the horizontal plane of height h, the annular area S1 between the circles of radius d−E and d+E centred on the wireless access equipment's position (x1, y1), i.e. the area satisfying
(d−E)² ≤ (x−x1)² + (y−y1)² ≤ (d+E)²;
wherein E is the wireless-communication ranging error between the wireless camera and the wireless access equipment, and h is the set installation height of the wireless camera.
4. The position sensing system is further characterized in that: for an environment in which more than one wireless access device is fixedly installed within the communication range of the wireless camera, the two-dimensional position area S of the wireless camera is the intersection of the areas Si determined as in item 3 for each wireless access device; with n wireless access devices, S = S1 ∩ S2 ∩ … ∩ Sn.
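The intersection S = S1 ∩ S2 ∩ … ∩ Sn reduces to a simple membership test: a point belongs to S only if it lies within the measured annulus of every access device. A minimal sketch (names are illustrative; a shared error bound E is assumed, though per-device bounds would work the same way):

```python
import math

def in_area_S(p, access_points, E):
    """Membership in S = S1 ∩ S2 ∩ ... ∩ Sn: p must lie in the annulus
    S_i of every access device i.

    access_points: list of ((x_i, y_i), d_i) with the measured distance d_i
    E: ranging error bound shared by all devices (an assumption)."""
    return all(
        d - E <= math.dist(p, xy) <= d + E
        for xy, d in access_points
    )

# Two devices 8 m apart, both measuring d = 5 m: only the two points near
# the annulus crossings survive the intersection.
aps = [((0.0, 0.0), 5.0), ((8.0, 0.0), 5.0)]
```

For instance, (4, 3) is 5 m from both devices and lies in S, while (0, 5) matches only the first device's range and is excluded.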
5. The position sensing system is further characterized in that: the viewpoint positions in step (1) of the image training server's machine learning are the intersection points of square grid lines, with the set positioning accuracy m as side length, on the horizontal plane of the set height h.
6. The position sensing system is further characterized in that: the view-angle directions in step (1) of the image training server's machine learning are, on the horizontal plane of the set height, the directions at an angle α = i·θ from true north (0°), i = 0, 1, 2, …, n, n = 360°/θ − 1, n an integer, where θ is the set positioning-angle accuracy; the view-angle directions further include directions at an angle β = ±j·θ from the horizontal plane of the set height, |β| < 90°, j = 1, 2, …, k, k < 90°/θ, whose projection onto the horizontal plane is the view direction α.
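The angle grid of item 6 can be generated directly; the sketch below is illustrative (function name invented) and reproduces the θ = 45°, k = 1 example used later in the description:

```python
def view_angles(theta, k):
    """Angle grid of item 6: horizontal headings alpha = i*theta
    (0 deg = true north) and elevations beta = +/- j*theta with
    |beta| < 90 deg; each elevated direction projects onto some alpha.

    theta: positioning-angle accuracy in degrees (must divide 360)
    k:     number of elevation steps above/below the horizontal plane
    """
    alphas = [i * theta for i in range(int(360 / theta))]
    betas = [s * j * theta
             for j in range(1, k + 1) if j * theta < 90
             for s in (1, -1)]
    return alphas, betas

alphas, betas = view_angles(theta=45, k=1)
# theta = 45 deg gives 8 headings; k = 1 adds elevations +45 and -45 deg.
```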
7. The position sensing system is further characterized in that: in step (4) of the system's position sensing, when the two-dimensional position area S spans several unit regions, the image obtained in step (3) is sent to the image classification and recognition models of all regions concerned, several classes and their confidences are obtained, and whether classification succeeded is judged from the class with the highest confidence.
8. The position sensing system is further characterized in that: the image processing methods in step (3) of the system's position sensing comprise: filtering, edge enhancement, brightness adjustment, contrast adjustment, hue adjustment and saturation adjustment.
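The operations listed in item 8 are standard image preprocessing steps. As a minimal, dependency-free illustration of one of them (not the patent's implementation), here is a k x k mean (smoothing) filter over a grayscale image stored as a list of lists:

```python
def mean_filter(img, ksize=3):
    """A plain k x k mean filter, one possible form of the 'filtering'
    step. Edge pixels are copied through unfiltered for brevity."""
    h, w, r = len(img), len(img[0]), ksize // 2
    out = [row[:] for row in img]          # start from a copy
    for y in range(r, h - r):
        for x in range(r, w - r):
            s = sum(img[y + dy][x + dx]
                    for dy in range(-r, r + 1)
                    for dx in range(-r, r + 1))
            out[y][x] = s / ksize ** 2     # window average
    return out

# A single bright pixel is spread over its 3 x 3 neighbourhood's centre.
smoothed = mean_filter([[0, 0, 0], [0, 9, 0], [0, 0, 0]])
```

In practice the positioning server would apply such operations with an image library; the sketch only shows the arithmetic of the filtering step.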
The invention achieves the following beneficial effects. By combining wireless positioning, three-dimensional GIS and digital image processing with machine learning to sense position, the target to be positioned does not need to carry a dedicated wireless positioning card: it need only carry a camera with a wireless communication function, such as a smartphone, tablet or other intelligent handheld device, to achieve accurate three-dimensional positioning including target orientation. The system is simple in structure and easy to implement, can be widely applied to accurate positioning in local spaces such as schools, hospitals, factories, coal mines and subways, and has broad scope for application and popularization.
Drawings
FIG. 1 is a schematic diagram of a location-aware system based on three-dimensional model image machine learning.
Fig. 2 is a schematic diagram of a machine learning process of an image training server.
Fig. 3 is an example of viewpoint positions for rendering two-dimensional images.
Fig. 4 is an example of view-angle directions for rendering two-dimensional images.
Fig. 5 is an example of three-dimensional view-angle directions for rendering two-dimensional images.
FIG. 6 is a schematic diagram of a position sensing process of a position sensing system based on three-dimensional model image machine learning.
Fig. 7 is a schematic diagram of a two-dimensional location area of a wireless camera of a single wireless access device.
Fig. 8 is a schematic diagram of two-dimensional location areas of wireless cameras of two wireless access devices.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic diagram of an indoor application of the position sensing system based on three-dimensional model image machine learning. As shown in fig. 1, the position sensing system in this embodiment includes:
1. The three-dimensional GIS server (101) stores three-dimensional geographic information, generally using a global positioning system as the coordinate system; the stored data comprise model data of the appearance shape and size, surface material, surface color and surface identification of all fixed objects in the position sensing area. The three-dimensional GIS server also stores the position information of the wireless access equipment. It is further provided with a three-dimensional modeling and rendering engine for rendering two-dimensional images of the position sensing area from the model data; this engine may adopt the data-processing core engine of three-dimensional modeling software such as 3ds Max.
2. The image training server (102) performs machine learning region by region, using the sets of two-dimensional images rendered by the three-dimensional GIS server as classification training samples, and transmits the image classification and recognition models of all regions obtained by learning to the positioning server.
3. The positioning server (103) classifies the images acquired by the wireless camera using the image classification and recognition models generated by the image training server, determines the position and orientation of the wireless camera from the classification result, and thereby senses position. The positioning server also ranges the wireless camera: it acquires the communication signal strength or flight-time data between the wireless access equipment and the wireless camera from the wireless network management equipment (105), processes these data to obtain the distance between the wireless camera and the wireless access equipment, and determines the two-dimensional position area of the wireless camera using the position data of the wireless access equipment provided by the three-dimensional GIS server.
4. The monitoring terminal (104) is mainly responsible for the device management of the three-dimensional GIS server, image training server, positioning server and wireless network management equipment; administrators can access these devices through the monitoring terminal and manage their parameters and stored data, for example accessing the three-dimensional GIS server to add, delete and modify its geographic information data. Depending on the application requirements of the system, moving-target monitoring software can be installed on the monitoring terminal to display the three-dimensional map and the images collected by the wireless camera and to monitor the position of the moving target.
5. The wireless network management equipment (105) is used for the unified management of the wireless access devices and can collect the communication signal strength or flight-time data between the wireless access devices and the wireless camera.
6. The wireless access device (106), the access equipment of the wireless network, is responsible for the wireless network access of wireless communication devices including the wireless camera (107), and can monitor the signal strength or signal flight time of the wireless camera's communication.
7. The wireless camera (107) is a video and image capturing device with wireless communication capability, used to capture images of the environment and transmit them to the positioning server. Persons needing the position sensing service can also use an intelligent device with a wireless communication function, such as a mobile phone or tablet, as the wireless camera; with dedicated navigation software installed, such a device can display the three-dimensional map and its own position.
The machine learning process of the image training server is shown in fig. 2 and comprises:
(201) retrieving the three-dimensional geographic information of the set space stored by the three-dimensional GIS server, dividing the set space into several spatial regions, determining several viewpoints in each region, and determining several view angles at each viewpoint;
(202) controlling the three-dimensional GIS server to render a spatial two-dimensional image at a viewpoint and view angle determined in (201), and collecting and storing the image;
(203) randomly moving the viewpoint position and view-angle direction within the set position and direction ranges, controlling the three-dimensional GIS server to render a spatial two-dimensional image at the moved viewpoint and view angle, and collecting and storing it;
(204) determining whether the set number of class training samples has been reached; if so, executing (205), otherwise returning to (203);
(205) judging whether rendering and storage of the spatial two-dimensional images for all view angles of all viewpoints is finished; if so, executing (206), otherwise returning to (203) to continue rendering and storage;
(206) taking a region determined in (201) as the unit, treating each view angle of each viewpoint in the region as a class, using the images obtained in (202) and (203) for each view angle of each viewpoint as the machine learning samples of the corresponding class, feeding them into a machine learning network for training, and obtaining the image classification and recognition model of the region; in this example, the machine learning network employs a VGG convolutional neural network;
(207) judging whether machine learning has finished for all regions; if so, executing (208); if not, returning to (206) to perform machine learning on the unfinished regions;
(208) transmitting the image classification and recognition models of all regions, together with the viewpoint position data and view-direction data corresponding to all classes, to the positioning server.
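Steps (202)-(204), rendering one exact-pose image plus randomly jittered ones per class, can be sketched as follows. All names are illustrative, and `render` stands in for the three-dimensional GIS server's rendering call:

```python
import random

def jitter_pose(viewpoint, alpha, pos_range, ang_range, rng=random):
    """Step (203): randomly offset the nominal viewpoint and view angle
    within the set ranges before rendering, so that each class gets many
    slightly different training images."""
    x, y = viewpoint
    jx = x + rng.uniform(-pos_range, pos_range)
    jy = y + rng.uniform(-pos_range, pos_range)
    ja = (alpha + rng.uniform(-ang_range, ang_range)) % 360
    return (jx, jy), ja

def render_class_samples(render, viewpoint, alpha, n, pos_range, ang_range):
    """Steps (202)-(204): one exact-pose render plus n-1 jittered renders
    form the n training samples of one class."""
    samples = [render(viewpoint, alpha)]
    while len(samples) < n:
        samples.append(render(*jitter_pose(viewpoint, alpha, pos_range, ang_range)))
    return samples

# Stub renderer that just records the requested pose:
samples = render_class_samples(lambda vp, a: (vp, a),
                               (1.0, 2.0), 90.0, n=5,
                               pos_range=0.2, ang_range=10.0)
```

Every jittered pose stays within the set deviation ranges, matching item 1's requirement that each rendered image has a viewpoint or view-direction deviation within a set range.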
Fig. 3 is an example of viewpoint positions for rendering two-dimensional images. In this example, the set space stored by the three-dimensional GIS server is divided into four spatial regions of size d_X × d_Y; on the horizontal plane at height h, the intersection points of square grid lines with side length m are taken as the viewpoint positions for rendering two-dimensional images, where h is the set wireless-camera installation height and m is the set positioning accuracy.
Fig. 4 is an example of view-angle directions for rendering two-dimensional images. On the horizontal plane of the set wireless-camera installation height h, with the viewpoint as the origin of the view direction and true north as 0°, the directions at an angle α = i·θ from 0° are taken, i = 0, 1, 2, …, n, n = 360°/θ − 1, n an integer, where θ is the set positioning-angle accuracy. The example takes θ = 45°, so n = 7 and α = 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°.
Fig. 5 is a schematic view of rendering-image view-angle directions based on three-dimensional angles. As shown, in addition to the planar view-angle directions of fig. 4, view-angle directions at an angle β = ±j·θ with the horizontal plane of the wireless-camera installation height are added, |β| < 90°, j = 1, 2, …, k, k < 90°/θ, each projecting onto the horizontal plane as a direction α. The example takes k = 1.
The system position sensing process is shown in fig. 6 and comprises:
(601) the wireless camera performs ranging communication with the wireless access device;
(602) the wireless access device monitors the signal strength or signal flight time of the wireless camera's communication;
(603) the positioning server determines the distance d between the wireless camera and the wireless access device from the signal strength or signal flight time of their wireless communication, and determines the two-dimensional position area S of the wireless camera with reference to the known position of the fixed wireless access device;
(604) the wireless camera captures an environment image and transmits it to the positioning server;
(605) the positioning server processes the image received in (604); the processing methods comprise filtering, edge enhancement, brightness adjustment, contrast adjustment, hue adjustment and saturation adjustment;
(606) the positioning server sends the image obtained in (605) to the image classification and recognition model of the region for classification, obtaining the class and its confidence; when the two-dimensional position area S spans several unit regions, the image is sent to the image classification and recognition models of all regions concerned, the classification results and their confidences are obtained, and the class with the highest confidence is taken as the class to which the image belongs;
(607) the positioning server judges from the class obtained in (606), its confidence and the viewpoint position data corresponding to the class whether classification succeeded: if the confidence exceeds the set threshold and the viewpoint position of the class lies in the two-dimensional position area S, classification is judged successful and (608) is executed; otherwise classification is judged to have failed and the process returns to (601);
(608) the positioning server takes the viewpoint position data and view-angle direction data corresponding to the class obtained in (606) as the position and orientation of the wireless camera;
(609) the positioning server transmits the position and orientation of the wireless camera obtained in (608) to the monitoring terminal requiring the position monitoring service or the intelligent device requiring the position navigation service, and returns to (601).
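The (601)-(609) cycle can be sketched as a loop with each stage as a pluggable callable. This is a structural sketch only; all names are illustrative and the loop retries until a classification is accepted:

```python
def location_loop(rng_measure, capture, preprocess, classify, pose_of,
                  in_S, conf_min, publish):
    """Skeleton of the (601)-(609) cycle: range, capture, preprocess,
    classify, validate, publish; any failed validation restarts the cycle.

    pose_of maps a class id to ((x, y), heading) from training.
    Returns the accepted pose (and would normally keep looping)."""
    while True:
        d, S = rng_measure()                              # (601)-(603)
        img = preprocess(capture())                       # (604)-(605)
        cid, conf = classify(img, S)                      # (606)
        if conf > conf_min and in_S(pose_of[cid][0], S):  # (607)
            publish(*pose_of[cid])                        # (608)-(609)
            return pose_of[cid]

# Stub components that succeed on the first pass:
result = location_loop(
    rng_measure=lambda: (5.0, "S"),
    capture=lambda: "raw-image",
    preprocess=lambda im: im,
    classify=lambda im, S: (0, 0.9),
    pose_of={0: ((1.0, 2.0), 45.0)},
    in_S=lambda vp, S: True,
    conf_min=0.8,
    publish=lambda vp, a: None,
)
```

In a real deployment each stub would be replaced by the corresponding server component (ranging, camera transport, image pipeline, recognition model, map service).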
Fig. 7 is a schematic diagram of the two-dimensional location area of a wireless camera with a single wireless access device. As shown, the annular area S bounded by the two circles of radii d−E and d+E centered at the wireless access device location (x₁, y₁) satisfies

(d−E)² ≤ (x−x₁)² + (y−y₁)² ≤ (d+E)²

wherein E is the wireless communication ranging error between the wireless camera and the wireless access device.
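A point-membership test for the annular area S of Fig. 7, as a minimal sketch (the function name and the use of planar Euclidean distance are this sketch's assumptions):

```python
# A point (x, y) lies in the annular area S when its distance to the wireless
# access device at (x1, y1) is between d - E and d + E (d = measured distance,
# E = wireless ranging error).

import math

def in_annulus(x, y, x1, y1, d, E):
    r = math.hypot(x - x1, y - y1)   # horizontal distance to the access device
    return d - E <= r <= d + E
```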
Fig. 8 is a schematic diagram of the two-dimensional position area of a wireless camera based on a three-dimensional angle. As shown, in addition to the planar view angles shown in Fig. 7, directions having an angle β with the horizontal plane at the installation height of the wireless camera are added, each projected onto the horizontal plane as a direction α, where

β = j·θ, |β| < 90°, |j| = 1, 2, ……, k, k·θ < 90°.

The example takes k = 1.

Claims (11)

1. A position sensing system based on three-dimensional model image machine learning, characterized in that: the system comprises a three-dimensional GIS server, an image training server, a positioning server, a wireless camera and a wireless access device; the camera is a wireless camera carried by a mobile target and accesses a communication network through a fixedly installed wireless access device; the three-dimensional GIS server stores the three-dimensional geographic information of a set space and can render two-dimensional images of the space from these data; the image training server is responsible for performing machine learning, taking a spatial region as a unit and the two-dimensional image sets rendered by the three-dimensional GIS server as classification training samples, and transmitting the image classification recognition models of all regions obtained by learning to the positioning server; the positioning server is responsible for classifying and recognizing the images collected by the wireless camera according to the image recognition models and sensing the position according to the recognition result.
2. The location awareness system of claim 1, wherein: the machine learning steps of the image training server are as follows:
(1) calling three-dimensional geographic information in a set space stored by a three-dimensional GIS server, dividing the set space into a plurality of space areas, determining a plurality of viewpoints in each area, and determining a plurality of visual angles at each viewpoint;
(2) rendering a plurality of spatial two-dimensional images in each view angle direction of each view point determined in the step (1) through a three-dimensional GIS server;
(3) taking the region determined in the step (1) as a unit, taking each visual angle of each viewpoint in the region as a class, taking a plurality of spatial two-dimensional images obtained in the step (2) of each visual angle of each viewpoint as machine learning samples of the corresponding class, sending the machine learning samples into a machine learning network for training, and training to obtain an image classification and identification model of the region;
(4) repeating the training process of step (3) until image classification recognition models of all the regions are obtained;
(5) and transmitting the image classification identification models of all the areas, the viewpoint position data and the view angle direction data corresponding to all the classes to a positioning server.
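The training loop of the claim above can be sketched as follows, assuming two hypothetical helpers, render_view (the 3D GIS renderer of step (2)) and train_classifier (any supervised learner); neither name comes from the patent.

```python
# Hypothetical sketch of claim 2: for each region, render several images per
# (viewpoint, view angle) class and train one classifier per region.

def build_region_models(regions, render_view, train_classifier,
                        samples_per_class=5):
    models = {}          # region id -> trained image classification model
    class_meta = {}      # class id  -> (viewpoint, view angle) for step (5)
    for region_id, viewpoints in regions.items():
        samples, labels = [], []
        for vp, angles in viewpoints:
            for ang in angles:
                cls = (region_id, vp, ang)        # one class per viewpoint+angle
                class_meta[cls] = (vp, ang)
                for _ in range(samples_per_class):  # step (2): rendered images
                    samples.append(render_view(vp, ang))
                    labels.append(cls)
        models[region_id] = train_classifier(samples, labels)  # step (3)
    return models, class_meta
```

The returned class_meta plays the role of the viewpoint position and view angle direction data transmitted to the positioning server in step (5).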
3. The location awareness system of claim 1, wherein: the system position sensing steps specifically comprise:
(1) the wireless camera and the wireless access equipment perform handshake communication, the positioning server determines the distance d between the wireless camera and the wireless access equipment through the signal strength or signal flight time of wireless communication between the wireless camera and the wireless access equipment, and determines a two-dimensional position area S of the wireless camera by referring to the known position of the fixed wireless access equipment;
(2) the wireless camera collects an environment image and transmits the image to the positioning server;
(3) the positioning server processes the image received in the step (2);
(4) the positioning server determines the machine-learning unit area in which the wireless camera is located according to the two-dimensional position area S determined in step (1), and sends the image obtained in step (3) into the image classification recognition model of that area for classification recognition to obtain the class to which the image belongs and its confidence; if the viewpoint position of the class is in the two-dimensional position area S and the confidence is greater than a set threshold, the classification is judged successful and step (5) is executed; otherwise the classification is judged failed and the process returns to step (1);
(5) the positioning server obtains the position and direction of the wireless camera from the viewpoint position data and view angle direction data corresponding to the class obtained in step (4).
4. The location awareness system of claim 2, wherein: each spatial two-dimensional image rendered in step (2) of the machine learning of the image training server has a viewpoint position deviation or a view angle direction deviation within a set range.
5. The location awareness system of claim 1, wherein: the three-dimensional geographic information data stored by the three-dimensional GIS server comprises the model data of the appearance shape and size, surface material, surface color and surface markings of all fixed objects in the position sensing area of the system.
6. A location awareness system as claimed in claim 3, wherein: the two-dimensional position area S of the wireless camera in the system position sensing step (1) is the annular area S₁ bounded, on the horizontal plane of the set height h, by the two circles of radii d−E and d+E centered at the wireless access device position (x₁, y₁), i.e., the area satisfying

(d−E)² ≤ (x−x₁)² + (y−y₁)² ≤ (d+E)²

wherein E is the wireless communication ranging error between the wireless camera and the wireless access device, and h is the set installation height of the wireless camera.
7. A location awareness system as claimed in claim 3, wherein: for an environment in which more than one wireless access device is fixedly installed within the communication range of the wireless camera, the two-dimensional location area S of the wireless camera is the intersection of the areas Sᵢ determined according to claim 6 for each wireless access device; with n wireless access devices, S = S₁ ∩ S₂ ∩ …… ∩ Sₙ.
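The multi-device case can be sketched by testing a point against every per-device annulus; the names and the tuple layout of devices are illustrative assumptions.

```python
# Sketch of the intersection area S = S1 ∩ S2 ∩ … ∩ Sn: a point belongs to S
# only if it lies in the annulus of every wireless access device.

import math

def in_intersection(point, devices):
    """devices: list of (x1, y1, d, E) tuples, one per wireless access device."""
    x, y = point
    return all(
        d - E <= math.hypot(x - x1, y - y1) <= d + E
        for (x1, y1, d, E) in devices
    )
```

Each additional device tightens S, which is what makes the later viewpoint-in-S check in the positioning step more selective.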
8. The location awareness system of claim 2, wherein: the viewpoint positions in step (1) of the machine learning of the image training server are the grid-line intersection points of a square grid with the set positioning precision m as side length, on the horizontal plane of the set height h.
9. The location awareness system of claim 2, wherein in step (1) of the machine learning of the image training server, the view angle directions on the horizontal plane of the set height take due north as the 0° direction and form an angle α = i·θ with it, i = 0, 1, 2, ……, n, n·θ < 360°, where n is an integer and θ is the set positioning angle accuracy; the view angle directions further include directions at an angle β = j·θ with the horizontal plane, |β| < 90°, |j| = 1, 2, ……, k, k·θ < 90°, each projected onto the horizontal plane as a view angle direction α.
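The view-angle enumeration of claim 9 can be sketched as follows; the constraints n·θ < 360° and k·θ < 90° are this sketch's reading of the claim, and the function name is an assumption.

```python
# Enumerate view-angle classes: horizontal directions alpha = i*theta measured
# from due north (0 deg), plus non-zero elevation angles beta = j*theta with
# |beta| < 90 deg; elevation 0 corresponds to the planar case of Fig. 7.

def view_angles(theta, k):
    n = int(360 // theta) - (1 if 360 % theta == 0 else 0)  # n*theta < 360
    alphas = [i * theta for i in range(n + 1)]
    betas = [j * theta for j in range(-k, k + 1)
             if j != 0 and abs(j * theta) < 90]
    # each class is one (horizontal, elevation) direction pair
    return [(a, b) for a in alphas for b in [0] + betas]
```

With θ = 45° and k = 1 this yields 8 horizontal directions, each at elevations −45°, 0° and +45°, i.e. 24 classes per viewpoint.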
10. A location awareness system as claimed in claim 3, wherein: in the system position sensing step (4), when the two-dimensional position area S spans a plurality of unit areas, the image obtained in step (3) is sent into the image classification recognition models of all related areas for classification recognition, the classes and their confidences are obtained respectively, and whether the classification is successful is judged according to the class with the highest confidence.
11. A location awareness system as claimed in claim 3, wherein: the image processing method in the system position sensing step (3) includes: filtering, edge enhancement, brightness adjustment, contrast adjustment, hue adjustment and saturation adjustment.
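The processing chain of claim 11 might look like the following Pillow sketch; the patent names the operations but no library, so the specific filters and enhancement factors here are illustrative choices.

```python
# Illustrative preprocessing per claim 11: filtering, edge enhancement, and
# brightness / contrast / hue / saturation adjustment. The concrete kernels
# and enhancement factors are assumptions, not values from the patent.

from PIL import Image, ImageFilter, ImageEnhance

def preprocess(img: Image.Image) -> Image.Image:
    img = img.filter(ImageFilter.MedianFilter(3))       # filtering (denoise)
    img = img.filter(ImageFilter.EDGE_ENHANCE)          # edge enhancement
    img = ImageEnhance.Brightness(img).enhance(1.1)     # brightness adjustment
    img = ImageEnhance.Contrast(img).enhance(1.2)       # contrast adjustment
    h, s, v = img.convert("HSV").split()
    h = h.point(lambda p: (p + 8) % 256)                # hue adjustment (rotate hue)
    img = Image.merge("HSV", (h, s, v)).convert("RGB")
    img = ImageEnhance.Color(img).enhance(1.05)         # saturation adjustment
    return img
```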
CN202010380587.9A 2020-05-08 2020-05-08 Position sensing system based on three-dimensional model image machine learning Active CN111612904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010380587.9A CN111612904B (en) 2020-05-08 2020-05-08 Position sensing system based on three-dimensional model image machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010380587.9A CN111612904B (en) 2020-05-08 2020-05-08 Position sensing system based on three-dimensional model image machine learning

Publications (2)

Publication Number Publication Date
CN111612904A true CN111612904A (en) 2020-09-01
CN111612904B CN111612904B (en) 2024-02-23

Family

ID=72199559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010380587.9A Active CN111612904B (en) 2020-05-08 2020-05-08 Position sensing system based on three-dimensional model image machine learning

Country Status (1)

Country Link
CN (1) CN111612904B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130023278A1 (en) * 2011-07-18 2013-01-24 Ting-Yueh Chin Rss-based doa indoor location estimation system and method
CN110415302A (en) * 2019-09-02 2019-11-05 中国矿业大学(北京) Mine positioning system based on image recognition
WO2020026514A1 (en) * 2018-08-02 2020-02-06 日本電気株式会社 Indoor position estimation apparatus, indoor position estimation method, and program
CN111046752A (en) * 2019-11-26 2020-04-21 上海兴容信息技术有限公司 Indoor positioning method and device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENBIN ZHANG et al.: "Received signal strength-based indoor localization using hierarchical classification", Sensors *
ZHU Huiping et al.: "Indoor visual place recognition technology based on deep learning", Information Technology *

Also Published As

Publication number Publication date
CN111612904B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
US11100260B2 (en) Method and apparatus for interacting with a tag in a wireless communication area
US7991194B2 (en) Apparatus and method for recognizing position using camera
CN107067794B (en) Indoor vehicle positioning and navigation system and method based on video image processing
US11893317B2 (en) Method and apparatus for associating digital content with wireless transmission nodes in a wireless communication area
US20190325230A1 (en) System for tracking and visualizing objects and a method therefor
CN102960036A (en) Crowd-sourced vision and sensor-surveyed mapping
US20200090405A1 (en) Geophysical sensor positioning system
US11106837B2 (en) Method and apparatus for enhanced position and orientation based information display
CN104333564A (en) Target operation method, system and device
CN111354037A (en) Positioning method and system
EP3940666A1 (en) Digital reconstruction method, apparatus, and system for traffic road
CN112422653A (en) Scene information pushing method, system, storage medium and equipment based on location service
CN111899298B (en) Location sensing system based on live-action image machine learning
Blankenbach et al. Building information systems based on precise indoor positioning
CN112799014A (en) Ultra-wideband positioning system and method based on ellipsoid intersection, wireless terminal and server
CN111612904B (en) Position sensing system based on three-dimensional model image machine learning
CN111601246B (en) Intelligent position sensing system based on space three-dimensional model image matching
US11475177B2 (en) Method and apparatus for improved position and orientation based information display
Jian et al. Hybrid cloud computing for user location-aware augmented reality construction
CN113237464A (en) Positioning system, positioning method, positioner, and storage medium
CN117392364A (en) Position sensing system based on panoramic image deep learning
CN114513746B (en) Indoor positioning method integrating triple vision matching model and multi-base station regression model
Li et al. Research on Semantic Map Generation and Location Intelligent Recognition Method for Scenic SPOT Space Perception
CN110646785B (en) Positioning system for factory line based on array frequency modulation continuous wave and sensing algorithm
KR100959246B1 (en) A method and a system for generating geographical information of city facilities using stereo images and gps coordination

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant