CN113949734A - Positioning method, device, equipment, medium and program product in subway scene - Google Patents

Positioning method, device, equipment, medium and program product in subway scene

Info

Publication number
CN113949734A
CN113949734A (application CN202111197085.3A, granted as CN113949734B)
Authority
CN
China
Prior art keywords
target
line
subway
target object
state point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111197085.3A
Other languages
Chinese (zh)
Other versions
CN113949734B (en)
Inventor
卞光宇
郭若南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111197085.3A priority Critical patent/CN113949734B/en
Publication of CN113949734A publication Critical patent/CN113949734A/en
Application granted granted Critical
Publication of CN113949734B publication Critical patent/CN113949734B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/021Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W64/00Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/006Locating users or terminals or network equipment for network management purposes, e.g. mobility management with additional information processing, e.g. for direction or speed determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Navigation (AREA)

Abstract

The application provides a positioning method, apparatus, device, storage medium and computer program product in a subway scene, applied to the field of maps. The method comprises the following steps: acquiring the current position of a target object and determining at least one subway line associated with the position; determining, for each subway line, a plurality of state points contained in the line and the line position corresponding to each state point through the plurality of line sections contained in the line; and determining the target position of the target object in the subway line based on the line position of each state point and the current position of the target object. Through the method and the application, the labor cost required for positioning in a subway scene can be saved, and the accuracy of positioning in a subway scene can be improved.

Description

Positioning method, device, equipment, medium and program product in subway scene
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a positioning method, apparatus, device, storage medium, and computer program product in a subway scene.
Background
With the rapid development of subway transit, more and more users rely on the subway for travel. In the related art, positioning in a subway scene is performed through base station signals, assisted by information such as subway line parameter values, total subway operation time, the average acceleration time between every two adjacent stations during operation, and the average stop time at stations. However, this scheme requires detailed operation data of the subway vehicles, which is difficult to obtain; given the large number of subway lines, the labor cost is high, and weak base station signals cause positioning deviation.
Disclosure of Invention
The embodiment of the application provides a positioning method, a positioning device, equipment, a storage medium and a computer program product in a subway scene, which can save the labor cost required by positioning in the subway scene and improve the accuracy of positioning in the subway scene.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a positioning method in a subway scene, which comprises the following steps:
acquiring the current position of a target object, and determining at least one subway line associated with the position;
respectively determining a plurality of state points contained in each subway line and a line position corresponding to each state point through a plurality of line sections contained in the subway line;
and determining the target position of the target object in the subway line based on the line position of each state point and the current position of the target object.
In the above scheme, the determining at least one subway line associated with the position includes:
determining a target area which takes the position as a center and takes the target distance as a radius;
and determining at least one subway line passing through the target area as the subway line associated with the position.
In the above scheme, the determining the line position corresponding to each state point includes:
acquiring the line longitude and latitude information corresponding to each subway line;
searching the longitude and latitude information corresponding to each state point from the line longitude and latitude information;
and determining the longitude and latitude information corresponding to each state point as the line position corresponding to the corresponding state point.
In the foregoing scheme, the obtaining the current position of the target object includes:
performing at least one of the following processes:
positioning processing based on a global positioning system is carried out on the target object to obtain the current position of the target object;
and positioning the target object based on network positioning service to obtain the current position of the target object.
The embodiment of the present application further provides a positioning device in a subway scene, including:
the acquisition module is used for acquiring the current position of the target object and determining at least one subway line associated with the position;
the first determining module is used for respectively determining a plurality of state points contained in each subway line and a line position corresponding to each state point through a plurality of line sections included in the subway line;
and the second determining module is used for determining the target position of the target object in the subway line based on the line position of each state point and the current position of the target object.
In the foregoing solution, the second determining module is further configured to determine, for each subway line, a position score of the corresponding state point based on the line position of each state point and the current position of the target object, where the position score is used to indicate the likelihood that the target object is at the corresponding state point;
and determining the line position corresponding to the state point with the maximum position score as the target position of the target object in the subway line.
In the above scheme, the obtaining module is further configured to determine a target area with the position as a center and a target distance as a radius;
and determining at least one subway line passing through the target area as the subway line associated with the position.
In the foregoing solution, the first determining module is further configured to, for each subway line, respectively perform the following processing:
acquiring a division mode corresponding to a subway line, and dividing the subway line into a plurality of line sections according to the division mode;
and determining the starting point and the end point of the subway line and the dividing point between every two adjacent line sections as the state points in the subway line.
In the above scheme, the first determining module is further configured to obtain line longitude and latitude information corresponding to each subway line;
searching the longitude and latitude information corresponding to each state point from the line longitude and latitude information;
and determining the longitude and latitude information corresponding to each state point as the line position corresponding to the corresponding state point.
In the foregoing solution, when the current position of the target object is obtained by performing network positioning for the first time, the second determining module is further configured to perform the following processing for each of the state points:
acquiring a positioning error corresponding to the current position of the target object, and determining the distance between the line position of the state point and the current position of the target object;
obtaining a first mapping relation among the positioning error, the distance and the position fraction;
determining a position score of the state point based on the first mapping relation in combination with the positioning error and the distance.
In the above scheme, when the subway vehicle in which the target object is located is in a driving state, the second determining module is further configured to determine a traveling direction corresponding to the subway vehicle, and determine a target subway line corresponding to the traveling direction from the at least one subway line;
determining at least one target state point which is not passed by the target object in the traveling direction from a plurality of state points included in the target subway line;
and determining the position score of the corresponding target state point based on the line position of each target state point and the current position of the target object.
In the above scheme, the second determining module is further configured to obtain at least one historical position where the target object is located before the current time point;
performing straight line fitting on the at least one historical position and the current position of the target object, and performing projection processing on the straight line obtained by fitting to a subway line to obtain a projection result;
and taking the projection positive direction of the projection result as the corresponding advancing direction of the subway vehicle.
In the above scheme, the second determining module is further configured to obtain a positioning error corresponding to a current position of the target object;
for a first target state point which is at the top in the travel direction, determining a position score of the first target state point based on the positioning error, the distance between the line position of the first target state point and the position where the target object is currently located;
aiming at each second target state point which is not the first target state point, acquiring a noise parameter and a target position score of a third target state point which is positioned in front of the second target state point and is away from the second target state point by a target distance; determining a position score for the second target state point based on the target position score and the noise parameter.
In the foregoing solution, when the current position of the target object is obtained by performing network positioning for a non-first time, the second determining module is further configured to perform the following processing for each of the state points:
acquiring a positioning error corresponding to the current position of the target object;
determining an error parameter corresponding to the state point based on the positioning error, the distance between the line position of the state point and the current position of the target object;
acquiring a transfer score of the state point corresponding to the current time point and a historical position score of the state point at a target time point;
determining a location score for the state point based on the error parameter, the transition score, and the historical location score;
wherein the target time point is located before the current time point; the transition score is used to indicate the likelihood that the target object moves from each of the other state points to the state point between the target time point and the current time point.
In the above scheme, the second determining module is further configured to obtain a driving speed of a subway vehicle in which the target object is located and a time interval from the target time point to the current time point;
determining the distance between the line position of the state point and the line positions of the other state points;
acquiring a second mapping relation among the travelling speed, the time interval, the distance and the transfer fraction;
and determining a transfer score of the state point corresponding to the current time point based on the second mapping relation by combining the traveling speed, the time interval and the distance.
In the above scheme, the second determining module is further configured to obtain at least one target state point corresponding to the subway station from the plurality of state points when an in-out signal corresponding to the subway station is obtained;
when the current position of the target object is obtained by performing network positioning for the first time, the second determining module is further configured to perform the following processing for each target state point:
acquiring a historical position score of the target state point at a target time point, wherein the target time point is positioned before the current time point;
determining the distance between the line position of the target state point and the current position of the target object;
obtaining a third mapping relation among the historical position score, the distance and the position score;
and determining the position score of the target state point based on the third mapping relation by combining the historical position score and the distance.
In the foregoing solution, the obtaining module is further configured to perform at least one of the following processes:
positioning processing based on a global positioning system is carried out on the target object to obtain the current position of the target object;
and positioning the target object based on network positioning service to obtain the current position of the target object.
In the above scheme, the apparatus further comprises:
the system comprises a presentation module, a positioning module and a display module, wherein the presentation module is used for presenting a map interface comprising at least one subway line and presenting a positioning function item in the map interface;
and presenting the target position of the target object in the corresponding subway line in response to the triggering operation aiming at the positioning function item.
An embodiment of the present application further provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the positioning method in the subway scene provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the present application further provides a computer-readable storage medium, which stores executable instructions, and when the executable instructions are executed by a processor, the positioning method in the subway scene provided by the embodiment of the present application is implemented.
The embodiment of the present application further provides a computer program product, which includes a computer program or an instruction, and when the computer program or the instruction is executed by a processor, the positioning method in the subway scene provided by the embodiment of the present application is implemented.
The embodiment of the application has the following beneficial effects:
by applying the embodiment of the application, the current position of the target object is obtained, and at least one subway line related to the position is determined; then respectively determining a plurality of state points contained in each subway line and line positions corresponding to the state points through a plurality of line sections contained in the subway line; and determining the target position of the target object in the subway line based on the line position of each state point and the current position of the target object. Therefore, the whole positioning process only needs to acquire the line position corresponding to the state point contained in the subway line, and the labor cost is saved; and the line position of the state point contained in the subway line associated with the current position is combined with the current position to determine the target position of the target object in the subway line, so that the accuracy of positioning in the subway scene is improved.
Drawings
Fig. 1 is an architecture diagram of a positioning system 100 in a subway scene provided by an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device 500 for implementing a positioning method in a subway scene according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a positioning method in a subway scene according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a process for determining a position score of a state point according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a determination process of a transition score corresponding to a state point at a current time point according to an embodiment of the present application;
fig. 6 is a schematic display diagram of a target position of a target object in a subway line according to an embodiment of the present application;
fig. 7 is a schematic diagram of an architecture of a positioning system in a subway scene according to an embodiment of the present application;
FIG. 8 is a schematic illustration of a target area provided by an embodiment of the present application;
fig. 9 is a schematic diagram of a status point included in a subway line provided in an embodiment of the present application;
fig. 10 is a schematic positioning flow chart in different motion states according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third" are only used to distinguish similar objects and do not denote a particular order. It is understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running in the terminal for providing various services, such as a map client or a navigation client.
2) In response to: used to indicate the condition or state on which a performed operation depends; when the condition or state is satisfied, the one or more performed operations may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
Based on the above explanations of the terms involved in the embodiments of the present application, the positioning system in a subway scene provided by the embodiments of the present application is described below. Referring to fig. 1, fig. 1 is a schematic diagram of the architecture of a positioning system 100 in a subway scenario. To support an exemplary application, a terminal 400 is connected to a server 200 through a network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and data transmission is implemented using a wireless or wired link.
The terminal 400 (which may be equipped with a client for positioning, such as a map client) is configured to, in response to a positioning instruction for the target object triggered based on the map interface, obtain a current location of the target object, and send the current location of the target object to the server 200;
the server 200 is configured to receive a current location of a target object sent by the terminal 400, and determine at least one subway line associated with the location; respectively determining a plurality of state points contained in each subway line and line positions corresponding to the state points through a plurality of line sections contained in the subway line; determining the target position of the target object in the subway line based on the line position of each state point and the current position of the target object, and sending the target position of the target object in the corresponding subway line to the terminal 400;
and the terminal 400 is configured to receive the target position of the target object in the corresponding subway line sent by the server 200, and present the target position of the target object in the corresponding subway line in the map interface.
In practical application, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart television, a smart watch, and the like. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the present application is not limited thereto.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 500 for implementing a positioning method in a subway scene according to an embodiment of the present application. In practical application, the electronic device 500 may be a server or a terminal shown in fig. 1, and taking the electronic device 500 as the terminal shown in fig. 1 as an example, an electronic device implementing the positioning method in the subway scenario in the embodiment of the present application is described, where the electronic device 500 provided in the embodiment of the present application includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating with other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the positioning apparatus in the subway scene provided by the embodiments of the present application may be implemented in a software manner, and fig. 2 shows a positioning apparatus 555 in the subway scene stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: the obtaining module 5551, the first determining module 5552 and the second determining module 5553 are logical modules, and thus may be arbitrarily combined or further split according to the implemented functions, and the functions of the respective modules will be described below.
In other embodiments, the positioning Device in the subway scene provided by the embodiment of the present Application may be implemented by combining software and hardware, and as an example, the positioning Device in the subway scene provided by the embodiment of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the positioning method in the subway scene provided by the embodiment of the present Application, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
In some embodiments, a terminal or a server may implement the positioning method in the subway scene provided by the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native application (APP), i.e. a program that needs to be installed in an operating system to run, such as a map APP or a navigation APP; a small program (applet), i.e. a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded into any APP. In general, the computer program described above may be any form of application, module or plug-in.
Based on the above description of the positioning system and the electronic device in the subway scene provided by the embodiment of the present application, the following description of the positioning method in the subway scene provided by the embodiment of the present application is provided. In some embodiments, the positioning method in the subway scene provided by the embodiment of the present application may be implemented by a server or a terminal alone, or implemented by a server and a terminal in a cooperation manner, and the following describes the positioning method in the subway scene provided by the embodiment of the present application by taking a terminal embodiment as an example.
Referring to fig. 3, fig. 3 is a schematic flow chart of a positioning method in a subway scene provided in an embodiment of the present application, where the positioning method in the subway scene provided in the embodiment of the present application includes:
step 101: the terminal obtains the current position of the target object and determines at least one subway line related to the position.
Here, the terminal may be installed with a client supporting a positioning function, such as a map client, and when receiving an operation instruction for the client triggered by a user, the terminal operates the client, and when receiving a positioning instruction for the target object, the terminal acquires a current location of the target object. Here, the terminal includes, but is not limited to, a mobile phone, a computer, an intelligent voice interaction device, an intelligent home appliance, a vehicle-mounted terminal, and the like.
In some embodiments, the terminal may obtain the current position of the target object by at least one of the following methods: positioning processing based on a global positioning system is carried out on the target object to obtain the current position of the target object; and positioning the target object based on network positioning service to obtain the current position of the target object.
Here, the terminal may perform positioning processing on the target object through at least one of the global positioning system and a network positioning service to obtain the current position of the target object. In actual implementation, the current position may be obtained through the network positioning service or through the global positioning system; for example, in a subway scene, if global positioning is inaccurate, the target object may be positioned through the network positioning service, or the two may be combined. The current position obtained here can be regarded as an approximate position; the precise position on the subway line where the target object is located will be determined subsequently based on this position.
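As an illustration of choosing between these two position sources, the following sketch assumes a simple policy of preferring a sufficiently accurate GPS fix and otherwise falling back to the network fix; the Fix structure, the function name and the 50-meter threshold are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fix:
    lat: float
    lon: float
    accuracy_m: float  # estimated positioning error in meters

def current_position(gps_fix: Optional[Fix], network_fix: Optional[Fix],
                     gps_accuracy_limit_m: float = 50.0) -> Fix:
    """Return an approximate current position of the target object."""
    if gps_fix is not None and gps_fix.accuracy_m <= gps_accuracy_limit_m:
        return gps_fix        # GPS positioning is available and accurate enough
    if network_fix is not None:
        return network_fix    # otherwise use the network positioning service
    if gps_fix is not None:
        return gps_fix        # a degraded GPS fix is still an approximate position
    raise RuntimeError("no position source available")
```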
In practical applications, the target object may be a user located in a subway scene. And after the terminal acquires the current position of the target object, further determining at least one subway line associated with the current position of the target object.
In some embodiments, the terminal may determine the at least one subway line associated in position by: determining a target area which takes the position as a center and takes the target distance as a radius; and determining at least one subway line passing through the target area as a subway line with associated position.
In practical applications, the subway line associated with the current position of the target object may be determined as follows: a target area is determined with the current position of the target object as the center and the target distance as the radius, and at least one subway line passing through the target area is determined as a subway line associated with the position. Since there may be an error in positioning the current position of the target object, determining the precise position of the target object on a subway line first requires determining the subway lines on which the target object may be located, that is, the at least one subway line associated with the current position. In actual implementation, the positioning error of the current position typically ranges from tens of meters to several kilometers, so the radius of the target area (the target distance) may be determined based on the positioning error or preset according to an empirical value.
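A sketch of this target-area test, assuming each subway line is available as a polyline of (latitude, longitude) points; checking the polyline vertices instead of the exact point-to-segment distance is a simplification made here, not something stated in the patent.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def associated_lines(position, lines, target_distance_m):
    """Return the subway lines passing through the circular target area.

    `lines` maps a line name to its polyline, i.e. a list of (lat, lon) points.
    A line is associated with the position if any of its polyline points falls
    inside the circle of radius `target_distance_m` centered on `position`.
    """
    lat0, lon0 = position
    return [name for name, polyline in lines.items()
            if any(haversine_m(lat0, lon0, lat, lon) <= target_distance_m
                   for lat, lon in polyline)]
```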
Step 102: and respectively determining a plurality of state points contained in each subway line and line positions corresponding to the state points through a plurality of line sections contained in the subway line.
After determining at least one subway line associated with the current position of the target object, the terminal respectively determines a plurality of state points contained in each subway line, and acquires the line position corresponding to each state point for subsequently determining the target position of the target object in the subway line. In practical applications, the state point may be determined based on a plurality of line segments obtained by dividing the corresponding subway line.
In some embodiments, the terminal may determine the plurality of status points included in each subway line respectively by: for each subway line, the following processing is respectively executed: acquiring a division mode corresponding to the subway line, and dividing the subway line into a plurality of line sections according to the division mode; and determining the starting point and the end point of the subway line and the dividing point between every two adjacent line sections as the state points in the subway line.
Here, the terminal may determine the plurality of state points included in each subway line by performing the following processing for each subway line: first, the division mode corresponding to the subway line is acquired, and the subway line is divided into a plurality of line sections according to the division mode. Specifically, the division may be equidistant, using a preset distance (for example, 10 meters; the preset distance may be determined from an empirical value) as the section length; or it may be non-equidistant, based on the distance between the subway stations on the line: when the distance between stations is large (for example, greater than a distance threshold), the divided line sections can be made longer than those of a line whose station spacing is small (for example, smaller than the distance threshold).
After dividing the subway line into a plurality of line sections according to the division mode, the terminal determines the start point and end point of the subway line, together with the division point between every two adjacent line sections, as the state points of the subway line. In actual implementation, the state points may include subway stations. In practice, the interval between two adjacent state points may be smaller than the interval between two adjacent subway stations, so that the determined target position of the target object on the subway line is more accurate: the smaller the interval between state points, the more accurate the determined target position. The interval between state points may be set from an empirical value, which keeps the determined target position accurate while avoiding excessive consumption of computing resources and improving positioning efficiency.
In some embodiments, the terminal may determine the line position corresponding to each state point by: acquiring the line longitude and latitude information corresponding to each subway line; searching the longitude and latitude information corresponding to each state point from the line longitude and latitude information; and determining the longitude and latitude information corresponding to each state point as the line position of the corresponding state point.
Here, in the embodiment of the application, only the line longitude and latitude information of the subway lines needs to be acquired, which reduces the labor cost. When determining the line position corresponding to each state point, the line longitude and latitude information corresponding to each subway line is acquired, the longitude and latitude information corresponding to each state point is searched from it, and that longitude and latitude information is determined as the line position of the corresponding state point.
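The equidistant variant of this division can be sketched as follows, assuming the line's longitude and latitude information is a polyline of (lat, lon) vertices and reusing the haversine_m helper from the earlier sketch; the linear interpolation of latitude/longitude between vertices is an approximation introduced here.

```python
def state_points(polyline, section_length_m):
    """Divide a subway line into equal-length sections and return its state points.

    The start point, the end point and every division point between two adjacent
    line sections become state points; each state point carries the (lat, lon)
    line position interpolated from the line's longitude and latitude data.
    """
    points = [polyline[0]]
    carried = 0.0  # distance travelled along the line since the last state point
    for (lat1, lon1), (lat2, lon2) in zip(polyline, polyline[1:]):
        seg = haversine_m(lat1, lon1, lat2, lon2)
        d = section_length_m - carried
        while d <= seg:
            t = d / seg
            points.append((lat1 + t * (lat2 - lat1), lon1 + t * (lon2 - lon1)))
            d += section_length_m
        carried = (carried + seg) % section_length_m
    if points[-1] != polyline[-1]:
        points.append(polyline[-1])  # the end point of the line is also a state point
    return points
```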
Step 103: and determining the target position of the target object in the subway line based on the line position of each state point and the current position of the target object.
Here, after determining the line position of each status point, the terminal determines the target position of the target object in the subway line by combining the line position based on each status point and the current position of the target object.
In some embodiments, the terminal may determine the target position of the target object in the subway line based on the line position of each state point and the current position of the target object as follows: for each subway line, determining the position score of the corresponding state point based on the line position of each state point and the current position of the target object; and determining the line position corresponding to the state point with the largest position score as the target position of the target object in the subway line. The position score indicates the likelihood that the target object is at the corresponding state point.
After determining the line position of each state point and the current position of the target object, for each subway line the terminal first determines the position score of the corresponding state point from the line position of each state point and the current position of the target object; then, since the position score indicates the likelihood that the target object is at the corresponding state point, the line position corresponding to the state point with the largest position score is determined as the target position of the target object in the subway line.
In practical applications, the position score may be a position probability, i.e. the probability that the target object is at the corresponding state point.
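Assuming the position scores have already been computed, the selection step itself is a simple argmax; a minimal sketch:

```python
def locate_on_line(state_point_positions, position_scores):
    """Return the line position of the state point with the largest position score,
    i.e. the target position of the target object in the subway line."""
    best = max(range(len(position_scores)), key=position_scores.__getitem__)
    return state_point_positions[best]
```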
When the current position of the target object is obtained by performing network positioning for the first time, in some embodiments, the terminal may determine the position score of each state point based on the line position of each state point and the current position of the target object by: for each state point, the following processing is executed: acquiring a positioning error corresponding to the current position of the target object, and determining the distance between the line position of the state point and the current position of the target object; acquiring a first mapping relation among the positioning error, the distance and the position fraction; and determining the position score of the state point based on the first mapping relation by combining the positioning error and the distance.
Here, obtaining the current position of the target object by performing network positioning for the first time means this is the initial positioning, i.e. the position obtained at the first positioning time point. Based on the position obtained at the first positioning time point, the first calculation of the target position on the subway line where the target object is located is executed, that is, positioning of the target position at the first positioning time point is realized.
In practical application, the terminal may first obtain a positioning error corresponding to a current location of the target object, and determine a distance between the route position of the state point and the current location of the target object, where the distance may be an euclidean distance between the route position of the state point and the current location of the target object. A first mapping relationship between the positioning error, the distance and the position score is then obtained, and the position score of the state point is determined based on the first mapping relationship in combination with the positioning error and the distance.
In actual implementation, the first mapping relation may take the following form (the formula is given as an image in the original):
[Formula image: first mapping relation between the positioning error, the distance and the position score]
where a and b are constant parameters, dist is the Euclidean distance between the network-positioned location (i.e. the current position of the target object) and the line position of the state point, and Acc is the positioning error corresponding to the current position of the target object.
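Since the exact formula is only available as an image in the source, the following stand-in is purely illustrative: it preserves only the stated dependence (the score falls as the Euclidean distance grows relative to the positioning error Acc, shaped by constants a and b) and should not be read as the patent's actual first mapping relation.

```python
import math

def first_mapping_score(dist_m, acc_m, a=1.0, b=1.0):
    """Hypothetical stand-in for the first mapping relation: a score that decreases
    as the distance between the state point and the network-positioned location
    grows relative to the positioning error, shaped by constants a and b."""
    return a * math.exp(-b * dist_m / max(acc_m, 1.0))
```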
When the subway vehicle where the target object is located is in a driving state, in some embodiments, the terminal may determine the position score of the corresponding state point based on the line position of each state point and the current position of the target object by: determining a corresponding advancing direction of the subway vehicle, and determining a target subway line corresponding to the advancing direction from at least one subway line; determining at least one target state point which is not passed by the target object in the traveling direction from a plurality of state points contained in the target subway line; and determining the position score of the corresponding target state point based on the line position of each target state point and the current position of the target object.
Here, the state of the target object may be recognized, and the state may include a stationary state, a driving state, and a walking state. When the subway vehicle where the target object is located is identified to be in a driving state, the traveling direction corresponding to the subway vehicle can be determined, and then the target subway line corresponding to the traveling direction is determined from the at least one subway line, so that at least one target state point which is not passed by the target object in the traveling direction can be determined from a plurality of state points included in the target subway line. Specifically, based on the route position of each target state point and the current position of the target object, the position score of the corresponding target state point is determined.
In some embodiments, the terminal may determine the corresponding travel direction of the metro vehicle by: acquiring at least one historical position of a target object before a current time point; performing straight line fitting on at least one historical position and the current position of the target object, and performing projection processing on the straight line obtained by fitting to the subway line to obtain a projection result; and taking the projection positive direction of the projection result as the corresponding advancing direction of the subway vehicle.
Here, the terminal first obtains at least one historical position of the target object before the current time point, for example the historical positions at the most recent target number of positioning time points before the current time point: if the current time point is 00:00:15 and positioning time points are spaced 1 second apart, the 3 most recent positioning time points before 00:00:15 are 00:00:14, 00:00:13 and 00:00:12. Straight-line fitting is then performed on the at least one historical position and the current position of the target object, specifically by least squares, and the fitted straight line is projected onto the subway line to obtain a projection result; the positive direction of the projection is taken as the traveling direction of the subway vehicle.
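A sketch of this direction estimate, assuming positions are (lat, lon) pairs ordered oldest-first and that the local direction of the subway line is available as two consecutive polyline points; fitting the coordinates against time by least squares and taking the sign of a dot product is one simple way to realize the projection described above.

```python
import numpy as np

def travel_direction(history, current, line_point, next_line_point):
    """Estimate whether the subway vehicle is travelling towards `next_line_point`.

    A straight line is fitted (least squares) through the historical positions plus
    the current position; its direction from oldest to newest is projected onto the
    local direction of the subway line, and the sign of the projection decides the
    traveling direction.
    """
    pts = np.array(list(history) + [current], dtype=float)  # rows: (lat, lon)
    t = np.arange(len(pts))
    dlat = np.polyfit(t, pts[:, 0], 1)[0]   # slope of latitude over time
    dlon = np.polyfit(t, pts[:, 1], 1)[0]   # slope of longitude over time
    movement = np.array([dlat, dlon])
    line_dir = np.array(next_line_point, dtype=float) - np.array(line_point, dtype=float)
    return "forward" if float(np.dot(movement, line_dir)) > 0 else "backward"
```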
In some embodiments, the terminal may determine the position score of each target state point based on the route position of the target state point and the current position of the target object by: acquiring a positioning error corresponding to the current position of a target object; determining a position score of a first target state point at the head in the traveling direction based on the positioning error, the distance between the line position of the first target state point and the current position of the target object; aiming at each second target state point which is not the first target state point, acquiring a noise parameter and a target position fraction of a third target state point which is positioned in front of the second target state point and has a target distance with the second target state point; a position score of the second target state point is determined based on the target position score and the noise parameter.
Here, for the first target state point, i.e. the one at the front in the traveling direction, the positioning error corresponding to the current position of the target object is acquired, and the position score of the first target state point is determined from the positioning error and the distance between the line position of the first target state point and the current position of the target object, using the following formula (given as an image in the original):
[Formula image: position score of the first target state point]
where a and b are constant parameters, dist is the Euclidean distance between the network-positioned location (i.e. the current position of the target object) and the line position of the state point, and Acc is the positioning error corresponding to the current position of the target object.
For each second target state point that is not the first target state point, the terminal acquires a noise parameter (used to reduce errors) and the target position score of a third target state point, which is located before the second target state point and is a target distance away from it; the position score of the second target state point is then determined from the target position score and the noise parameter, using the following formula (given as an image in the original):
[Formula image: position score of the second target state point, expressed in terms of the target position score of the third target state point and the noise parameter N_gauss]
When the current position of the target object is obtained by performing network positioning for the non-first time, in some embodiments, the terminal may determine the position score of each state point based on the line position of each state point and the current position of the target object in the following manner, see fig. 4, where fig. 4 is a schematic diagram of a process for determining the position score of the state point provided in the embodiment of the present application:
for each state point, the following processing is executed: step 201: acquiring a positioning error corresponding to the current position of a target object; step 202: determining an error parameter corresponding to the state point based on the positioning error, the distance between the line position of the state point and the current position of the target object; step 203: acquiring a transfer score of the state point corresponding to the current time point and a historical position score of the state point at the target time point; step 204: a position score for the state point is determined based on the error parameter, the transition score, and the historical position score.
Wherein the target time point is located before the current time point; the transition score is used to indicate the likelihood that the target object moves from each of the other state points to this state point between the target time point and the current time point.
Here, the positioning at the first positioning time point determines the position score of each state point at that time point; for each subsequent positioning time point, the position score of each state point can then be calculated from its position score at the previous positioning time point. Specifically, the position score of each state point at a non-first positioning time point may be determined as follows:
firstly, the terminal obtains a positioning error corresponding to the current position of the target object, and then determines an error parameter corresponding to the state point based on the positioning error, the distance (which may be an euclidean distance) between the line position of the state point and the current position of the target object. In practical applications, the error parameter can be calculated by the following formula:
[Formula image: error parameter P_A of the state point]
where P_A is the error parameter corresponding to the state point; a and b are constant parameters; dist_E is the Euclidean distance between the network-positioned location (i.e. the current position of the target object) and the line position of the state point; and Acc is the network positioning error.
Secondly, acquiring a transfer score of the state point corresponding to the current time point and a historical position score of the state point at the target time point; wherein the target time point is before the current time point, and the transition score is used for indicating the possible degree of the target object moving from each other state point to the state point from the target time point to the current time point.
In some embodiments, the terminal may obtain the transition score of the state point corresponding to the current time point in the following manner, referring to fig. 5, where fig. 5 is a schematic diagram of a process for determining the transition score of the state point corresponding to the current time point, provided in this embodiment of the present application, and the process includes: step 301: acquiring the running speed of a subway vehicle where a target object is located and the time interval from a target time point to a current time point; step 302: determining the distance between the line position of the state point and the line positions of other state points; step 303: acquiring a second mapping relation among the driving speed, the time interval, the distance and the transfer fraction; step 304: and determining the transfer score of the state point corresponding to the current time point based on the second mapping relation by combining the traveling speed, the time interval and the distance.
In practical implementation, the second mapping relationship may be as follows:
[Formula image: second mapping relation giving the transition score P_B of the state point at the current time point]
where P_B is the transition score of the state point at the current time point; dist_R is the route distance between this state point and another state point S_0 (i.e. the distance travelled along the line); v is the travelling speed of the subway vehicle; Δt is the time interval from the target time point to the current time point; and c and d are constant parameters.
Third, a position score for the state point is determined based on the error parameter, the transition score, and the historical position score. In practical applications, based on the error parameter, the transition score, and the historical position score, the position score of the state point may be determined by the following formula:
[Formula image: position score of the state point, combining the error parameter, the transition scores and the historical position score]
where the symbols are, respectively, the position score of the state point and its historical position score at the target time point, and ΣP_B denotes the sum of the transition scores of the state point at the current time point.
In some embodiments, when the terminal acquires an in-out signal corresponding to a subway station, at least one target state point corresponding to the subway station in the plurality of state points can be acquired;
accordingly, when the current location of the target object is obtained by performing network positioning for the first time, in some embodiments, the terminal may determine the location score of each status point based on the route location of each status point and the current location of the target object by: for each target state point, the following processing is respectively executed: acquiring historical position scores of target state points at a target time point, wherein the target time point is positioned before a current time point; determining the distance between the line position of the target state point and the current position of the target object; acquiring a third mapping relation among historical position scores, distances and position scores; and determining the position score of the target state point based on the third mapping relation by combining the historical position score and the distance.
Here, in practical application, the corresponding station-in and station-out signals of the subway station can be monitored. When the station in-out signal corresponding to the subway station is acquired, at least one target state point corresponding to the subway station in the plurality of state points can be acquired, so that the position score of each target state point is calculated.
Positioning at the first positioning time point determines the position score of each target state point at that time point; for subsequent positioning time points, the position score of each target state point can be calculated from its position score at the previous positioning time point. Specifically, the position score of each target state point at a non-first positioning time point may be determined as follows:
firstly, acquiring a historical position score of a target state point at a target time point; then, determining the distance between the line position of the target state point and the current position of the target object; then acquiring a third mapping relation among historical position scores, distances and position scores; and determining the position score of the target state point based on the third mapping relation by combining the historical position score and the distance.
In practical implementation, the third mapping relationship may be as follows:
[Formula image in the original filing: the third mapping relation, giving the position score P_t from the historical position score P_{t-1} and the line distance Dist_E, with constant parameters a and b]

wherein P_t is the position score of the target state point; P_{t-1} is the historical position score; a and b are constant parameters; and Dist_E is the line distance between the line position of the target state point and the current position of the target object.
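Since the third mapping is also given only as an image, the sketch below illustrates one possible form in which the historical score of a station-class state point is amplified more strongly the closer it lies to the reported position; the boost function and parameter values are assumptions, not the filed formula.

```python
import math

def station_boosted_score(p_prev: float, dist_e: float,
                          a: float = 2.0, b: float = 100.0) -> float:
    """Hypothetical third-mapping score for a station-class state point.

    p_prev : historical position score P_{t-1} of the target state point
    dist_e : line distance between the state point's line position and
             the current position of the target object (m)
    a, b   : constant parameters (placeholder values)

    Following the description, the score starts from the historical score
    and is amplified more strongly the closer the station-class point is
    to the reported position; the multiplicative boost form is assumed.
    """
    boost = a * math.exp(-dist_e / b)   # larger boost for nearby station points
    return p_prev * (1.0 + boost)
```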
In some embodiments, the terminal may present the target position of the target object in the corresponding subway line by: presenting a map interface comprising at least one subway line, and presenting a positioning function item in the map interface; and presenting the target position of the target object in the corresponding subway line in response to the triggering operation aiming at the positioning function item.
Here, the terminal may provide a map interface for positioning the target object. The map interface includes at least one subway line and also presents a positioning function item, and the user can trigger a positioning instruction for the target object by triggering the positioning function item. When the terminal receives the trigger operation for the positioning function item, it responds by performing positioning processing on the target position of the target object in the subway line; after the target position of the target object in the subway line is determined, the target position of the target object in the corresponding subway line can be presented in the map interface. In practical applications, the target position may be presented as detailed longitude and latitude information, or indicated in the map by an identifier (used to represent the target object) marking where the target object is located on the subway line.
By way of example, referring to fig. 6, fig. 6 is a schematic display diagram of a target position of a target object in a subway line according to an embodiment of the present application. Here, the map interface includes at least one subway line, and a positioning function item "positioning" is also presented, as shown in a diagram a in fig. 6; in response to the triggering operation for the "positioning" of the positioning function item, the target position of the target object in the corresponding subway line can be presented in the map interface, i.e. the target position of the target object in the subway line is indicated in the map by the identifier (which is used for indicating the target object), as shown in the diagram B in fig. 6.
It should be noted that, in the foregoing embodiment, the time point is a positioning time point, and in the present application, the target object may be periodically positioned when each positioning time point arrives according to a preset positioning period, so as to ensure that the target position in the subway line where the target object is located is displayed in real time.
By applying the embodiment of the application, the current position of the target object is obtained, and at least one subway line related to the position is determined; then respectively determining a plurality of state points contained in each subway line and line positions corresponding to the state points through a plurality of line sections contained in the subway line; and determining the target position of the target object in the subway line based on the line position of each state point and the current position of the target object. Therefore, the whole positioning process only needs to acquire the line position corresponding to the state point contained in the subway line, and the labor cost is saved; and the line position of the state point contained in the subway line associated with the current position is combined with the current position to determine the target position of the target object in the subway line, so that the accuracy of positioning in the subway scene is improved.
An exemplary application of the embodiment of the present application in an actual application scenario is described below, taking the case where the target object is a user as an example.
With the rapid development of subway traffic, more and more users rely on subway traffic for going out. In the related technology, a positioning scheme in a subway scene is to perform positioning through base station signals, and perform auxiliary positioning by using information such as subway line parameter values, total subway operation time, average acceleration time between every two adjacent stations in the operation process of the subway, average stop station time and the like. However, the scheme needs detailed operation data of the subway vehicles, the data are difficult to obtain, the number of subway lines is large, and the labor cost is high; and poor base station signals can cause the problem of positioning deviation.
Based on this, the embodiments of the present application provide a positioning method in a subway scene to at least solve the above-mentioned problems. The method does not need to acquire detailed subway operation data; only the longitude and latitude information of the subway line is needed to obtain an accurate positioning result in a subway scene, which reduces the labor cost and at the same time improves the positioning accuracy in the subway scene (including subway stations and the tunnels in which subway vehicles run). The positioning method in a subway scene provided by the embodiment of the present application can be applied to an intelligent terminal, and in particular to the UI display of a map client installed on a terminal (such as a mobile phone): the target position where the user is located in the subway line can be output to the user in real time, either as longitude and latitude information or as an identifier (corresponding to the user) indicating, in the map interface, the target position where the user is currently located in the subway line, as shown in diagram B in fig. 6.
Next, the positioning method in a subway scene provided by the embodiment of the present application is described in detail. The method is mainly applied to an intelligent terminal. Fig. 7 is an architecture diagram of a positioning system in a subway scene provided in an embodiment of the present application, which includes: 1) a motion state identification module; 2) an in-and-out station identification module; 3) a positioning module. Specifically:
1) The motion state identification module calculates the motion state of the current device (static, walking, or driving) using the Inertial Measurement Unit (IMU) sensors configured on most intelligent terminals, namely the accelerometer, gyroscope and magnetometer, together with real-time recording data.
2) The in-and-out station identification module uses the accelerometer and the terminal's real-time recording data stream to identify the door opening and closing alarm sound and the starting and braking behaviors during subway operation, so as to judge the subway's entering and leaving of stations.
3) The positioning module obtains the final positioning result on the subway line where the user is located by using the outputs of the above two modules, the network positioning result obtained from the Wi-Fi and base station information scanned by the terminal, and the position information (such as longitude and latitude information) of the subway line.
The following mainly describes a specific implementation of the positioning module.
First, the current location of the user (i.e., the network positioning position) is determined by network positioning. In practical application, the network positioning error is usually in the range of several tens of meters to several kilometers, so the subway lines passing within n meters of the network positioning position (i.e., the target area) are recalled, where n is determined by the accuracy range given by network positioning: the poorer the network positioning accuracy, the larger the recall range. As shown in fig. 8, fig. 8 is a schematic diagram of a target area provided in the embodiment of the present application. Here, the subway line information of the X line and the Y line is recalled, that is, the longitude and latitude information of the X-line and Y-line subway lines and the longitude and latitude information of their subway stations.
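As an illustration of this recall step, the sketch below tests each subway line's vertices against the target radius around the network fix; the function names and the vertex-only test are illustrative simplifications, not part of the filing.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def recall_lines(fix_lat, fix_lon, radius_m, lines):
    """Return the subway lines that pass within radius_m of the network fix.

    lines: mapping of line name -> list of (lat, lon) vertices of the line.
    A line is recalled if any of its vertices falls inside the target area;
    a production system would test the polyline segments, not just vertices.
    """
    recalled = {}
    for name, pts in lines.items():
        if any(haversine_m(fix_lat, fix_lon, lat, lon) <= radius_m for lat, lon in pts):
            recalled[name] = pts
    return recalled
```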
Second, the recalled longitude and latitude information of each subway line is processed into a string of state points. As shown in fig. 9, fig. 9 is a schematic diagram of the state points included in a subway line provided in the embodiment of the present application. The subway line is divided into a string of state points at a preset interval (e.g., 10 meters); state points within a preset distance (e.g., 150 meters) of a subway station are marked as station-class state points, and the rest are ordinary state points.
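A minimal sketch of this discretization, assuming the line geometry is given as a (lat, lon) polyline and using a planar distance approximation for brevity, could look like this; the names and defaults are illustrative.

```python
import math

def dist_m(p, q):
    """Approximate ground distance in meters between two (lat, lon) points."""
    k = 111320.0  # meters per degree of latitude
    dx = (q[1] - p[1]) * k * math.cos(math.radians((p[0] + q[0]) / 2))
    dy = (q[0] - p[0]) * k
    return math.hypot(dx, dy)

def build_state_points(polyline, stations, spacing_m=10.0, station_radius_m=150.0):
    """Discretize a subway line polyline into a string of evenly spaced state points.

    polyline : list of (lat, lon) vertices of the subway line
    stations : list of (lat, lon) positions of the stations on that line
    Each state point records its position, its cumulative route distance from
    the line start, and whether it is station-class (within station_radius_m
    of some station) or an ordinary state point.
    """
    points = [{"pos": polyline[0], "route_m": 0.0}]
    carried = 0.0   # distance covered since the last emitted state point
    route = 0.0     # cumulative route distance at the start of the current segment
    for p, q in zip(polyline, polyline[1:]):
        seg = dist_m(p, q)
        d = spacing_m - carried
        while d <= seg:                # emit a point every spacing_m along the segment
            t = d / seg
            pos = (p[0] + (q[0] - p[0]) * t, p[1] + (q[1] - p[1]) * t)
            points.append({"pos": pos, "route_m": route + d})
            d += spacing_m
        carried = (carried + seg) % spacing_m
        route += seg
    points.append({"pos": polyline[-1], "route_m": route})   # include the line end point
    for sp in points:
        sp["station_class"] = any(dist_m(sp["pos"], st) <= station_radius_m
                                  for st in stations)
    return points
```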
Third, the starting point of the positioning process (i.e., the position located at the first positioning time point, when navigation positioning starts to be used) is calculated. The position probabilities of all state points at the initial time point (i.e., the first positioning time point) can be calculated by the following formula, where the position score describes the probability that the user is at the corresponding state point; finally, the state point with the highest position probability is selected as the starting point.
[Formula image in the original filing: the initial position probability of a state point, computed from the Euclidean distance Dist between the network positioning position and the state point position, the positioning error Acc, and constant parameters a and b]

wherein a and b are constant parameters; Dist is the Euclidean distance between the network positioning position (i.e., the current position of the user determined by network positioning) and the position of the state point; and Acc is the network positioning error (i.e., the positioning error corresponding to the current position of the target object).
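As with the other formulas, only the variables are recoverable from the text; the sketch below assumes a simple distance decay scaled by the reported accuracy, which is an illustrative choice rather than the filed formula.

```python
import math

def initial_position_score(dist_to_fix_m: float, acc_m: float,
                           a: float = 1.0, b: float = 1.0) -> float:
    """Hypothetical initial position score of a state point.

    dist_to_fix_m : Euclidean distance between the state point and the
                    network positioning position (m)
    acc_m         : reported accuracy (error radius) of the network fix (m)
    a, b          : constant parameters (placeholder values)

    The score falls off with distance, scaled by the fix accuracy so that a
    poor fix penalizes distant points less sharply; the exact formula in the
    filing is given only as an image.
    """
    return a * math.exp(-b * dist_to_fix_m / max(acc_m, 1.0))

# Usage (names refer to the sketches above and are illustrative):
# start = max(state_points, key=lambda s: initial_position_score(
#     dist_m(s["pos"], network_fix), network_acc))
```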
The above steps describe the initialization process when navigation positioning is started, i.e., the target position located at the initial time point (the first positioning time point). At subsequent positioning time points, the positioning module continuously receives the inputs of the motion state identification module, the in-and-out station identification module and the network positioning position, and performs positioning processing accordingly.
Scheme 1) the positioning module performs positioning processing based on the network positioning position (i.e. the current position of the user) output by the network positioning module. In practical applications, the processing may be performed periodically at intervals of 30 seconds.
When a new network location time point is reached, the location probabilities (i.e., the location scores) for all state points are updated. At this time, the position probability of the state point at the current positioning time point may be determined as follows:
[Formula image in the original filing: the position probability of the state point at the current positioning time point, combining the observation probability P_A, the transition probabilities P_B, the position probabilities at the previous positioning time point, and the normalizing sum ΣP_B]

wherein the position probability at the previous positioning time point enters the formula through the transition terms; P_A is the observation probability of the current positioning time point (i.e., the error parameter corresponding to the state point described above). P_A is independent of the position probability at the previous positioning time point, so it acts as a correction and prevents a deviation in the previous position probability from inflating the deviation of subsequent position probabilities. It can be calculated, for example, by the following formula:

[Formula image in the original filing: P_A computed from the Euclidean distance Dist_E between the network positioning position and the state point position, the positioning error Acc, and constant parameters a and b]

wherein a and b are constant parameters; Dist_E is the Euclidean distance between the network positioning position (i.e., the approximate position of the user determined by network positioning) and the position of the state point; and Acc is the positioning error of the network positioning position.
P_B is the transition probability (i.e., the transfer score described above), representing the probability that the user moved from a certain state point S_0 to the current state point in the time interval from the last network positioning time point to the current positioning time point. It can be calculated as follows:

[Formula image in the original filing: P_B computed from the route distance Dist_R between the current state point and S_0, the estimated running distance v̂·Δt, and constant parameters c and d]

wherein Dist_R is the route distance (the distance traveled along the line) between the current state point and S_0; multiplying the estimated speed v̂ of the current subway by the time difference Δt from the last network positioning time point to the current positioning time point gives the estimated subway running distance; the greater the difference between these two distances, the lower the transition probability for that state point. c and d are constant parameters. ΣP_B represents the sum of the transition probabilities of all the state points transitioning to this state point in the time interval from the last network positioning time point to the current positioning time point.
Finally, the target position of the user at the current positioning time point is the line position of the state point with the largest position probability P_t.
And 2) the positioning module performs positioning processing based on the motion state output by the motion state identification module. In practical applications, the processing may be performed periodically at 1 second intervals.
In practical application, the motion state identification module transmits a motion state to the positioning module every second, wherein the motion state comprises still, walking and driving. As shown in fig. 10, fig. 10 is a schematic view of a positioning process in different motion states according to an embodiment of the present application, including:
step 401: acquiring a motion state;
When a static state is received, step 405 is executed: end. That is, the positioning point does not move and no processing is performed.
When the driving state is received, step 404 is executed: position estimation. Specifically, the estimated subway speed v̂ is used to advance the position along the current traveling direction, and the position probability of a state point at this moment is

[Formula image in the original filing: the position probability of the state point, obtained from the position probability of the state point lying a route distance of v̂ × 1 s behind it, plus a Gaussian noise term N_gauss]

wherein the first term is the position probability of the state point that precedes the current state point by a route distance of v̂ × 1 s (i.e., the point the vehicle would have occupied one second earlier), and N_gauss is a Gaussian noise term. Here, the traveling direction of the subway vehicle may be determined as follows: the positioning module internally caches the most recent 5 network positioning positions, fits a straight line to these 5 positioning points by least squares, and takes the direction of the projection of the fitted straight line onto the subway line as the traveling direction. A sketch of this step is given below, after the walking-state case.
When the walking state is received, step 402 is executed: transfer judgment, that is, judging whether a transfer station exists within 500 meters of the current position. If so, step 403 is executed: the transfer line and the current line are reloaded, and the position probabilities of the state points are re-initialized; if not, step 405 is executed: end.
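As an illustration of the position-estimation step 404 described above, the sketch below shifts the score array along the line by the distance covered in one second and estimates the traveling direction from a least-squares fit of cached network fixes; the additive Gaussian-noise form, the index-based fit and all parameter values are assumptions, since the formula itself appears only as an image.

```python
import random

def propagate_scores(scores, spacing_m, v_hat, dt=1.0, noise_sigma=0.01):
    """Shift state-point scores along the line to mimic dt seconds of travel.

    scores    : list of position probabilities, ordered along the travel direction
    spacing_m : distance between adjacent state points (e.g. 10 m)
    v_hat     : estimated subway speed (m/s)
    Each point takes the score of the point v_hat * dt of route distance behind
    it, plus Gaussian noise, per the description above; negative values are clipped.
    """
    shift = int(round(v_hat * dt / spacing_m))   # how many state points the train advances
    out = []
    for i in range(len(scores)):
        prev = scores[i - shift] if i - shift >= 0 else 0.0
        out.append(max(0.0, prev + random.gauss(0.0, noise_sigma)))
    return out

def travel_direction(fixes):
    """Least-squares line fit through the cached network fixes (lat, lon).

    Returns a (dlat, dlon) direction vector of the fitted line, oriented from
    the oldest fix toward the newest; projecting this onto the subway line
    (not shown) gives the traveling direction used above.
    """
    n = len(fixes)
    xs = list(range(n))                          # use the fix index as the x axis
    mean_x = sum(xs) / n
    mean_lat = sum(f[0] for f in fixes) / n
    mean_lon = sum(f[1] for f in fixes) / n
    denom = sum((x - mean_x) ** 2 for x in xs) or 1.0
    slope_lat = sum((x - mean_x) * (f[0] - mean_lat) for x, f in zip(xs, fixes)) / denom
    slope_lon = sum((x - mean_x) * (f[1] - mean_lon) for x, f in zip(xs, fixes)) / denom
    return slope_lat, slope_lon
```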
Scheme 3) the positioning module performs positioning processing based on the in-and-out station signals output by the in-and-out station identification module. In practical application, when an in-and-out station signal is received, the position probability of the station-class state points is amplified. The position probability of each station-class state point can be determined by:
[Formula image in the original filing: the position probability P_t of a station-class state point at the current positioning time point, obtained from its position probability P_{t-1} at the previous positioning time point, the route distance Dist_E, and constant parameters a and b]

wherein a and b are constant parameters; Dist_E represents the route distance between the line position of the station-class state point and the network positioning position; P_t is the position probability of the station-class state point at the current positioning time point; and P_{t-1} is the position probability of the station-class state point at the previous positioning time point.
In practical application, the positioning module performs positioning based on whichever module's output it receives at the current positioning time point: when the network positioning position output by the network positioning module is received, positioning is performed based on scheme 1); when the motion state output by the motion state identification module is received, positioning is performed based on scheme 2); and when the in-and-out station signal output by the in-and-out station identification module is received, positioning is performed based on scheme 3). If the outputs of several modules are received, the positioning processing can be carried out in the order scheme 2) > scheme 1) > scheme 3).
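As a sketch of this dispatch logic, the outputs available at a positioning time point could be handled in the stated priority order as follows; the handler names are hypothetical, standing in for schemes 2), 1) and 3).

```python
def run_positioning_tick(inputs, module):
    """Process whatever module outputs arrived at this positioning time point.

    inputs : dict that may contain 'motion_state', 'network_fix' and/or
             'station_signal', depending on which modules produced output
    module : positioning module exposing handle_motion / handle_network /
             handle_station (hypothetical names), standing in for schemes
             2), 1) and 3) respectively
    When several outputs are present they are handled in the priority
    order scheme 2) > scheme 1) > scheme 3), as described above.
    """
    order = [("motion_state", module.handle_motion),     # scheme 2)
             ("network_fix", module.handle_network),     # scheme 1)
             ("station_signal", module.handle_station)]  # scheme 3)
    for key, handler in order:
        if key in inputs:
            handler(inputs[key])
```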
By applying the embodiment of the application, detailed data of subway operation does not need to be acquired, only the longitude and latitude information of the subway line is needed, and a large amount of manual acquisition is not needed, so that a relatively accurate positioning result in a subway scene can be acquired, the labor cost is reduced, and the positioning accuracy in the subway scene is improved.
Continuing with the exemplary structure in which the positioning device 555 in a subway scene provided by the embodiments of the present application is implemented as software modules, in some embodiments, as shown in fig. 2, the software modules of the positioning device 555 in a subway scene stored in the memory 550 may include:
an obtaining module 5551, configured to obtain a current location of a target object, and determine at least one subway line associated with the location;
a first determining module 5552, configured to determine, through a plurality of line segments included in the subway line, a plurality of state points included in each subway line and a line position corresponding to each state point respectively;
a second determining module 5553, configured to determine a target position where the target object is located in the subway line based on the line position of each of the state points and the current position where the target object is located.
In some embodiments, the second determining module 5552 is further configured to determine, for each of the subway lines, a position score of the corresponding status point based on the line position of each of the status points and the current position of the target object, where the position score is used to indicate a possible degree of the target object at the corresponding status point;
and determining the line position corresponding to the state point with the maximum position score as the target position of the target object in the subway line.
In some embodiments, the obtaining module 5551 is further configured to determine a target area centered at the position and having a radius of a target distance;
and determining at least one subway line passing through the target area as the subway line associated with the position.
In some embodiments, the first determining module 5552 is further configured to, for each of the subway lines, respectively perform the following processing:
acquiring a division mode corresponding to a subway line, and dividing the subway line into a plurality of line sections according to the division mode;
and determining the starting point and the end point of the subway line and the dividing point between every two adjacent line sections as the state points in the subway line.
In some embodiments, the first determining module 5552 is further configured to obtain route longitude and latitude information corresponding to each subway route;
searching longitude and latitude information corresponding to each state point from the longitude and latitude information of the circuit;
and determining the longitude and latitude information corresponding to each state point as the line position corresponding to the corresponding state point.
In some embodiments, when the current position of the target object is obtained by performing network positioning for the first time, the second determining module 5553 is further configured to perform the following processing for each of the state points:
acquiring a positioning error corresponding to the current position of the target object, and determining the distance between the line position of the state point and the current position of the target object;
obtaining a first mapping relation among the positioning error, the distance and the position score;
determining a position score of the state point based on the first mapping relation in combination with the positioning error and the distance.
In some embodiments, when the subway vehicle where the target object is located is in a driving state, the second determining module 5553 is further configured to determine a traveling direction corresponding to the subway vehicle, and determine a target subway line corresponding to the traveling direction from the at least one subway line;
determining at least one target state point which is not passed by the target object in the traveling direction from a plurality of state points included in the target subway line;
and determining the position score of the corresponding target state point based on the line position of each target state point and the current position of the target object.
In some embodiments, the second determining module 5553 is further configured to obtain at least one historical location where the target object is located before the current time point;
performing straight line fitting on the at least one historical position and the current position of the target object, and performing projection processing on the straight line obtained by fitting to a subway line to obtain a projection result;
and taking the projection positive direction of the projection result as the corresponding advancing direction of the subway vehicle.
In some embodiments, the second determining module 5553 is further configured to obtain a positioning error corresponding to a current position of the target object;
for a first target state point which is at the top in the travel direction, determining a position score of the first target state point based on the positioning error and the distance between the line position of the first target state point and the position where the target object is currently located;
aiming at each second target state point which is not the first target state point, acquiring a noise parameter and a target position score of a third target state point which is positioned in front of the second target state point and is away from the second target state point by a target distance; determining a position score for the second target state point based on the target position score and the noise parameter.
In some embodiments, when the current position of the target object is obtained by non-first network positioning, the second determining module 5553 is further configured to perform the following processing for each of the state points:
acquiring a positioning error corresponding to the current position of the target object;
determining an error parameter corresponding to the state point based on the positioning error and the distance between the line position of the state point and the current position of the target object;
acquiring a transfer score of the state point corresponding to the current time point and a historical position score of the state point at a target time point;
determining a location score for the state point based on the error parameter, the transition score, and the historical location score;
wherein the target time point is located before the current time point; the transition score is used for indicating the possible degree of the target object moving from each other state point to the state point from the target time point to the current time point.
In some embodiments, the second determining module 5553 is further configured to obtain a driving speed of a subway vehicle where the target object is located, and a time interval from the target time point to a current time point;
determining the distance between the line position of the state point and the line positions of the other state points;
acquiring a second mapping relation among the travelling speed, the time interval, the distance and the transfer score;
and determining a transfer score of the state point corresponding to the current time point based on the second mapping relation by combining the traveling speed, the time interval and the distance.
In some embodiments, the second determining module 5553 is further configured to, when an in-out signal corresponding to a subway station is obtained, obtain at least one target status point corresponding to the subway station from the plurality of status points;
when the current position of the target object is obtained by performing network positioning for the first time, the second determining module 5553 is further configured to perform the following processing for each target state point:
acquiring a historical position score of the target state point at a target time point, wherein the target time point is positioned before the current time point;
determining the distance between the line position of the target state point and the current position of the target object;
obtaining a third mapping relation among the historical position score, the distance and the position score;
and determining the position score of the target state point based on the third mapping relation by combining the historical position score and the distance.
In some embodiments, the obtaining module 5551 is further configured to perform at least one of the following:
positioning processing based on a global positioning system is carried out on the target object to obtain the current position of the target object;
and positioning the target object based on network positioning service to obtain the current position of the target object.
In some embodiments, the apparatus further comprises:
the system comprises a presentation module, a positioning module and a display module, wherein the presentation module is used for presenting a map interface comprising at least one subway line and presenting a positioning function item in the map interface;
and presenting the target position of the target object in the corresponding subway line in response to the triggering operation aiming at the positioning function item.
By applying the embodiment of the application, the current position of the target object is obtained, and at least one subway line related to the position is determined; then respectively determining a plurality of state points contained in each subway line and line positions corresponding to the state points through a plurality of line sections contained in the subway line; and determining the target position of the target object in the subway line based on the line position of each state point and the current position of the target object. Therefore, the whole positioning process only needs to acquire the line position corresponding to the state point contained in the subway line, and the labor cost is saved; and the line position of the state point contained in the subway line associated with the current position is combined with the current position to determine the target position of the target object in the subway line, so that the accuracy of positioning in the subway scene is improved.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the positioning method in the subway scene provided by the embodiment of the application when the executable instructions stored in the memory are executed.
Embodiments of the present application also provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the positioning method in the subway scene provided by the embodiment of the application.
The embodiment of the present application further provides a computer-readable storage medium, which stores executable instructions, and when the executable instructions are executed by a processor, the positioning method in the subway scene provided by the embodiment of the present application is implemented.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one of or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A method of positioning in a subway scene, the method comprising:
acquiring the current position of a target object, and determining at least one subway line associated with the position;
respectively determining a plurality of state points contained in each subway line and a line position corresponding to each state point through a plurality of line sections contained in the subway line;
and determining the target position of the target object in the subway line based on the line position of each state point and the current position of the target object.
2. The method of claim 1, wherein determining the target location of the target object in the subway line based on the line location of each of the status points and the current location of the target object comprises:
for each subway line, determining a position score of the corresponding state point based on the line position of each state point and the current position of the target object, wherein the position score is used for indicating the possible degree of the target object at the corresponding state point;
and determining the line position corresponding to the state point with the maximum position score as the target position of the target object in the subway line.
3. The method of claim 2, wherein when the current position of the target object is obtained by first network positioning, the determining the position score of the corresponding state point based on the route position of each state point and the current position of the target object comprises:
for each of the state points, the following processing is performed:
acquiring a positioning error corresponding to the current position of the target object, and determining the distance between the line position of the state point and the current position of the target object;
obtaining a first mapping relation among the positioning error, the distance and the position score;
determining a position score of the state point based on the first mapping relation in combination with the positioning error and the distance.
4. The method of claim 2, wherein when the subway vehicle where the target object is located is in a driving state, the determining the position score of the corresponding state point based on the line position of each state point and the current position where the target object is located comprises:
determining a corresponding advancing direction of the subway vehicle, and determining a target subway line corresponding to the advancing direction from the at least one subway line;
determining at least one target state point which is not passed by the target object in the traveling direction from a plurality of state points included in the target subway line;
and determining the position score of the corresponding target state point based on the line position of each target state point and the current position of the target object.
5. The method of claim 4, wherein the determining the corresponding direction of travel of the metro vehicle comprises:
acquiring at least one historical position of the target object before the current time point;
performing straight line fitting on the at least one historical position and the current position of the target object, and performing projection processing on the straight line obtained by fitting to a subway line to obtain a projection result;
and taking the projection positive direction of the projection result as the corresponding advancing direction of the subway vehicle.
6. The method of claim 4, wherein determining a location score for each of the target status points based on the route location of the respective target status point and the current location of the target object comprises:
acquiring a positioning error corresponding to the current position of the target object;
for a first target state point which is at the top in the travel direction, determining a position score of the first target state point based on the positioning error and the distance between the line position of the first target state point and the position where the target object is currently located;
aiming at each second target state point which is not the first target state point, acquiring a noise parameter and a target position score of a third target state point which is positioned in front of the second target state point and is away from the second target state point by a target distance; determining a position score for the second target state point based on the target position score and the noise parameter.
7. The method of claim 2, wherein when the current location of the target object is obtained by non-first network positioning, the determining the location score of the corresponding status point based on the route location of each status point and the current location of the target object comprises:
for each of the state points, the following processing is performed:
acquiring a positioning error corresponding to the current position of the target object;
determining an error parameter corresponding to the state point based on the positioning error and the distance between the line position of the state point and the current position of the target object;
acquiring a transfer score of the state point corresponding to the current time point and a historical position score of the state point at a target time point;
determining a location score for the state point based on the error parameter, the transition score, and the historical location score;
wherein the target time point is located before the current time point; the transition score is used for indicating the possible degree of the target object moving from each other state point to the state point from the target time point to the current time point.
8. The method of claim 7, wherein the obtaining the transition score of the state point corresponding to the current time point comprises:
acquiring the running speed of the subway vehicle where the target object is located and the time interval from the target time point to the current time point;
determining the distance between the line position of the state point and the line positions of the other state points;
acquiring a second mapping relation among the travelling speed, the time interval, the distance and the transfer score;
and determining a transfer score of the state point corresponding to the current time point based on the second mapping relation by combining the traveling speed, the time interval and the distance.
9. The method of claim 2, wherein the method further comprises:
when an in-out signal corresponding to a subway station is acquired, acquiring at least one target state point corresponding to the subway station from the plurality of state points;
when the current position of the target object is obtained by performing network positioning for the first time, determining the position score of the corresponding state point based on the line position of each state point and the current position of the target object includes:
for each target state point, the following processing is respectively executed:
acquiring a historical position score of the target state point at a target time point, wherein the target time point is positioned before the current time point;
determining the distance between the line position of the target state point and the current position of the target object;
obtaining a third mapping relation among the historical position score, the distance and the position score;
and determining the position score of the target state point based on the third mapping relation by combining the historical position score and the distance.
10. The method according to claim 1, wherein said determining a plurality of status points included in each of said subway lines respectively through a plurality of line segments included in said subway line comprises:
for each subway line, the following processing is respectively executed:
acquiring a division mode corresponding to a subway line, and dividing the subway line into a plurality of line sections according to the division mode;
and determining the starting point and the end point of the subway line and the dividing point between every two adjacent line sections as the state points in the subway line.
11. The method of claim 1, wherein the method further comprises:
presenting a map interface comprising at least one subway line, and presenting a positioning function item in the map interface;
and presenting the target position of the target object in the subway line in response to the triggering operation aiming at the positioning function item.
12. A positioning device in a subway scene, said device comprising:
the acquisition module is used for acquiring the current position of the target object and determining at least one subway line associated with the position;
the first determining module is used for respectively determining a plurality of state points contained in each subway line and a line position corresponding to each state point through a plurality of line sections included in the subway line;
and the second determining module is used for determining the target position of the target object in the corresponding subway line based on the line position of each state point and the current position of the target object.
13. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the positioning method in a subway scenario as claimed in any one of claims 1 to 11 when executing the executable instructions stored in the memory.
14. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the method for location determination in a subway scenario as claimed in any one of claims 1 to 11.
15. A computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a processor, implement the positioning method in a subway scenario as claimed in any one of claims 1 to 11.
CN202111197085.3A 2021-10-14 2021-10-14 Positioning method, device, equipment, medium and program product in subway scene Active CN113949734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111197085.3A CN113949734B (en) 2021-10-14 2021-10-14 Positioning method, device, equipment, medium and program product in subway scene

Publications (2)

Publication Number Publication Date
CN113949734A true CN113949734A (en) 2022-01-18
CN113949734B CN113949734B (en) 2024-03-01

Family

ID=79330419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111197085.3A Active CN113949734B (en) 2021-10-14 2021-10-14 Positioning method, device, equipment, medium and program product in subway scene

Country Status (1)

Country Link
CN (1) CN113949734B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090046099A (en) * 2007-11-05 2009-05-11 주식회사 셀리지온 Subway navigation method and system
CN102519456A (en) * 2011-11-28 2012-06-27 华为终端有限公司 Navigation method and mobile terminal of subway line
CN110446255A (en) * 2019-07-29 2019-11-12 深圳数位传媒科技有限公司 A kind of subway scene localization method and device based on communication base station
US20200158522A1 (en) * 2017-07-21 2020-05-21 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for determining a new route in a map
US10694321B1 (en) * 2019-04-09 2020-06-23 Sprint Communications Company L.P. Pattern matching in point-of-interest (POI) traffic analysis
CN111735457A (en) * 2020-06-30 2020-10-02 北京百度网讯科技有限公司 Indoor navigation method and device, electronic equipment and readable storage medium
CN112665606A (en) * 2021-01-29 2021-04-16 北京百度网讯科技有限公司 Walking navigation method, device, equipment and storage medium
CN113079461A (en) * 2021-03-24 2021-07-06 Oppo广东移动通信有限公司 Positioning method, positioning device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113949734B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
CN108955713B (en) Method and device for displaying driving track
CN107449433A (en) The feedback cycle for being used for vehicle observation based on map
CN106959690B (en) Method, device and equipment for searching unmanned vehicle and storage medium
JP5362337B2 (en) Information distribution system, information distribution server, and program
CN109100537B (en) Motion detection method, apparatus, device, and medium
CN112732857B (en) Road network processing method, road network processing device, electronic equipment and storage medium
CN107871400B (en) Road network information updating method and device
CN112368547B (en) Context-aware navigation voice assistant
US20220357181A1 (en) Collecting user-contributed data relating to a navigable network
CN112748453B (en) Road side positioning method, device, equipment and storage medium
CN111352142A (en) Indoor parking positioning method and device, electronic equipment and medium
CN111947665B (en) Navigation control method, device and equipment and computer storage medium
CN111785000B (en) Vehicle state data uploading method and device, electronic equipment and storage medium
US20220219699A1 (en) On-board apparatus, driving assistance method, and driving assistance system
CN113012461A (en) Navigation method, apparatus, device and medium thereof
CN110726414B (en) Method and apparatus for outputting information
CN116853282A (en) Vehicle control method, device, computer equipment and storage medium
CN116443032A (en) Method, system, equipment and storage medium for predicting future long-term vehicle speed
CN110113716B (en) Path state information acquisition method and device and storage medium
CN114689074B (en) Information processing method and navigation method
CN113949734A (en) Positioning method, device, equipment, medium and program product in subway scene
CN113850909B (en) Point cloud data processing method and device, electronic equipment and automatic driving equipment
CN115675528A (en) Automatic driving method and vehicle based on similar scene mining
CN112866915B (en) Navigation information processing method and device, electronic equipment and storage medium
CN115083037A (en) Method and device for updating map network data, electronic equipment and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant