CN114942021A - Terminal positioning method, device, terminal, medium and computer program product - Google Patents

Terminal positioning method, device, terminal, medium and computer program product

Info

Publication number
CN114942021A
Authority
CN
China
Prior art keywords
terminal
target
attribute information
building
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111539192.XA
Other languages
Chinese (zh)
Inventor
金文灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Publication of CN114942021A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

The application discloses a terminal positioning method, a terminal positioning device, a terminal, a medium and a computer program product. The method comprises the following steps: acquiring an environment image of the environment around the terminal, and identifying a target building from the environment image; acquiring target attribute information corresponding to the target building, wherein the target attribute information comprises a positioning position of the target building; acquiring the current orientation of the terminal and the target distance between the terminal and the target building according to the environment image; and determining the positioning position of the terminal according to the current orientation, the target distance and the positioning position of the target building. By adopting the method, the positioning accuracy of the terminal can be improved.

Description

Terminal positioning method, device, terminal, medium and computer program product
RELATED APPLICATIONS
This application claims priority to the Chinese patent application entitled "terminal positioning method, apparatus, terminal, medium, and computer program product", filed on February 8, 2021 with application number 202110181805.0, the entire contents of which are hereby incorporated by reference.
Technical Field
The present application relates to the field of positioning technologies, and in particular, to a method, an apparatus, a terminal, a medium, and a computer program product for positioning a terminal.
Background
With the rapid development of positioning technology, more and more terminals are introduced with positioning functions, and users are more and more dependent on location-based services.
Taking an online ride-hailing service as an example, after the user opens the ride-hailing application on the user terminal, the terminal sends its GPS (Global Positioning System) positioning data to the server, so that the server can carry out service processes such as vehicle matching and navigation to the user's position based on the GPS positioning data.
However, the quality of GPS signals is easily affected by factors such as the surrounding environment, and the accuracy of GPS positioning data is therefore low, so the above-mentioned approach of positioning the terminal by GPS often results in an inaccurate positioning position and low positioning accuracy for the terminal.
Disclosure of Invention
In view of this, the present application discloses a terminal positioning method, apparatus, terminal, medium and computer program product, which may be used to improve the positioning accuracy of the terminal.
In a first aspect, an embodiment of the present application provides a terminal positioning method, where the method includes:
acquiring an environment image of the environment around the terminal, and identifying a target building from the environment image;
acquiring target attribute information corresponding to the target building, wherein the target attribute information comprises a positioning position of the target building;
acquiring the current orientation of the terminal and the target distance between the terminal and the target building according to the environment image; and
determining the positioning position of the terminal according to the current orientation, the target distance and the positioning position of the target building.
In a second aspect, an embodiment of the present application provides a terminal positioning apparatus, where the apparatus includes:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring an environment image of the environment around a terminal and identifying a target building from the environment image;
a second obtaining module, configured to obtain target attribute information corresponding to the target building, where the target attribute information includes a location position of the target building;
the positioning module is used for acquiring the current orientation of the terminal and the target distance between the terminal and the target building according to the environment image; and
determining the positioning position of the terminal according to the current orientation, the target distance and the positioning position of the target building.
In a third aspect, an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method of the first aspect.
In a fifth aspect, the present application provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the method of the first aspect.
According to the terminal positioning method, the terminal positioning device, the terminal, the medium and the computer program product, an environment image of the environment around the terminal is acquired, a target building is identified from the environment image, and target attribute information including the positioning position of the target building is then acquired. After the current orientation of the terminal and the target distance between the terminal and the target building are obtained from the environment image, the positioning position of the terminal can be determined directly from the current orientation, the target distance and the positioning position of the target building, without relying on GPS, which avoids inaccurate terminal positioning caused by unstable GPS signal quality. The embodiments of the application thus improve the positioning accuracy of the terminal.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can obtain other drawings from the disclosed drawings without creative effort.
Fig. 1-1 is a diagram of an application environment of a terminal location method in an embodiment;
fig. 1-2 are diagrams of an application environment of a terminal location method in another embodiment;
fig. 2 is a flowchart illustrating a terminal positioning method according to an embodiment;
FIG. 3 is a schematic diagram of an exemplary location relationship of a target building to a terminal;
fig. 4 is a schematic flowchart of a terminal acquiring a current location of the terminal in another embodiment;
FIG. 5 is a top view of an exemplary destination building and terminal positional relationship;
FIG. 6 is a schematic flow chart of step 403 in another embodiment;
fig. 7 is a schematic flowchart of a terminal acquiring a target distance between the terminal and a target building in another embodiment;
FIG. 8 is a schematic illustration of an exemplary ground target building and terminal field of view angle;
FIG. 9 is a schematic diagram of an exemplary location relationship of a target building to a terminal;
FIG. 10 is a schematic flowchart of step 703 in another embodiment;
FIG. 11 is a schematic top view of an exemplary target building and terminal;
FIG. 12 is a schematic view of the rotation angle of the target building of FIG. 11;
fig. 13 is a schematic flowchart of a terminal determining target attribute information in another embodiment;
fig. 14 is a flowchart illustrating a terminal location method according to another embodiment;
FIG. 15 is a block diagram of a terminal positioning device in accordance with an embodiment;
FIG. 16 is a block diagram showing the structure of a positioning module in another embodiment;
FIG. 17 is a block diagram showing the construction of a second acquisition module in another embodiment;
fig. 18 is an internal structural view of a terminal in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clearly understood, the embodiments of the present application are described in further detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the embodiments of the application and are not intended to limit the embodiments of the application.
First, before specifically describing the technical solution of the embodiment of the present application, a technical background or a technical evolution context on which the embodiment of the present application is based is described.
In the field of online ride-hailing, while a user books a ride through a terminal, the terminal needs to report its position to a server so that the server can match a suitable vehicle for the user based on that position; the server then adds the terminal's position to a navigation map and sends the map to the driver end of the matched vehicle, so that the driver can pick up the user according to the terminal position displayed in the navigation map. However, since the terminal generally uses GPS for positioning, and the quality of GPS signals is easily affected by factors such as the surrounding environment, GPS positioning accuracy is poor, and the position of the terminal in the navigation map often does not coincide with the actual position of the user. How to improve the positioning accuracy of the terminal has therefore become an urgent problem. In addition, it should be noted that the applicant has made substantial creative effort both in identifying how to improve the positioning accuracy of the terminal and in arriving at the technical solutions introduced in the following embodiments.
The following describes technical solutions related to the embodiments of the present application with reference to a scenario in which the embodiments of the present application are applied.
The terminal positioning method provided by the embodiment of the application can be applied to the application environment shown in fig. 1-1. As shown in fig. 1-1, the terminal 102 acquires an environment image of the environment around the terminal 102 and identifies a target building 104 from the environment image; the terminal 102 acquires target attribute information corresponding to the target building 104, where the target attribute information includes the positioning position of the target building 104; the terminal 102 acquires its current orientation and the target distance between the terminal 102 and the target building 104 according to the environment image, and determines the positioning position of the terminal 102 according to the current orientation, the target distance and the positioning position of the target building 104.
The terminal positioning method provided by the embodiment of the application can also be applied to the application environment shown in fig. 1-2. As shown in fig. 1-2, the terminal 102 acquires an environment image of the environment around the terminal 102 and transmits the environment image to the server 106; the server 106 identifies the target building 104 from the environment image; the server 106 obtains target attribute information corresponding to the target building 104, where the target attribute information includes the positioning position of the target building 104; the server 106 obtains the current orientation of the terminal 102 and the target distance between the terminal 102 and the target building 104 according to the environment image, and determines the positioning position of the terminal 102 according to the current orientation, the target distance and the positioning position of the target building 104.
The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices; the server 106 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In one embodiment, as shown in fig. 2, a method for positioning a terminal is provided, which is described by taking the method as an example for being applied to the terminal in fig. 1-1, and includes the following steps:
step 201, the terminal acquires an environment image of the environment around the terminal, and identifies a target building from the environment image.
The terminal may be a terminal installed with an application for providing a service based on a location, such as a network appointment application, a map navigation application, and the like. When the user uses the application program, the terminal needs to report the position of the terminal to the server, so that the terminal needs to be positioned.
Taking online ride-hailing as an example, after a user opens the ride-hailing application on a terminal, the terminal needs to report its position to a server, so that the server can provide the ride-hailing service for the user based on the position of the terminal.
Optionally, an image acquisition component may be disposed in the terminal, and the terminal captures a current surrounding environment of the terminal through the image acquisition component to obtain an environment image. Optionally, the environment image may also be captured by other devices with image capturing function, and the device with image capturing function may send the environment image to the terminal, where the device with image capturing function may be, for example, a portable wearable device, a tablet computer, or the like.
In a possible implementation manner, when the terminal is provided with the image acquisition component, after the terminal detects that the user opens the application program providing the service based on the location, the terminal may shoot the current surrounding environment of the terminal through the image acquisition component to obtain an environment image.
In another possible implementation, after the terminal detects that the user opens the application program providing the service based on the location, the terminal may further output a text prompt message or a voice prompt message to prompt the user to direct an image capturing component of the terminal to a direction in which a building exists in the surrounding environment, and then the terminal captures an environment image.
The terminal then identifies from the environment image a target building, which may be any building in the environment surrounding the terminal. Optionally, the terminal may perform target detection on the environment image using a target detection algorithm to obtain a position frame; the terminal then crops the region corresponding to the position frame from the environment image and inputs the cropped region into a classification model. If the classification result indicates that the environment image contains a building, the terminal determines that a target building has been identified from the environment image and continues to step 202.
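As a loose illustration only, this detect-then-crop-then-classify flow might look like the following Python sketch; the detector and classifier are hypothetical callables, since the patent does not name concrete models:

```python
def identify_target_building(env_image, detector, classifier):
    """Step 201 sketch: detect candidate position frames, crop each region,
    and classify the crop; returns the first frame classified as a building.

    detector and classifier are placeholder callables standing in for the
    target detection algorithm and classification model in the description.
    """
    for box in detector(env_image):  # candidate position frames
        crop = env_image[box.top:box.bottom, box.left:box.right]
        if classifier(crop) == "building":
            return box  # target building identified; proceed to step 202
    return None  # no building found: the terminal may re-prompt the user
```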
Step 202, the terminal obtains target attribute information corresponding to the target building, wherein the target attribute information comprises a positioning position of the target building.
In a possible implementation manner, a building information database may be preset in the terminal, where attribute information of all buildings in a geographic area range to which the terminal belongs is stored in the building information database, and the geographic area range may be flexibly set in the embodiment, for example, the geographic area range may be a city, a province, and the like, and is not limited herein.
In the embodiment of the present application, the attribute information of each building may include a corresponding location position of the building and a building feature, the corresponding location position of the building may be a geographic coordinate of the building, and the building feature may be a color feature, a texture feature, a size feature, and the like inherent to the building.
As an embodiment, the terminal may obtain the target attribute information corresponding to the target building from the environment image and the building information database: the terminal inputs the environment image into a preset neural network model to extract the features of the building in the environment image, compares the extracted building features with the building features in the building information database for similarity, and determines the attribute information corresponding to the building feature with the greatest similarity as the target attribute information. It is understood that the target attribute information includes the positioning position of the target building, i.e., the geographical coordinates of the target building.
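As a non-limiting sketch of this comparison, assuming the database stores one feature vector per building and using cosine similarity (the patent does not fix a metric, so the metric and all names here are assumptions):

```python
import numpy as np

# Hypothetical building records: positioning position (geographic coordinates)
# plus a feature vector extracted offline by the same neural network that is
# applied to the environment image.
building_db = {
    "tower_a": {"location": (39.9042, 116.4074), "features": np.random.rand(128)},
    "mall_b": {"location": (39.9100, 116.4000), "features": np.random.rand(128)},
}

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def match_target_building(env_features: np.ndarray) -> dict:
    """Return the record whose building features are most similar to the
    features extracted from the environment image."""
    name = max(building_db,
               key=lambda k: cosine_similarity(env_features, building_db[k]["features"]))
    return building_db[name]
```

The "location" field of the matched record is the positioning position of the target building used in step 203.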
In another possible implementation, the building information database may also be preset in the server, so that the terminal may send the environment image to the server, and the server determines the target attribute information corresponding to the target building according to the environment image and sends the target attribute information to the terminal, thereby reducing the data processing amount of the terminal and avoiding excessive occupation of the computing resources of the terminal.
The process of determining the target attribute information corresponding to the target building by the server according to the environment image may refer to the process of determining the target attribute information from the building information database by the terminal, and is not repeated here.
Step 203, the terminal acquires the current orientation of the terminal and the target distance between the terminal and the target building according to the environment image, and determines the positioning position of the terminal according to the current orientation, the target distance and the positioning position of the target building.
In the embodiment of the present application, the current orientation of the terminal refers to the angular offset of the terminal's heading relative to a base azimuth, where the base azimuth is one of the cardinal directions east, south, west and north.
Alternatively, the terminal may acquire the known orientation of a reference object from the environment image, acquire the included angle formed between the terminal and a side edge of the reference object, and determine the current orientation of the terminal by combining the known orientation of the reference object with the included angle; the reference object may be, for example, the target building.
Optionally, the current orientation of the terminal may also be user-input, and as an embodiment, the terminal may present an orientation input page including an environment image, prompting the user to input the current orientation of the terminal.
Optionally, a plurality of state sensors, such as a direction sensor, a gyroscope and an inclination sensor, may also be preset in the terminal; the terminal may then use the acquisition of the environment image as a trigger condition for processing the data collected by these state sensors to obtain its current orientation.
In this embodiment of the application, the target distance is the actual distance between the terminal and the target building. Optionally, the target distance may be measured by a distance measurement sensor provided in the terminal; as an implementation, the terminal may use the acquisition of the environment image as a trigger condition for measuring the target distance through the distance measurement sensor.
After the terminal acquires its current orientation and the target distance between the terminal and the target building, the positioning position of the terminal is determined according to the current orientation, the target distance and the positioning position of the target building.
In one possible embodiment, referring to fig. 3, fig. 3 is a schematic diagram of the positional relationship between an exemplary target building and the terminal. Assuming that the obtained positioning position of the target building is (a, b) as shown in fig. 3 and the positioning position of the terminal is (x, y) as shown in fig. 3, the terminal determines the point on the line corresponding to its current orientation whose distance to (a, b) equals the obtained target distance; that point is the positioning position (x, y) of the terminal.
For example, assuming that the current orientation of the terminal is 60° to the east, the terminal lies on the line y = x·tan 60°. Substituting y = x·tan 60° into the condition that the distance between (x, y) and (a, b) equals the target distance, x can be solved for using the two-point distance formula, and y then follows.
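As an illustrative sketch of this worked example, the following Python code solves the resulting quadratic; the coordinate frame (orientation line through the origin) follows fig. 3, and all names and values are hypothetical:

```python
import math

def locate_terminal(a: float, b: float, target_distance: float,
                    orientation_deg: float) -> list[tuple[float, float]]:
    """Find points (x, y) on the line y = x * tan(orientation) whose distance
    to the building position (a, b) equals the target distance.

    Expanding (x - a)^2 + (t*x - b)^2 = s^2 with t = tan(orientation) gives
    (1 + t^2) x^2 - 2 (a + b*t) x + (a^2 + b^2 - s^2) = 0.
    """
    t = math.tan(math.radians(orientation_deg))
    qa = 1.0 + t * t
    qb = -2.0 * (a + b * t)
    qc = a * a + b * b - target_distance ** 2
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0:
        return []  # no point on the line at that distance: inconsistent inputs
    xs = [(-qb + r) / (2.0 * qa) for r in (math.sqrt(disc), -math.sqrt(disc))]
    return [(x, t * x) for x in xs]

# Example: building at (30.0, 40.0), target distance 25.0, orientation 60 deg.
print(locate_terminal(30.0, 40.0, 25.0, 60.0))
```

The quadratic can yield two candidate points; in practice the terminal would keep the one consistent with its heading toward the target building.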
Thus, according to the above embodiment, positioning of the terminal without depending on GPS is realized. Taking online ride-hailing as an example, after the terminal obtains its positioning position through the above embodiment, it can send the positioning position to the server; after the terminal sends a ride request, the server can match a suitable vehicle for the user based on the positioning position of the terminal, add the positioning position of the terminal to the navigation map once a vehicle is matched, and send the map to the driver end of the matched vehicle, so that the driver can pick up the user according to the terminal position displayed in the navigation map.
This embodiment does not need to rely on GPS to position the terminal, thereby avoiding inaccurate positioning positions caused by unstable GPS signal quality. By acquiring an environment image of the environment around the terminal, identifying the target building from the environment image, and then acquiring the target attribute information including the positioning position of the target building, the positioning position of the terminal can be determined directly from the current orientation, the target distance and the positioning position of the target building once the current orientation and the target distance have been obtained from the environment image. This removes the influence of GPS signal quality on positioning accuracy and improves the positioning accuracy of the terminal.
In one embodiment, based on the embodiment shown in fig. 2, referring to fig. 4, the present embodiment relates to the process of how the terminal obtains its current orientation. As shown in fig. 4, the process includes steps 401, 402 and 403:
step 401, the terminal acquires a view angle of the terminal.
The field of view angle of the terminal may be an intrinsic field of view angle of an image capture assembly of the terminal, e.g., 150 °, 160 °, etc.
In step 402, the terminal obtains a rotation angle and a first image scale of the target building based on the environment image.
The rotation angle refers to the angular offset of the target building relative to the base azimuth. Optionally, the terminal may input the environment image into a preset neural network model to obtain the rotation angle of the target building in the environment image; alternatively, the rotation angle of the target building may also be user-input, which is not specifically limited herein.
The first image scale is used to indicate the angle ratio between a first included angle, formed between the terminal and one side edge of the target building, and the view field angle; that is, the first included angle reflects the rotation of the terminal relative to the target building.
The angle represented by the first included angle is described below with reference to the drawings. Referring to fig. 5, fig. 5 is a top view of a positional relationship between an exemplary target building and a terminal, as shown in fig. 5, an angle formed by two dotted lines extending from the terminal is a view angle of the terminal (angle a shown in fig. 5), an angle C is a first included angle, a length a is a length of an environment image, and a length C is a length from one side edge of the target building in the environment image to one side edge of the environment image.
Since ∠C = ∠D + ∠E, where ∠D = (180° − ∠A)/2 and ∠E = (length c / length a) × ∠A, the following Equation 1 can be derived:
∠C = (180° − ∠A)/2 + (length c / length a) × ∠A (Equation 1)
as can be seen from equation 1, there is an angular proportional relationship between the first angle and the angle of the field of view, i.e. a first image scale, the value of which is determined by the above-mentioned length a and the above-mentioned length c.
In the embodiment of the application, the length (namely, the length a) of the environment image is a fixed parameter of an image acquisition assembly of the terminal, and the terminal can read the length a; the length c may be obtained by inputting the environment image into a preset neural network model by the terminal and predicting through the neural network model, so that the terminal may obtain the first image proportion.
And step 403, the terminal acquires the current orientation of the terminal according to the view field angle, the first image proportion and the rotation angle.
After the terminal determines its view field angle, the first image scale and the rotation angle through steps 401 and 402, it can obtain the first included angle formed between the terminal and one side edge of the target building according to Equation 1. The terminal can thus use the target building as a reference target and, from the rotation angle of the reference target and the first included angle between the terminal and the reference target, accurately and quickly determine its current orientation, which improves the positioning speed of the terminal.
In a possible implementation manner of step 403, referring to fig. 6, the terminal may perform the process of implementing step 403 in step 4031 and step 4032 shown in fig. 6:
step 4031, the terminal obtains a first included angle according to the field angle and the first image proportion.
Step 4032, the terminal adds the first included angle and the rotation angle to obtain the current orientation of the terminal.
The terminal substitutes the view field angle and the first image scale into Equation 1 to obtain the first included angle, and then adds the first included angle and the rotation angle to obtain its current orientation.
For example, with continued reference to fig. 3, assuming that the location position of the target building is (a, b) shown in fig. 3, the location position of the terminal is (x, y) shown in fig. 3, the rotation angle of the target building is 45 ° to the east, the first angle is 15 °, and the current orientation of the terminal is 60 ° to the east.
Thus, the current orientation of the terminal can be conveniently determined through this implementation; the computation involved is small and contains no complex operations, so the performance of the terminal is not affected.
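A minimal Python sketch of steps 4031 and 4032 follows, using the worked example above; the view field angle, image scale and angles are illustrative values, not fixed by the patent:

```python
def first_included_angle(fov_deg: float, first_image_scale: float) -> float:
    """Equation 1: C = (180 - A) / 2 + (c / a) * A, where the first image
    scale is the ratio c / a read off the environment image."""
    return (180.0 - fov_deg) / 2.0 + first_image_scale * fov_deg

def current_orientation(fov_deg: float, first_image_scale: float,
                        rotation_deg: float) -> float:
    """Step 4032: orientation = first included angle + building rotation angle."""
    return first_included_angle(fov_deg, first_image_scale) + rotation_deg

# With a 150 deg field of view and a first image scale of 0 (side edge at the
# image border), Equation 1 gives a first included angle of 15 deg; a building
# rotation of 45 deg then yields the 60 deg orientation of the worked example.
print(current_orientation(150.0, 0.0, 45.0))  # -> 60.0
```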
in one embodiment, based on the embodiment shown in fig. 2, referring to fig. 7, the present embodiment relates to a process of how the terminal obtains the target distance between the terminal and the target building. As shown in fig. 7, the process includes steps 701, 702, and 703:
in step 701, the terminal acquires a view field angle of the terminal.
The field of view angle of the terminal may be an intrinsic field of view angle of an image capture component of the terminal, e.g., 150 °, 160 °, and so on.
In step 702, the terminal acquires the rotation angle and the second image scale of the target building based on the environment image.
The manner of acquiring the rotation angle of the target building by the terminal is similar to the step 402, and is not described herein again.
The second image scale is used for indicating an angle ratio between a second included angle formed between the terminal and two side edges of the target building and the view field angle. In the embodiment of the present application, the second image scale may be a ratio of a length of the target building in the environment image to a length of the environment image.
The angle represented by the second included angle is described below with reference to the drawings. Referring to fig. 8, fig. 8 is a schematic view illustrating a view angle between an exemplary target building and a terminal, where as shown in fig. 8, an angle formed by two dotted lines extending from the terminal is a view angle of the terminal, and an angle formed by two solid lines extending from the terminal is a second included angle (an angle B shown in fig. 8) formed between the terminal and two side edges of the target building.
The length a corresponding to the view field angle is the length of the environment image, the length a is a fixed parameter of an image acquisition assembly of the terminal, and the terminal can read the length a; the length b corresponding to the second included angle is the length of the target building in the environment image, and the length b can be obtained by inputting the environment image into a preset neural network model through the terminal and predicting through the neural network model, so that the terminal obtains the second image proportion.
And 703, the terminal acquires a target distance according to the view field angle, the second image proportion, the rotation angle and the target attribute information.
The terminal multiplies the second image scale by its view field angle to obtain the second included angle formed between the terminal and the two side edges of the target building.
Referring to fig. 9, fig. 9 is a schematic diagram of the positional relationship between the target building and the terminal. After acquiring the second included angle, the terminal also acquires the actual length characterized by the length b of the target building in the environment image. In one possible embodiment, the target attribute information corresponding to the target building further includes actual size information of the target building; the terminal may use the real length in the actual size information as the actual length characterized by length b, or use the real width as that actual length, or perform a weighted summation of the real length and real width and use the result as the actual length characterized by length b.
Then, assuming that the actual distances from the terminal to the two side edges of the target building are equal, and each equal to the target distance s, the law of cosines applied to the triangle shown by the dotted lines in fig. 9 gives
b′² = s² + s² − 2s² cos B,
where b′ is the actual length characterized by length b and B is the second included angle, so that
s = b′ / √(2 − 2 cos B).
From this, the actual distance from the terminal to either side edge of the target building, i.e. the target distance, can be calculated.
Therefore, the terminal can quickly determine the target distance between the terminal and the target building based on the second included angle without carrying out complex operation, so that the positioning speed of the terminal is improved, and the calculation burden of the terminal is reduced.
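A sketch of this computation, under the stated assumption that the terminal is equidistant from both side edges (function and variable names, and the example values, are ours):

```python
import math

def target_distance_from_angle(opposite_length: float, second_angle_deg: float) -> float:
    """Law of cosines on the isosceles triangle of fig. 9:
    opposite^2 = 2 * s^2 * (1 - cos B)  =>  s = opposite / sqrt(2 - 2 cos B)."""
    b = math.radians(second_angle_deg)
    return opposite_length / math.sqrt(2.0 - 2.0 * math.cos(b))

# Example: a 40 m building span subtending a 30 deg second included angle.
print(target_distance_from_angle(40.0, 30.0))  # about 77.3 m
```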
In a possible implementation manner of step 703, referring to fig. 10, the terminal may execute a process of implementing step 703 as shown in fig. 10 by step 7031, step 7032, and step 7033:
step 7031, the terminal obtains a second angle according to the view angle and the second image proportion.
Step 7032, the terminal obtains the actual horizontal projection distance between the two side edges of the target building according to the rotation angle and the target attribute information.
As described above, the terminal multiplies the field angle by the second image scale to obtain the second angle.
In this embodiment, the target attribute information corresponding to the target building may further include actual size information of the target building, and the actual size information may include a real length and a real width of the target building.
Hereinafter, the implementation of step 7032 will be described with reference to the drawings. Referring to fig. 11-12, fig. 11 is a schematic top view of an exemplary target building and terminal, and fig. 12 is a schematic view of a rotation angle of the target building of fig. 11.
As shown in fig. 12, angle V is the rotation angle of the target building; the angle between the real length L of the target building and the horizontal direction is then the complement of V, i.e. 90° − V, and the angle between the real width W of the target building and the horizontal direction equals the rotation angle of the target building, i.e. V. Thus, Equation 2 is obtained according to the trigonometric function rules:
d = L × cos(90° − V) + W × cos V (Equation 2)
d is the actual horizontal projection distance between the two side edges of the target building.
And 7033, the terminal calculates the target distance between the terminal and the target building according to the second included angle, the actual horizontal projection distance and a cosine theorem formula.
The terminal uses the actual horizontal projection distance d as the side opposite the second included angle B formed between the terminal and the two side edges of the target building. As in the embodiment above, assuming the actual distance s from the terminal to either side edge of the target building equals the target distance, the law of cosines applied to the triangle shown by the dotted lines in fig. 9 gives
d² = s² + s² − 2s² cos B, i.e. s = d / √(2 − 2 cos B),
from which the actual distance from the terminal to either side edge of the target building, i.e. the target distance, is obtained.
In this way, the terminal calculates the actual horizontal projection distance between the two side edges of the target building from the rotation angle and the target attribute information, and then calculates the target distance between the terminal and the target building from the second included angle, the actual horizontal projection distance and the law-of-cosines formula. This avoids the large target-distance error that arises, in the implementation of step 703 described earlier, when the real length and real width of the target building differ substantially and the terminal directly uses the real width alone as the actual length characterized by length b, and so improves the accuracy of the target distance.
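Combining Equation 2 with the same law-of-cosines step gives the refined distance computation of steps 7031 through 7033; in the sketch below the building dimensions, rotation and image scale are illustrative values:

```python
import math

def horizontal_projection_distance(real_length: float, real_width: float,
                                   rotation_deg: float) -> float:
    """Equation 2: d = L * cos(90 - V) + W * cos V (= L sin V + W cos V)."""
    v = math.radians(rotation_deg)
    return real_length * math.sin(v) + real_width * math.cos(v)

# Hypothetical 40 m x 20 m building rotated 30 deg, observed with a second
# included angle of 150 deg * 0.2 = 30 deg (view field angle x second image scale).
d = horizontal_projection_distance(40.0, 20.0, 30.0)
second_angle = 150.0 * 0.2
s = d / math.sqrt(2.0 - 2.0 * math.cos(math.radians(second_angle)))
print(d, s)
```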
In one embodiment, based on the above-mentioned embodiment shown in fig. 2, referring to fig. 13, this embodiment relates to a process of acquiring, by a terminal, target attribute information corresponding to a target building. As shown in fig. 13, step 202 may include step 2021, step 2022, and step 2023:
step 2021, the terminal obtains an initial positioning position of the terminal.
The initial positioning position may be a GPS positioning position, a BeiDou positioning position, or the like.
Step 2022, the terminal screens out a plurality of candidate attribute information from the building information database according to the initial positioning position.
In the embodiment of the application, to avoid matching against too much attribute information in the building information database, the terminal filters the database using its initial positioning position so as to narrow the search range.
The building information database stores attribute information of all buildings in the geographical area range to which the terminal belongs, and the attribute information comprises the positioning positions of all the buildings. The terminal may calculate a distance between an initial location position of the terminal and a location position of each building, and determine attribute information whose distance is smaller than a preset distance threshold as candidate attribute information.
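A minimal sketch of this screening step (step 2022); the great-circle distance formula, the 500 m threshold and the record fields are assumptions for illustration:

```python
import math

def haversine_m(p: tuple[float, float], q: tuple[float, float]) -> float:
    """Great-circle distance in metres between two (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * math.asin(math.sqrt(h))

def screen_candidates(initial_fix: tuple[float, float], database: list[dict],
                      threshold_m: float = 500.0) -> list[dict]:
    """Keep attribute records whose stored building location lies within the
    preset distance threshold of the terminal's coarse initial fix."""
    return [rec for rec in database
            if haversine_m(initial_fix, rec["location"]) < threshold_m]
```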
In step 2023, the terminal determines target attribute information corresponding to the target building from the plurality of candidate attribute information according to the environment image.
In this way, the terminal searches for the target attribute information corresponding to the target building from the candidate attribute information determined in step 2022, where the distance between the candidate building corresponding to each candidate attribute information and the terminal is smaller than the preset distance threshold.
Therefore, the terminal does not need to search the full building information database for the target attribute information through feature matching, which greatly improves the efficiency of finding the target attribute information corresponding to the target building.
In a possible implementation manner of step 2023, each candidate attribute information includes a plurality of contour point coordinates of the corresponding candidate building and a pixel value corresponding to each contour point coordinate, and step 2023 may include:
step a, the terminal inputs the environment image into the neural network model to obtain the coordinates of each target contour point of the target building.
And b, extracting the target pixel values corresponding to the coordinates of the target contour points from the environment image by the terminal according to the coordinates of the target contour points.
And c, for each candidate attribute information, the terminal respectively carries out similarity calculation on each target pixel value and each pixel value included in the candidate attribute information.
And d, the terminal detects whether the similarity corresponding to the target pixel value is greater than a preset similarity threshold value or not for each target pixel value.
And e, if the number of the target pixel values with the similarity larger than the similarity threshold meets the preset condition, the terminal determines the candidate attribute information as the target attribute information.
For each candidate attribute information, the terminal calculates the similarity between the target pixel value corresponding to each target contour point coordinate and each pixel value included in the candidate attribute information; the similarity calculation may adopt a Euclidean distance method, a Manhattan distance method, or the like, which is not specifically limited herein.
Taking the Euclidean distance method as an example, the terminal calculates the Euclidean distance between a target pixel value and a candidate pixel value; if the Euclidean distance is smaller than a threshold, the similarity between the two pixel values is determined to be greater than the preset similarity threshold.
And if the number of the target pixel values with the similarity larger than the similarity threshold value meets a preset condition, the terminal determines the candidate attribute information as the target attribute information. Therefore, the target attribute information can be accurately and quickly determined in the reduced search range by performing similarity calculation on the pixel values.
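A sketch of this matching rule (steps c through e); the distance threshold and the "preset condition" (here a minimum matching ratio) are assumptions, since the patent leaves them open:

```python
import numpy as np

def candidate_matches(target_pixels: list, candidate_pixels: list,
                      dist_threshold: float = 30.0,
                      min_match_ratio: float = 0.8) -> bool:
    """A candidate is taken as the target building if enough contour-point
    pixel values are similar, similarity being a Euclidean distance between
    pixel values below dist_threshold."""
    matches = sum(
        np.linalg.norm(np.asarray(tp, float) - np.asarray(cp, float)) < dist_threshold
        for tp, cp in zip(target_pixels, candidate_pixels)
    )
    return matches >= min_match_ratio * len(target_pixels)
```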
Hereinafter, the acquisition process of the neural network model according to the above embodiment will be described.
In the embodiment of the application, a plurality of building sample pictures with different illumination conditions and different rotation angles can be collected in advance, and then a rotation angle label, a size label of a sample building in the building sample picture and a coordinate label of each contour point of the sample building are added to each building sample picture to obtain a training sample set.
The terminal may train the initial neural network model framework by using the training sample set to obtain the neural network model of the above embodiment, or the server may train the initial neural network model framework by using the training sample set to obtain the neural network model of the above embodiment, and send the neural network model to the terminal.
The initial neural network model framework may be a residual network or other network model framework, and is not limited herein.
In another possible implementation, the building sample pictures with the three kinds of labels may also be used for separate training, that is, a separate neural network model is trained for each kind of label, which can improve training efficiency; the acquisition and training modes of the neural network are not specifically limited here.
An embodiment of the present application is described below with reference to a specific travel scenario, and specifically with reference to fig. 14, the method includes the following steps:
step 1001, the terminal acquires an environment image of an environment around the terminal, and identifies a target building from the environment image.
When a passenger needs the ride-hailing service, after the passenger opens the ride-hailing application on the terminal, the terminal prompts the user to point its image acquisition component toward a direction in which a building exists in the surrounding environment; the terminal then acquires the environment image and identifies the target building from it.
Step 1002, the terminal obtains target attribute information corresponding to a target building, wherein the target attribute information includes a positioning position of the target building.
Specifically, an initial positioning position of the terminal is obtained, the terminal screens out a plurality of candidate attribute information from a building information database according to the initial positioning position, and the distance between a candidate building corresponding to each candidate attribute information and the terminal is smaller than a preset distance threshold.
The terminal inputs the environment image into the neural network model to obtain the coordinates of each target contour point of the target building; the terminal extracts a target pixel value corresponding to the coordinate of each target contour point from the environment image according to the coordinate of each target contour point; for each candidate attribute information, the terminal respectively carries out similarity calculation on each target pixel value and each pixel value included in the candidate attribute information; for each target pixel value, the terminal detects whether the similarity corresponding to the target pixel value is greater than a preset similarity threshold value; and if the number of the target pixel values with the similarity larger than the similarity threshold value meets the preset condition, the terminal determines the candidate attribute information as the target attribute information.
And step 1003, the terminal acquires the current orientation of the terminal according to the environment image.
Specifically, the terminal acquires its view field angle; based on the environment image, the terminal acquires the rotation angle of the target building and a first image scale, where the first image scale indicates the angle ratio between a first included angle, formed between the terminal and one side edge of the target building, and the view field angle; the terminal obtains the first included angle from the view field angle and the first image scale; and the terminal adds the first included angle and the rotation angle to obtain its current orientation.
And step 1004, the terminal acquires a target distance between the terminal and the target building according to the environment image.
Specifically, the terminal acquires a view field angle of the terminal; the terminal acquires a rotation angle of the target building and a second image proportion based on the environment image, wherein the second image proportion is used for indicating an angle proportion between a second included angle formed between the terminal and two side edges of the target building and a view field angle; the terminal obtains a second included angle according to the view field angle and the second image proportion; the terminal acquires the actual horizontal projection distance between the two side edges of the target building according to the rotation angle and the target attribute information; and the terminal calculates the target distance between the terminal and the target building according to the second included angle, the actual horizontal projection distance and a cosine theorem formula.
Step 1005, the terminal determines the positioning position of the terminal according to the current orientation, the target distance and the positioning position of the target building.
After the terminal obtains its positioning position through the above embodiments, it sends the positioning position to the server of the ride-hailing platform. After the terminal sends a ride request to the server, the server matches a suitable vehicle for the user based on the positioning position of the terminal; once a vehicle is matched, the server adds the positioning position of the terminal to a navigation map and sends the map to the driver end of the matched vehicle, so that the driver can pick up the user according to the terminal position displayed in the navigation map.
It should be understood that, although the steps in the above flowcharts are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 15, there is provided a terminal positioning device including:
a first obtaining module 10, configured to obtain an environment image of an environment around a terminal, and identify a target building from the environment image;
a second obtaining module 20, configured to obtain target attribute information corresponding to the target building, where the target attribute information includes a location position of the target building;
a positioning module 30, configured to obtain the current orientation of the terminal and the target distance between the terminal and the target building according to the environment image; and
to determine the positioning position of the terminal according to the current orientation, the target distance and the positioning position of the target building.
In one embodiment, on the basis of the embodiment shown in fig. 15, referring to fig. 16, the positioning module 30 may include a first obtaining unit 301, a second obtaining unit 302, a third obtaining unit 303, a fourth obtaining unit 304, a fifth obtaining unit 305, a sixth obtaining unit 306, and a positioning unit 307, wherein:
a first acquisition unit 301, configured to acquire a field angle of the terminal;
a second obtaining unit 302, configured to obtain, based on the environment image, a rotation angle of the target building and a first image scale, where the first image scale is used to indicate an angle ratio between a first included angle formed between the terminal and one side edge of the target building and the view field angle;
a third obtaining unit 303, configured to obtain a current position of the terminal according to the view angle, the first image ratio, and the rotation angle.
A fourth obtaining unit 304, configured to obtain a field angle of the terminal;
a fifth obtaining unit 305, configured to obtain, based on the environment image, a rotation angle of the target building and a second image scale indicating an angle ratio between a second included angle formed between the terminal and two side edges of the target building and the view field angle;
a sixth obtaining unit 306, configured to obtain the target distance according to the view angle, the second image scale, the rotation angle, and the target attribute information.
and a positioning unit 307, configured to perform offset processing on the positioning position of the target building using the current orientation and the target distance, so as to obtain the positioning position of the terminal.
In an embodiment, based on the embodiment shown in fig. 16, the third obtaining unit 303 is specifically configured to obtain the first included angle according to the view field angle and the first image scale, and to add the first included angle and the rotation angle to obtain the current orientation of the terminal.
In an embodiment, based on the embodiment shown in fig. 16, the sixth obtaining unit 306 is specifically configured to obtain the second included angle according to the field angle and the second image scale; acquiring the actual horizontal projection distance between the two side edges of the target building according to the rotation angle and the target attribute information; and calculating the target distance between the terminal and the target building according to the second included angle, the actual horizontal projection distance and a cosine theorem formula.
In one embodiment, based on the embodiment shown in fig. 15, referring to fig. 17, the second obtaining module 20 may include:
a position obtaining unit 201, configured to obtain an initial positioning position of the terminal;
a screening unit 202, configured to screen a plurality of candidate attribute information from the building information database according to the initial positioning position, where a distance between a candidate building corresponding to each candidate attribute information and the terminal is smaller than a preset distance threshold;
a determining unit 203, configured to determine, according to the environment image, the target attribute information corresponding to the target building from the plurality of candidate attribute information.
In an embodiment, based on the embodiment shown in fig. 17, each candidate attribute information includes a plurality of contour point coordinates of a corresponding candidate building and a pixel value corresponding to each contour point coordinate, and the determining unit 203 is specifically configured to input the environment image into a neural network model to obtain coordinates of each target contour point of the target building; extracting target pixel values corresponding to the coordinates of the target contour points from the environment image according to the coordinates of the target contour points; for each candidate attribute information, performing similarity calculation on each target pixel value and each pixel value included in the candidate attribute information; for each target pixel value, detecting whether the similarity corresponding to the target pixel value is greater than a preset similarity threshold value; and if the number of the target pixel values with the similarity larger than the similarity threshold value meets a preset condition, determining the candidate attribute information as the target attribute information.
For specific limitations of the terminal positioning apparatus, reference may be made to the above limitations of the terminal positioning method, which are not repeated here. The modules in the terminal positioning apparatus may be implemented wholly or partially in software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, the processor in the terminal, or stored in software in the memory of the terminal, so that the processor can invoke and execute the operations corresponding to the modules.
Fig. 18 is a block diagram of a terminal 1800, shown in one embodiment. For example, the terminal 1800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so forth.
Referring to fig. 18, the terminal 1800 may include one or more of the following components: a processing component 1802, a memory 1804, a power component 1806, a multimedia component 1808, an audio component 1810, an input/output (I/O) interface 1812, a sensor component 1814, and a communication component 1816. The memory stores a computer program or instructions to be executed on the processor.
The processing component 1802 generally controls the overall operation of the terminal 1800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1802 may include one or more processors 1818 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1802 may include one or more modules that facilitate interaction between the processing component 1802 and other components. For example, the processing component 1802 can include a multimedia module to facilitate interaction between the multimedia component 1808 and the processing component 1802.
The memory 1804 is configured to store various types of data to support operation at the terminal 1800. Examples of such data include instructions for any application or method operating on the terminal 1800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 1804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 1806 provides power to the various components of the terminal 1800. The power component 1806 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal 1800.
The multimedia component 1808 includes a touch-sensitive display screen providing an output interface between the terminal 1800 and the user. In some embodiments, the touch display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1808 includes a front camera and/or a rear camera, thereby implementing the image capture function. The front camera and/or the rear camera can receive external multimedia data when the terminal 1800 is in an operating mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1810 is configured to output and/or input audio signals. For example, the audio component 1810 can include a Microphone (MIC) that can be configured to receive external audio signals when the terminal 1800 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 1804 or transmitted via the communication component 1816. In some embodiments, the audio component 1810 also includes a speaker for outputting audio signals.
The I/O interface 1812 provides an interface between the processing component 1802 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1814 includes one or more status sensors for providing status assessments of various aspects of the terminal 1800. For example, the sensor component 1814 can detect the open/closed state of the terminal 1800 and the relative positioning of components such as the display and keypad of the terminal 1800; it can also detect a change in the position of the terminal 1800 or of a component of the terminal 1800, the presence or absence of user contact with the terminal 1800, the orientation or acceleration/deceleration of the terminal 1800, and a change in the temperature of the terminal 1800. The sensor component 1814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 1814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1816 is configured to facilitate wired or wireless communication between the terminal 1800 and other devices. The terminal 1800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1816 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 1800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described terminal positioning methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 1804 including instructions, is also provided; the instructions are executable by the processor 1818 of the terminal 1800 to perform the terminal positioning methods described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by a processor, can carry out the above-mentioned methods. The computer program product includes one or more computer instructions; when the computer instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are implemented in whole or in part.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily; for the sake of brevity, not all possible combinations are described, but as long as there is no contradiction between the combinations of these technical features, they should be considered as being within the scope of the present disclosure.
The above embodiments express only a few implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make variations and modifications without departing from the concept of the embodiments of the present application, and such variations fall within the scope of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the appended claims.

Claims (17)

1. A terminal positioning method, characterized in that the method comprises:
acquiring an environment image of the environment around a terminal, and identifying a target building from the environment image;
acquiring target attribute information corresponding to the target building, wherein the target attribute information comprises a positioning position of the target building;
acquiring a current orientation of the terminal and a target distance between the terminal and the target building according to the environment image; and
determining the positioning position of the terminal according to the current orientation, the target distance and the positioning position of the target building.
2. The method according to claim 1, wherein the obtaining the current orientation of the terminal from the environment image comprises:
acquiring a field-of-view angle of the terminal;
acquiring a rotation angle of the target building and a first image scale based on the environment image, wherein the first image scale indicates an angle ratio between a first included angle, formed between the terminal and one side edge of the target building, and the field-of-view angle; and
acquiring the current orientation of the terminal according to the field-of-view angle, the first image scale and the rotation angle.
3. The method according to claim 2, wherein the obtaining the current orientation of the terminal according to the field-of-view angle, the first image scale and the rotation angle comprises:
acquiring the first included angle according to the field-of-view angle and the first image scale; and
adding the first included angle and the rotation angle to obtain the current orientation of the terminal.
4. The method of claim 1, wherein obtaining the target distance between the terminal and the target building from the environment image comprises:
acquiring a field-of-view angle of the terminal;
acquiring a rotation angle of the target building and a second image scale based on the environment image, wherein the second image scale indicates an angle ratio between a second included angle, formed between the terminal and two side edges of the target building, and the field-of-view angle; and
acquiring the target distance according to the field-of-view angle, the second image scale, the rotation angle and the target attribute information.
5. The method of claim 4, wherein the obtaining the target distance according to the field-of-view angle, the second image scale, the rotation angle, and the target attribute information comprises:
acquiring the second included angle according to the field-of-view angle and the second image scale;
acquiring the actual horizontal projection distance between the two side edges of the target building according to the rotation angle and the target attribute information; and
calculating the target distance between the terminal and the target building according to the second included angle, the actual horizontal projection distance and the cosine theorem.
6. The method of claim 1, wherein the obtaining target attribute information corresponding to the target building comprises:
acquiring an initial positioning position of the terminal;
screening a plurality of candidate attribute information from a building information database according to the initial positioning position, wherein the distance between the candidate building corresponding to each candidate attribute information and the terminal is smaller than a preset distance threshold; and
determining the target attribute information corresponding to the target building from the plurality of candidate attribute information according to the environment image.
7. The method according to claim 6, wherein each of the candidate attribute information includes a plurality of contour point coordinates of a corresponding candidate building and a pixel value corresponding to each of the contour point coordinates, and the determining the target attribute information corresponding to the target building from the plurality of candidate attribute information according to the environment image includes:
inputting the environment image into a neural network model to obtain coordinates of each target contour point of the target building;
extracting target pixel values corresponding to the coordinates of the target contour points from the environment image according to the coordinates of the target contour points;
for each candidate attribute information, calculating a similarity between each target pixel value and the corresponding pixel value included in the candidate attribute information;
for each target pixel value, detecting whether the similarity corresponding to the target pixel value is greater than a preset similarity threshold; and
if the number of target pixel values whose similarity is greater than the similarity threshold meets a preset condition, determining the candidate attribute information as the target attribute information.
8. A terminal positioning apparatus, characterized in that the apparatus comprises:
a first obtaining module, configured to acquire an environment image of the environment around a terminal, and identify a target building from the environment image;
a second obtaining module, configured to obtain target attribute information corresponding to the target building, where the target attribute information includes a positioning position of the target building;
a positioning module, configured to acquire a current orientation of the terminal and a target distance between the terminal and the target building according to the environment image, and
determine the positioning position of the terminal according to the current orientation, the target distance and the positioning position of the target building.
9. The apparatus of claim 8, wherein the positioning module comprises:
a first obtaining unit, configured to obtain a field-of-view angle of the terminal;
a second obtaining unit, configured to obtain, based on the environment image, a rotation angle of the target building and a first image scale indicating an angle ratio between a first included angle, formed between the terminal and one side edge of the target building, and the field-of-view angle; and
a third obtaining unit, configured to obtain the current orientation of the terminal according to the field-of-view angle, the first image scale and the rotation angle.
10. The apparatus according to claim 9, wherein the third obtaining unit is specifically configured to obtain the first included angle according to the field-of-view angle and the first image scale; and add the first included angle and the rotation angle to obtain the current orientation of the terminal.
11. The apparatus of claim 8, wherein the positioning module comprises:
a fourth obtaining unit, configured to obtain a field-of-view angle of the terminal;
a fifth obtaining unit, configured to obtain, based on the environment image, a rotation angle of the target building and a second image scale indicating an angle ratio between a second included angle, formed between the terminal and two side edges of the target building, and the field-of-view angle; and
a sixth obtaining unit, configured to obtain the target distance according to the field-of-view angle, the second image scale, the rotation angle and the target attribute information.
12. The apparatus according to claim 11, wherein the sixth obtaining unit is specifically configured to obtain the second included angle according to the field-of-view angle and the second image scale; obtain the actual horizontal projection distance between the two side edges of the target building according to the rotation angle and the target attribute information; and calculate the target distance between the terminal and the target building according to the second included angle, the actual horizontal projection distance and the cosine theorem.
13. The apparatus of claim 8, wherein the second obtaining module comprises:
a position obtaining unit, configured to obtain an initial positioning position of the terminal;
a screening unit, configured to screen a plurality of candidate attribute information from a building information database according to the initial positioning position, wherein the distance between the candidate building corresponding to each candidate attribute information and the terminal is smaller than a preset distance threshold; and
a determining unit, configured to determine the target attribute information corresponding to the target building from the plurality of candidate attribute information according to the environment image.
14. The apparatus according to claim 13, wherein each candidate attribute information includes a plurality of contour point coordinates of the corresponding candidate building and a pixel value corresponding to each contour point coordinate, and the determining unit is specifically configured to input the environment image into a neural network model to obtain the coordinates of each target contour point of the target building; extract, according to the coordinates of each target contour point, the corresponding target pixel value from the environment image; for each candidate attribute information, calculate a similarity between each target pixel value and the corresponding pixel value included in the candidate attribute information; for each target pixel value, detect whether the similarity corresponding to the target pixel value is greater than a preset similarity threshold; and, if the number of target pixel values whose similarity is greater than the similarity threshold meets a preset condition, determine the candidate attribute information as the target attribute information.
15. A terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
16. A storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
17. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202111539192.XA 2021-02-08 2021-12-15 Terminal positioning method, device, terminal, medium and computer program product Pending CN114942021A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021101818050 2021-02-08
CN202110181805 2021-02-08

Publications (1)

Publication Number Publication Date
CN114942021A (en) 2022-08-26

Family

ID=82905899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111539192.XA Pending CN114942021A (en) 2021-02-08 2021-12-15 Terminal positioning method, device, terminal, medium and computer program product

Country Status (1)

Country Link
CN (1) CN114942021A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination