WO2023231425A1 - Positioning method, electronic device, storage medium and program product - Google Patents

Positioning method, electronic device, storage medium and program product

Info

Publication number
WO2023231425A1
WO2023231425A1 (PCT/CN2023/073002, CN2023073002W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
position information
reference object
indoor
image
Prior art date
Application number
PCT/CN2023/073002
Other languages
English (en)
French (fr)
Inventor
陈大伟
陈诗军
李俊强
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2023231425A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Definitions

  • The present application relates to the field of positioning technology, and in particular to a positioning method, an electronic device, a computer storage medium and a computer program product.
  • With the large-scale commercial deployment of 5G and the rapid development of the mobile Internet, indoor positioning based on 5G base stations has received increasing attention.
  • To obtain a high-accuracy positioning effect, and regardless of the specific positioning technology used, indoor positioning requires the position coordinates of the positioning base stations to be surveyed accurately.
  • However, indoor environments are heavily occluded and signal propagation attenuation is large, so even small errors in the base station position lead to large positioning errors.
  • Moreover, indoor positioning base stations are often installed on ceilings, wall lines, columns and other indoor locations that are hard to reach, so they are difficult to measure directly; the base stations may also be blocked by other objects, which makes the positioning measurement still more difficult.
  • In addition, the layout of items in an indoor scene may change, which involves frequent correction of the base station positions, and each re-measurement brings a considerable workload.
  • Embodiments of the present application provide a positioning method, electronic equipment, computer storage media and computer program products, which can effectively reduce the workload of positioning targets and improve the accuracy of positioning results.
  • In a first aspect, embodiments of the present application provide a positioning method. The positioning method includes: acquiring a reference image showing a target to be located and at least one target reference object, acquiring first position information of the target reference object in the reference image, and acquiring relative position information of the target to be located relative to the target reference object in the reference image; acquiring second position information of the target reference object in a pre-generated indoor image; and determining, according to the first position information, the relative position information and the second position information, target position information of the target to be located in the indoor image.
  • Embodiments of the present application also provide an electronic device, including: at least one processor; and at least one memory for storing at least one program; when the at least one program is executed by the at least one processor, the positioning method described above is implemented.
  • Embodiments of the present application also provide a computer-readable storage medium in which a processor-executable program is stored; when the processor-executable program is executed by a processor, it is used to implement the positioning method described above.
  • Embodiments of the present application also provide a computer program product including a computer program or computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer program or the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the positioning method described above.
  • In the embodiments of the present application, there is no need to use other special positioning equipment and no need to consider the influence of the indoor layout: indoor positioning can be achieved simply by obtaining a reference image showing the target to be located and at least one target reference object, together with a pre-generated indoor image, which greatly reduces the workload of indoor positioning. That is, the reference image is matched with the indoor image, the first position information of the target reference object in the reference image and the second position information of the target reference object in the indoor image are obtained, and reference mapping information between the target reference object in the reference image and the target reference object in the indoor image is derived; based on this reference mapping information and the determined relative position information of the target to be located relative to the target reference object in the reference image, the target position information of the target to be located in the indoor image can be obtained accurately. Therefore, the embodiments of the present application can effectively reduce the workload of positioning a target and improve the accuracy of the positioning result, thereby filling the technical gap in related methods.
  • Figure 1 is a flow chart of a positioning method provided by an embodiment of the present application.
  • Figure 2 is a flow chart for obtaining a reference image showing a target to be located and at least one target reference object in a positioning method provided by an embodiment of the present application;
  • Figure 3 is a flow chart of obtaining a reference image showing a target to be located and a target reference object in a positioning method provided by an embodiment of the present application;
  • Figure 4 is a flow chart for obtaining the relative position information of the target to be located relative to the target reference object in the reference image in the positioning method provided by one embodiment of the present application;
  • Figure 5 is a flow chart for generating indoor images in the positioning method provided by an embodiment of the present application.
  • Figure 6 is a flow chart of using geometric calculation to determine the target position information of the target to be located in the indoor image in the positioning method provided by one embodiment of the present application;
  • Figure 7 is a flow chart of using geometric calculation to determine the target position information of a target to be located in an indoor image in a positioning method provided by another embodiment of the present application;
  • Figure 8 is a schematic diagram of the application scenario of the positioning method provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of an application scenario of a positioning method provided by another embodiment of the present application.
  • Figure 10 is a flow chart of a positioning method provided by another embodiment of the present application.
  • Figure 11 is a flow chart for obtaining positioning position information of a target to be positioned in an indoor image in a positioning method provided by an embodiment of the present application;
  • Figure 12 is a schematic flowchart of the execution of a positioning method provided by an embodiment of the present application.
  • Figure 13 is a schematic diagram of an electronic device provided by an embodiment of the present application.
  • The positioning method of one embodiment includes: acquiring a reference image showing a target to be located and at least one target reference object, acquiring first position information of the target reference object in the reference image, and acquiring relative position information of the target to be located relative to the target reference object in the reference image; acquiring second position information of the target reference object in a pre-generated indoor image; and determining, according to the first position information, the relative position information and the second position information, target position information of the target to be located in the indoor image.
  • In this embodiment, there is no need to use other special positioning equipment and no need to consider the influence of the indoor layout: indoor positioning can be achieved simply by obtaining a reference image showing the target to be located and at least one target reference object, together with a pre-generated indoor image, which greatly reduces the workload of indoor positioning. That is, the reference image is matched with the indoor image to obtain the first position information of the target reference object in the reference image and the second position information of the target reference object in the indoor image, from which the reference mapping information between the target reference object in the reference image and the target reference object in the indoor image is derived; based on this reference mapping information and the determined relative position information of the target to be located relative to the target reference object, the target position information of the target to be located in the indoor image is obtained accurately. Therefore, the embodiments of the present application can effectively reduce the workload of positioning a target and improve the accuracy of the positioning result.
  • Figure 1 is a flow chart of a positioning method provided by an embodiment of the present application.
  • the positioning method may include but is not limited to step S110 to step S130.
  • Step S110 Obtain a reference image showing the target to be located and at least one target reference object, obtain the first position information of the target reference object in the reference image, and obtain the relative position information of the target to be located in the reference image relative to the target reference object.
  • In this step, a reference image showing the target to be located and at least one target reference object is obtained, so that the first position information of the target reference object in the reference image and the relative position information of the target to be located relative to the target reference object can both be determined from the obtained reference image. That is to say, different kinds of position information relating to the target reference object can be obtained from the reference image, so that the indoor position of the target to be located can be further determined from this information in subsequent steps.
  • the target to be located may be, but is not limited to, a base station, a transmitting terminal, an access terminal, a network controller, a modulator, a service unit, etc. to be calibrated.
  • When the target to be located is a transmitting terminal or an access terminal, it may be, but is not limited to, user equipment (UE), a subscriber unit, a subscriber station, a mobile station, a mobile terminal, a remote station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent or a user apparatus, etc.
  • the type of the target reference object is not limited here.
  • the target reference object may be, but is not limited to, a pillar, corner, wall line, door, window, etc. in the indoor space where the target to be located is located.
  • The number of target reference objects is not limited here. For example, when one target reference object is selected, only that target reference object needs to be shown in the acquired reference image; similarly, when two target reference objects are selected, both need to be shown simultaneously in the acquired reference image. That is to say, all of the selected target reference objects need to be visible in the obtained reference image.
  • The manner of acquiring the reference image, or in other words the form in which the reference image is presented, can be of various kinds, which is not limited here.
  • For example, the reference image can be obtained by taking a photo, in which case the reference image is the photo taken. Specifically, when shooting, the target to be located and the selected target reference object are both included in the field of view of a camera, mobile phone or other shooting device, so that the photo taken shows the target to be located and the selected target reference object. The photo taken can then be processed, for example but not limited to by image processing, pattern recognition and similar techniques, to extract the first position information of the target reference object in the photo.
  • the first location information may be presented in multiple forms, which is not limited here.
  • For example, the first position information may be the physical coordinate information of the target reference object in the reference image, where physical coordinate information refers to the coordinate information of the target reference object in the world coordinate system, i.e. its absolute coordinates in the world coordinate system. As another example, the first position information may be the pixel coordinate information of the target reference object in the reference image, where pixel coordinate information refers to the coordinate information of the target reference object in the reference-image coordinate system, i.e. its relative coordinates in that coordinate system. When the coordinate information of the reference image in the world coordinate system is known, the pixel coordinate information of the target reference object in the reference image can be converted into its physical coordinate information. That is to say, the first position information can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select the corresponding presentation mode of the first position information according to the specific application scenario.
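For illustration only, the following is a minimal sketch of the pixel-to-physical conversion mentioned above. It assumes a uniform scale (metres per pixel), image axes aligned with the world axes, and a known world position for the image's pixel origin; the function name and numeric values are not taken from the patent.

```python
def pixel_to_world(px, py, origin_world, metres_per_pixel):
    """Convert a pixel coordinate in an image to world (physical) coordinates.

    Assumes the image axes are aligned with the world axes and that a single
    uniform scale applies; both are simplifying assumptions for illustration.
    """
    ox, oy = origin_world            # world position of the image's pixel origin
    return (ox + px * metres_per_pixel,
            oy + py * metres_per_pixel)

# Example: a reference object detected at pixel (320, 240) in an image whose
# pixel origin maps to world point (2.0 m, 5.0 m) at 0.01 m per pixel.
print(pixel_to_world(320, 240, (2.0, 5.0), 0.01))   # -> (5.2, 7.4)
```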
  • the relative position information may be presented in multiple forms, which is not limited here.
  • For example, the relative position information can be the distance between the target reference object and the target to be located, where the distance is the relative distance between the two in the reference image. As another example, the relative position information can be the position of the target reference object relative to the target projection, where the target projection is the projection of the target to be located onto the target reference object. As a further example, the relative position information can be both the distance between the target reference object and the target to be located and the position of the target reference object relative to the target projection; that is to say, the relative position information can be represented by a combination of these two relative position relationships.
  • the relative position relationship may be determined using, but not limited to, image processing, pattern recognition and other technologies.
  • Step S110 includes but is not limited to steps S111 and S112.
  • Step S111 Determine at least one target reference object in the same plane as the target to be located;
  • Step S112 Take photos of the target to be located and at least one target reference object to obtain a reference image showing the target to be located and the target reference object.
  • In this step, at least one target reference object that is in the same plane as the target to be located is determined, which makes it convenient to photograph the target to be located together with the at least one target reference object. Since the two are in the same plane, there is no planar difference between them, so when photographing them both it is easy to include them in the shooting range with a simple shot and obtain a photo that meets the requirements, i.e. a reference image showing the target to be located and the target reference object.
  • Because the target to be located and the target reference object are in the same plane, their physical coordinates are at the same level, i.e. the two positions do not differ in spatial dimension. This makes it possible in subsequent steps to calculate the target position information of the target to be located in the indoor image accurately and reliably using geometric methods.
  • the relative spatial relationship between the target to be located and the target reference object can be more diverse, and those skilled in the art can select and set according to specific scenarios, which is not limited here.
  • Step S112 includes but is not limited to step S1121.
  • Step S1121 Take a frontal photo of the target to be located and at least one target reference object to obtain a reference image showing the target to be located and the target reference object.
  • In this step, by taking a frontal photo of the target to be located and the at least one target reference object, a reference image that shows them relatively intuitively can be obtained. That is to say, a frontal photo presents the whole of the target to be located and the target reference object (including details that might otherwise be easy to overlook) more completely, so that when the reference image is subsequently parsed to obtain the relevant position-information parameters, such as the first position information and the relative position information, a high parsing accuracy is ensured, which is conducive to obtaining more accurate position-information parameters.
  • The angle and manner of photographing the target to be located and the target reference object can also take many other forms, for example photographing from each side or at a set angle; those skilled in the art can also select the angle and manner of photographing according to the specific scenario, which is not limited here.
  • Step S110 includes but is not limited to steps S113 and S114.
  • Step S113 Obtain the pixel coordinate information of the target reference object and the pixel coordinate information of the target to be located from the reference image;
  • Step S114 Determine the relative position information of the target reference object to the target to be located based on the pixel coordinate information of the target reference object and the pixel coordinate information of the target to be located.
  • In this step, the pixel coordinate information of the target reference object and the pixel coordinate information of the target to be located are obtained separately from the reference image, so that the relative position information can be determined from the difference between them. Since the pixel coordinate information of the target reference object and of the target to be located can be determined accurately, i.e. the obtained pixel coordinate information has high accuracy, the relative position information finally determined from it is also highly accurate.
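The following is a minimal sketch of steps S113 and S114 under the assumption that the pixel coordinates of the target reference object and of the target to be located have already been extracted from the reference image; the function name and example coordinates are illustrative and not taken from the patent.

```python
import math

def relative_position(ref_px, target_px):
    """Relative position of the target w.r.t. a reference object, in pixels.

    ref_px and target_px are (x, y) pixel coordinates extracted from the
    reference image (e.g. by image processing / pattern recognition).
    Returns the pixel distance and the pixel offset vector.
    """
    dx = target_px[0] - ref_px[0]
    dy = target_px[1] - ref_px[1]
    return math.hypot(dx, dy), (dx, dy)

distance_px, offset_px = relative_position((120, 80), (420, 80))
print(distance_px, offset_px)   # -> 300.0 (300, 0)
```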
  • Step S120 Obtain the second position information of the target reference object in the pre-generated indoor image.
  • In this step, the indoor image is an image corresponding to the indoor space; it can be, for example but not limited to, a CAD drawing, a Pro/Engineer drawing or another two- or three-dimensional drawing, that is to say, an image of how the indoor space is actually laid out.
  • Since the indoor image is pre-generated, the second position information of the target reference object in the pre-generated indoor image can be obtained from the indoor image, so that the second position information can be compared with the first position information in subsequent steps.
  • indoor images can be generated or drawn in various ways, which are not limited here.
  • the indoor image may be, but is not limited to, generated based on the following steps S200 and S300.
  • Step S200 Obtain indoor position parameters of the indoor space.
  • Step S300 Generate an indoor image corresponding to the indoor space according to the indoor location parameters.
  • In this step, since the indoor position parameters are position parameters associated with the indoor space, obtaining the indoor position parameters of the indoor space makes it possible to determine the overall distribution of the indoor space, so that an indoor image corresponding to the indoor space can be generated accurately from the indoor position parameters; the generated indoor image is then used to determine the second position information of the target reference object in the indoor image.
  • Two-dimensional or three-dimensional CAD drawing software may be used, but is not limited to being used, to perform step S300 and generate an indoor image that meets the requirements; alternatively, other drawing software that is similar to CAD drawing software and has similar functions may be used for the drawing, which is not limited here.
  • The indoor position parameters can be of various kinds, which is not limited here. For example, the indoor position parameters include at least one of the following: wall line position parameters; door position parameters; column position parameters; wall position parameters.
  • The indoor position parameters can also be several different ones; different indoor position parameters reflect different structural aspects of the indoor space, and for a specific scenario one or more of them can be selected according to the actual situation as the overall indoor position parameters, for example selecting the position parameters of large-scale objects such as indoor wall lines, corners, columns, doors and windows to complete the drawing of the indoor image, which is not limited here (a rough drawing sketch is given after this item).
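As a rough illustration of steps S200 and S300, the sketch below draws a simple indoor image from hypothetical indoor position parameters given as wall-line and door segments in metres; the patent itself uses CAD or similar drawing software, so the data and approach here are assumed stand-ins.

```python
import matplotlib.pyplot as plt

# Hypothetical indoor position parameters (metres); the segment values below
# are invented for illustration and are not taken from the patent.
wall_lines = [((0, 0), (10, 0)), ((10, 0), (10, 8)), ((10, 8), (0, 8)), ((0, 8), (0, 0))]
doors = [((3, 0), (4, 0)), ((10, 2), (10, 3))]

fig, ax = plt.subplots()
for (x1, y1), (x2, y2) in wall_lines:
    ax.plot([x1, x2], [y1, y2], color="black")   # wall lines
for (x1, y1), (x2, y2) in doors:
    ax.plot([x1, x2], [y1, y2], color="red")     # door openings
ax.set_aspect("equal")
fig.savefig("indoor_image.png")                  # the pre-generated indoor image
```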
  • the second location information may be presented in multiple forms, which is not limited here.
  • For example, the second position information may be the physical coordinate information of the target reference object in the indoor image, where physical coordinate information refers to the coordinate information of the target reference object in the world coordinate system, i.e. its absolute coordinates in the world coordinate system. As another example, the second position information may be the pixel coordinate information of the target reference object in the indoor image, where pixel coordinate information refers to the coordinate information of the target reference object in the indoor-image coordinate system, i.e. its relative coordinates in that coordinate system. When the coordinate information of the indoor image in the world coordinate system is known, the pixel coordinate information of the target reference object in the indoor image can be converted into its physical coordinate information. That is to say, the second position information can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select the corresponding presentation mode of the second position information according to the specific application scenario.
  • step S110 also includes but is not limited to step S140.
  • Step S140 Select at least one target reference object in the indoor image.
  • Since the indoor image is generated in advance, the required at least one target reference object can likewise be selected in the indoor image in advance, so that when the reference image is subsequently obtained the target reference object does not need to be selected again or additionally. This reduces the workload of obtaining the reference image and helps to obtain a reference image that meets the requirements more effectively and reliably.
  • Step S130 Determine the target position information of the target to be located in the indoor image based on the first position information, relative position information and second position information.
  • In this step, there is no need to use other special positioning equipment and no need to consider the influence of the indoor layout: indoor positioning can be achieved simply by obtaining a reference image showing the target to be located and at least one target reference object, together with the pre-generated indoor image, which greatly reduces the workload of indoor positioning. That is, the reference image is matched with the indoor image to obtain the first position information of the target reference object in the reference image and the second position information of the target reference object in the indoor image, from which the reference mapping information between the target reference object in the reference image and the target reference object in the indoor image is derived; based on this reference mapping information and the determined relative position information of the target to be located relative to the target reference object, the target position information of the target to be located in the indoor image is obtained accurately. Therefore, the embodiments of the present application can effectively reduce the workload of positioning a target and improve the accuracy of the positioning result, thereby filling the technical gap in related methods.
  • the target location information may be presented in a variety of forms, which is not limited here.
  • For example, the target position information can be the physical coordinate information of the target to be located in the indoor image, where physical coordinate information refers to the coordinate information of the target to be located in the world coordinate system, i.e. its absolute coordinates in the world coordinate system. As another example, the target position information can be the pixel coordinate information of the target to be located in the indoor image, i.e. its relative coordinates in the indoor-image coordinate system; when the coordinate information of the indoor image in the world coordinate system is known, the pixel coordinate information of the target to be located in the indoor image can be converted into its physical coordinate information, which makes it possible to determine the actual position of the target to be located more accurately and facilitates any repair or replacement that may be required. That is to say, the target position information can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select the corresponding presentation mode of the target position information according to the specific application scenario.
  • One embodiment of the present application further describes step S130, which includes but is not limited to step S131.
  • Step S131 Based on the first position information, relative position information and second position information, use geometric calculation to determine the target position information of the target to be located in the indoor image.
  • In this step, since the first position information, the relative position information and the second position information respectively indicate the geometric positions involved, namely that of the target to be located, that between the target to be located and the target reference object, and that of the target reference object, a geometric calculation method can be used to determine the target position information of the target to be located in the indoor image by computing these geometric positions.
  • the specific means of the geometric calculation method are not limited, and those skilled in the art can select and calculate according to the actual application scenario.
  • For example, each piece of position information is input into a preset geometric calculation program, which outputs the target position information of the target to be located in the indoor image; as another example, an external operator sets a corresponding geometric calculation method based on the acquired position information.
  • One embodiment of the present application further describes step S131, which includes but is not limited to steps S1311 to S1312.
  • Step S1311 Determine reference mapping information between the target reference object in the reference image and the target reference object in the indoor image based on the first position information and the second position information;
  • Step S1312 Based on the first position information, the second position information, the reference mapping information and the relative position information, use geometric calculation to determine the target position information of the target to be located in the indoor image.
  • In this step, since the first position information and the second position information reflect the different, distinguishing positions of the target reference object, the reference mapping information between the target reference object in the reference image and the target reference object in the indoor image can be determined from the first position information and the second position information; then, based on the first position information, the second position information, the reference mapping information and the relative position information, the target position information of the target to be located in the indoor image can be determined accurately using geometric calculation.
  • In one embodiment, the reference mapping information includes at least one of the following: a scale relationship; a projection relationship. The scale relationship represents the scale used for conversion between the reference image and the indoor image, and the projection relationship represents the projection ratio used for conversion between the reference image and the indoor image. It can be understood that the reference mapping information can also be of further kinds, i.e. those skilled in the art can set other reference mapping information by referring to the way the reference mapping information shown above is set, which is not limited here (a minimal numeric sketch of the scale relationship follows below).
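A minimal sketch of the scale relationship described above, assuming that one segment (such as AC in the later example) has a known real length taken from the indoor image and a measurable pixel length in the photo; the numeric values are invented for illustration.

```python
def scale_relationship(real_length_m, pixel_length_px):
    """Scale f (metres per pixel) between the indoor image and the photo,
    derived from one segment whose true length is known from the indoor image."""
    return real_length_m / pixel_length_px

def pixel_to_metres(pixel_length_px, f):
    """Convert a pixel length measured in the photo to metres via the scale f."""
    return pixel_length_px * f

# Example values (invented): segment AC is 0.9 m in the indoor image and
# spans 300 pixels in the photo, so f = 0.003 m/pixel.
f = scale_relationship(0.9, 300)
print(pixel_to_metres(150, f))   # a 150-pixel distance corresponds to 0.45 m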
  • Step S1312 includes but is not limited to steps S13121 to S13122.
  • Step S13121 Determine the first position coordinates of the target reference object according to the first position information, determine the relative position parameters corresponding to the target reference object according to the relative position information, and determine the second position coordinates of the target reference object according to the second position information;
  • Step S13122 Calculate the first position coordinates, the second position coordinates, the reference mapping information and the relative position parameters using geometric calculation methods to obtain the target position information of the target to be located in the indoor image.
  • In this step, by determining the first position coordinates of the target reference object, the relative position parameters corresponding to the target reference object and the second position coordinates of the target reference object, the actual position of the target reference object can be obtained, so that the target position information of the target to be located in the indoor image can be calculated accurately from this actual position, the reference mapping information and the relative position parameters.
  • For example, the first position coordinates may be the physical coordinates of the target reference object in the reference image, where physical coordinates refer to the coordinates of the target reference object in the world coordinate system, i.e. its absolute coordinates in the world coordinate system. As another example, the first position coordinates may be the pixel coordinates of the target reference object in the reference image, i.e. its relative coordinates in the reference-image coordinate system; when the coordinates of the reference image in the world coordinate system are known, the pixel coordinates of the target reference object in the reference image can be converted into its physical coordinates. That is to say, the first position coordinates can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select the corresponding presentation mode of the first position coordinates according to the specific application scenario.
  • the relative position parameters may be presented in various forms, which are not limited here.
  • the relative position parameter may be the distance parameter between the target reference object and the target to be located, or the relative position parameter of the target reference object to the target projection, etc.
  • The second position parameter can also be presented in a variety of forms, which is not limited here. For example, the second position parameter may be the physical coordinates of the target reference object in the indoor image, where physical coordinates refer to the coordinates of the target reference object in the world coordinate system, i.e. its absolute coordinates in the world coordinate system. As another example, the second position parameter may be the pixel coordinates of the target reference object in the indoor image, i.e. its relative coordinates in the indoor-image coordinate system; when the coordinates of the indoor image in the world coordinate system are known, the pixel coordinates of the target reference object in the indoor image can be converted into its physical coordinates. That is to say, the second position parameter can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select the corresponding presentation mode of the second position parameter according to the specific application scenario.
  • the target to be positioned in this example is set to a base station to be positioned in a real indoor positioning environment. Without loss of generality, the base station to be positioned is assumed to be a particle.
  • AB is the wall line
  • the base station X to be positioned is on the wall line AB.
  • indoor CAD drawings are used to selectively extract indoor wall lines, columns, walls and other position parameters to draw an indoor map.
  • the matching includes matching the reference object information in the photo and the reference object information in the indoor map.
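In the full description of this example, the pixel lengths AX and XB are measured in the photo, the real coordinates of A and B are read from the indoor map, and the position of X on the wall line AB then follows from geometry. The sketch below is one plausible reading of that geometric step, assuming X lies on the segment AB and divides it in the ratio of the measured pixel lengths; the exact formula used in the patent is not reproduced in this text, so this is an assumption, and the numeric values are invented.

```python
def locate_on_wall_line(A, B, ax_px, xb_px):
    """Position of X on wall line AB from the pixel lengths AX and XB.

    A and B are the real coordinates of the wall-line endpoints taken from
    the indoor map; ax_px and xb_px are the pixel lengths measured in the
    photo. X is assumed to lie on the segment AB, so it divides AB in the
    ratio ax_px : xb_px.
    """
    t = ax_px / (ax_px + xb_px)
    return (A[0] + t * (B[0] - A[0]), A[1] + t * (B[1] - A[1]))

# Example values (invented): A(1, 2), B(9, 2); AX = 120 px, XB = 280 px.
print(locate_on_wall_line((1, 2), (9, 2), 120, 280))   # -> (3.4, 2.0)
```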
  • the target to be positioned in this example is set to a base station to be positioned in a real indoor positioning environment. Without loss of generality, the base station to be positioned is assumed to be a particle.
  • the base station X to be calibrated is located at a certain point on the wall.
  • a and C are the two vertex points on the door frame of door 1
  • B is the vertex point on the door frame of door 2.
  • indoor CAD drawings are used to selectively extract indoor wall lines, doors, columns, walls and other position parameters to draw an indoor map.
  • The matching includes matching the reference-object information in the photo with the reference-object information in the indoor map. Determining the scale relationship includes: measuring the pixel length of AC in the photo and obtaining the real length m of AC from the indoor map, giving the scale relationship f: m/pixel.
  • Then, the relative position parameters of X with respect to the reference objects in the photo are determined, i.e. the pixel lengths of XA, XB and XC in the photo are obtained, giving the relative position parameters g: XA, XB, XC (in pixels).
  • Then, the actual position of X in the indoor map is determined, i.e. the actual lengths of XA, XB and XC in the indoor map are obtained from the relative position relationship g and the scale relationship f, each actual length being expressed as f·g (m). That is to say, by obtaining the real position coordinates of A, B and C from the indoor map and combining them with geometric knowledge, the actual position of the base station X to be calibrated in the indoor map can be determined.
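The "geometric knowledge" step of this example is not spelled out in this text. One way to carry it out, assuming the real distances XA, XB and XC obtained as f·g and the real coordinates of A, B and C taken from the indoor map, is a linearised least-squares intersection of the three distance constraints, as sketched below with invented values; the patent does not prescribe this particular algorithm.

```python
import numpy as np

def locate_from_distances(anchors, distances):
    """Least-squares position of X from known anchor points and distances.

    anchors: real coordinates of A, B, C taken from the indoor map;
    distances: the corresponding real lengths XA, XB, XC obtained as f * g
    (scale times pixel length). Subtracting the first distance equation from
    the others linearises the problem in (x, y).
    """
    (x0, y0), d0 = anchors[0], distances[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        rows.append([2 * (xi - x0), 2 * (yi - y0)])
        rhs.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
    return tuple(sol)

# Example values (invented): anchors A(0, 0), B(4, 0), C(0, 3); true X at (1, 1).
d = [np.hypot(1, 1), np.hypot(3, 1), np.hypot(1, 2)]
print(locate_from_distances([(0, 0), (4, 0), (0, 3)], d))   # ≈ (1.0, 1.0)
```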
  • one embodiment of the present application also includes but is not limited to steps S150 to S180.
  • Step S150 Reacquire reference images showing the target to be located and the target reference object from different angles, and reacquire the first position information of the target reference object and the relative position information of the target reference object to the target to be located from the reference image;
  • Step S160 Re-obtain the second position information of the target reference object from the pre-generated indoor image
  • Step S170 Re-determine the target position information of the target to be located in the indoor image based on the first position information, relative position information and second position information;
  • Step S180 Obtain the positioning position information of the target to be positioned in the indoor image based on the obtained plurality of target position information.
  • In this step, considering that the target position information obtained from a single sampling calculation may contain a certain error, reference images showing the target to be located and the target reference object are re-acquired from different angles in order to reduce that error, so that different reference images are obtained. Since the indoor image is pre-generated and fixed, the target position information of the target to be located in the indoor image can be recalculated from each newly obtained reference image and the indoor image, yielding multiple pieces of target position information from which the positioning position information of the target to be located in the indoor image is obtained. The positioning position information obtained in this way is comparatively more accurate and reduces the measurement error that may result from a small number of measurements.
  • Re-acquiring reference images showing the target to be located and the target reference object from different angles can be, but is not limited to: taking a frontal photo of the target to be located and the target reference object as in step S1121, taking a side photo of them, or photographing them at other preset angles, which is not limited here.
  • In addition to obtaining different reference images from different angles, different reference images can also be obtained, but not limited to, by changing the type or number of target reference objects, or those skilled in the art can choose how to obtain different reference images according to the specific scenario, which is not limited here.
  • The number of times the target position information of the target to be located in the indoor image is determined is not limited; generally speaking, it can be made as large as the acceptable workload allows, in order to obtain an accurate measurement and calculation result.
  • The specific implementation of steps S150 to S170 is similar to that of steps S110 to S130, the only difference being the angle from which the reference image is acquired, which does not limit either positioning procedure; the specific implementation of steps S150 to S170 can therefore refer to that of steps S110 to S130. Since the specific implementation of steps S110 to S130 has been described in detail in the foregoing embodiments, it is not repeated here to avoid redundancy.
  • Step S180 also includes but is not limited to step S181.
  • Step S181 Obtain the average value of multiple target location information based on the obtained multiple target location information, and obtain the positioning location information of the target to be located in the indoor image.
  • In this step, when multiple pieces of target position information have been obtained, a mean calculation is used: by taking the average of the multiple pieces of target position information, the positioning position information of the target to be located in the indoor image under multiple measurement calculations is obtained. That is to say, using the average of the multiple pieces of target position information as the final positioning position information of the target to be located in the indoor image reduces the positioning error of the target to be located and improves its positioning accuracy.
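A trivial sketch of the mean calculation described in step S181, with invented example estimates; in practice each estimate would come from one photographing angle.

```python
def average_position(estimates):
    """Mean of repeated target-position estimates (one per photographing angle)."""
    xs = [p[0] for p in estimates]
    ys = [p[1] for p in estimates]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Example (invented values): three estimates of X from three different angles.
print(average_position([(3.38, 2.01), (3.42, 1.98), (3.40, 2.02)]))  # ≈ (3.4, 2.0)
```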
  • In addition to mean calculation, the positioning position information of the target to be located in the indoor image can also be obtained by, but not limited to, variance calculation, standard deviation calculation, probability distribution calculation and the like, which is not limited here.
  • the target to be positioned in this example is set to a base station to be positioned in a real indoor positioning environment. Without loss of generality, the base station to be positioned is assumed to be a particle.
  • AB is the wall line
  • the base station X to be positioned is on the wall line AB.
  • the matching includes matching the reference object information in the photo and the reference object information in the indoor map.
  • the target to be positioned in this example is set to a base station to be positioned in a real indoor positioning environment. Without loss of generality, the base station to be positioned is assumed to be a particle.
  • the base station X to be calibrated is located at a certain point on the wall, and A, C and B are the vertex points on the two door frames respectively.
  • Determining the scale relationship includes: measuring the pixel length of AC in the photo and obtaining the real length m of AC from the indoor map, giving the scale relationship f: m/pixel.
  • Then, the relative position parameters of X with respect to the reference objects in the photo are determined, i.e. the pixel lengths of XA, XB and XC in the photo are obtained, giving the relative position parameters g: XA, XB, XC (in pixels).
  • Then, the actual position of X in the indoor map is determined, i.e. the actual lengths of XA, XB and XC in the indoor map are obtained from the relative position relationship g and the scale relationship f, each actual length being expressed as f·g (m). That is to say, by obtaining the real position coordinates of A, B and C from the indoor map and combining them with geometric knowledge, the actual position of the base station X to be calibrated in the indoor map can be determined. The photographing is then repeated from different angles, and the positions of X obtained for the different angles are averaged to give the final positioning position of the base station X in the indoor map.
  • Step C100 Use indoor CAD drawings to extract indoor contour parameters to draw indoor images, so as to further use the drawn indoor images for comparison calculations;
  • Step C200 Select the target reference object to take a photo.
  • the photo contains the complete target reference object, so that the photo can display the target to be located and the target reference object at the same time, so that information about the target to be located and the target reference object can be accurately and reliably extracted from the photo.
  • Step C300: Process the photo taken, match the position information of the target reference object obtained from the processing with the indoor image, and obtain the mapping relationship between the target reference object in the photo and the target reference object in the indoor image, so that the distinguishing parameters of the target reference object between the indoor image and the photo can be further determined on the basis of this mapping relationship;
  • Step C400 Determine the relative positional relationship between the target to be located and the target reference object in the photo, so as to further perform conversion calculations based on the relative positional relationship;
  • Step C500 Determine the actual position of the target to be located in the indoor image, thereby obtaining the position information of the target to be located in the indoor image in the case of a single positioning;
  • Step C600: Change the photographing angle and return to step C200, so as to obtain statistical position information of the target to be located in the indoor image over multiple positionings. Compared with the position information calculated from a single positioning, multiple positionings can effectively reduce the error that a single positioning may contain and improve the positioning accuracy.
  • In addition, as shown in Figure 13, one embodiment of the present application also discloses an electronic device 100, including: at least one processor 110; and at least one memory 120 for storing at least one program; when the at least one program is executed by the at least one processor 110, the positioning method of any of the preceding embodiments is implemented.
  • an embodiment of the present application also discloses a computer-readable storage medium in which computer-executable instructions are stored, and the computer-executable instructions are used to execute the positioning method as in any of the previous embodiments.
  • In addition, an embodiment of the present application also discloses a computer program product, which includes a computer program or computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer program or the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the positioning method of any of the preceding embodiments.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, tapes, disk storage or other magnetic storage devices, or may Any other medium used to store the desired information and that can be accessed by a computer.
  • communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media .

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Civil Engineering (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a positioning method, an electronic device, a storage medium and a program product. The positioning method includes: acquiring a reference image showing a target to be located and at least one target reference object, acquiring first position information of the target reference object in the reference image, and acquiring relative position information of the target to be located relative to the target reference object in the reference image; acquiring second position information of the target reference object in a pre-generated indoor image; and determining, according to the first position information, the relative position information and the second position information, target position information of the target to be located in the indoor image.

Description

Positioning method, electronic device, storage medium and program product
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority to Chinese patent application No. 202210608266.9, filed on 31 May 2022, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of positioning technology, and in particular to a positioning method, an electronic device, a computer storage medium and a computer program product.
BACKGROUND
With the large-scale commercial deployment of 5G and the rapid development of the mobile Internet, indoor positioning based on 5G base stations has received increasing attention. To obtain a high-accuracy positioning effect, and regardless of the specific positioning technology used, indoor positioning requires the position coordinates of the positioning base stations to be surveyed accurately. However, indoor environments are heavily occluded and signal propagation attenuation is large, so even a small error in the base station position introduces a large positioning error; moreover, indoor positioning base stations are often installed on ceilings, wall lines, columns and other locations that are hard to reach, so they are difficult to measure directly, and they may also be blocked by other objects, which makes the positioning measurement still more difficult. In addition, the layout of items in an indoor scene may change, which involves frequent correction of the base station positions, and each re-measurement brings a considerable workload; all of these factors therefore affect the accuracy of the result of positioning the base stations. Current base station calibration equipment includes laser rangefinders, total stations and the like; such equipment needs repeated calibration before it can be used in a specific scenario, and the user needs to be thoroughly familiar with its performance, which brings additional workload.
SUMMARY
Embodiments of the present application provide a positioning method, an electronic device, a computer storage medium and a computer program product, which can effectively reduce the workload of positioning a target and improve the accuracy of the positioning result.
In a first aspect, embodiments of the present application provide a positioning method, the positioning method including:
acquiring a reference image showing a target to be located and at least one target reference object, acquiring first position information of the target reference object in the reference image, and acquiring relative position information of the target to be located relative to the target reference object in the reference image;
acquiring second position information of the target reference object in a pre-generated indoor image;
determining, according to the first position information, the relative position information and the second position information, target position information of the target to be located in the indoor image.
In a second aspect, embodiments of the present application also provide an electronic device, including: at least one processor; and at least one memory for storing at least one program; when the at least one program is executed by the at least one processor, the positioning method described above is implemented.
In a third aspect, embodiments of the present application also provide a computer-readable storage medium in which a processor-executable program is stored; when the processor-executable program is executed by a processor, it is used to implement the positioning method described above.
In a fourth aspect, embodiments of the present application also provide a computer program product, including a computer program or computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer program or the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the positioning method described above.
In the embodiments of the present application, there is no need to use other special positioning equipment and no need to consider the influence of the layout of the indoor environment: indoor positioning can be achieved simply by obtaining a reference image showing the target to be located and at least one target reference object together with a pre-generated indoor image, which can greatly reduce the workload of indoor positioning. That is to say, the reference image is matched with the indoor image, the first position information of the target reference object in the reference image and the second position information of the target reference object in the indoor image are obtained, and the reference mapping information between the target reference object in the reference image and the target reference object in the indoor image is thereby derived, so that, based on this reference mapping information and the determined relative position information of the target to be located relative to the target reference object in the reference image, the target position information of the target to be located in the indoor image is obtained accurately. Therefore, the embodiments of the present application can effectively reduce the workload of positioning a target and improve the accuracy of the positioning result, thereby filling the technical gap in related methods.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a flowchart of a positioning method provided by an embodiment of the present application;
Figure 2 is a flowchart of obtaining a reference image showing a target to be located and at least one target reference object in a positioning method provided by an embodiment of the present application;
Figure 3 is a flowchart of obtaining a reference image showing a target to be located and a target reference object in a positioning method provided by an embodiment of the present application;
Figure 4 is a flowchart of obtaining the relative position information of a target to be located relative to a target reference object in the reference image in a positioning method provided by an embodiment of the present application;
Figure 5 is a flowchart of generating an indoor image in a positioning method provided by an embodiment of the present application;
Figure 6 is a flowchart of using geometric calculation to determine the target position information of a target to be located in an indoor image in a positioning method provided by an embodiment of the present application;
Figure 7 is a flowchart of using geometric calculation to determine the target position information of a target to be located in an indoor image in a positioning method provided by another embodiment of the present application;
Figure 8 is a schematic diagram of an application scenario of a positioning method provided by an embodiment of the present application;
Figure 9 is a schematic diagram of an application scenario of a positioning method provided by another embodiment of the present application;
Figure 10 is a flowchart of a positioning method provided by another embodiment of the present application;
Figure 11 is a flowchart of obtaining the positioning position information of a target to be located in an indoor image in a positioning method provided by an embodiment of the present application;
Figure 12 is a schematic flowchart of the execution of a positioning method provided by an embodiment of the present application;
Figure 13 is a schematic diagram of an electronic device provided by an embodiment of the present application.
具体实施方式
为了使本申请的目的、技术方法及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。
需要说明的是,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于流程图中的顺序执行所示出或描述的步骤。说明书和权利要求书及上述附图中的术语“第一”、 “第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。
本申请提供了一种定位方法、电子设备、计算机存储介质及计算机程序产品。其中一个实施例的定位方法,包括:获取显示有待定位目标和至少一个目标参照物的参考图像,获取目标参照物在参考图像中的第一位置信息,以及获取待定位目标在参考图像中相对于目标参照物的相对位置信息;获取目标参照物在预生成的室内图像中的第二位置信息;根据第一位置信息、相对位置信息和第二位置信息,确定待定位目标在室内图像中的目标位置信息。该实施例中,无需使用其他专用定位设备,也无需考虑室内环境的布置影响,仅通过获取显示有待定位目标和至少一个目标参照物的参考图像以及预生成的室内图像即可实现室内定位,能够大大减小室内定位的工作量,也就是说,将参考图像与室内图像进行匹配,从而通过获取目标参照物在参考图像中的第一位置信息以及目标参照物在室内图像中的第二位置信息,以得到参考图像中的目标参照物与室内图像中的目标参照物之间的参照映射信息,以便于基于该参照映射信息和已确定的待定位目标在参考图像中相对于目标参照物的相对位置信息,准确地获取到待定位目标在室内图像中的目标位置信息。因此,本申请实施例能够有效减小定位目标的工作量,提高定位结果精度,从而可以弥补相关方法中的技术空白。
下面结合附图,对本申请实施例作进一步阐述。
如图1所示,图1是本申请一个实施例提供的定位方法的流程图,该定位方法可以包括但不限于步骤S110至步骤S130。
步骤S110:获取显示有待定位目标和至少一个目标参照物的参考图像,获取目标参照物在参考图像中的第一位置信息,以及获取待定位目标在参考图像中相对于目标参照物的相对位置信息。
本步骤中,通过获取显示有待定位目标和至少一个目标参照物的参考图像,以便于从获取到的参考图像中确定目标参照物在参考图像中的第一位置信息,以及从获取到的参考图像中确定待定位目标在参考图像中相对于目标参照物的相对位置信息,也就是说,可以获取到目标参照物在参考图像中的不同的位置信息,以便于在后续步骤中根据该不同的位置信息进一步确定待定位目标在室内的位置。
在一实施例中,待定位目标可以为多种,此处并不限定。例如,待定位目标可以但不限于为待定标的基站、发送终端、接入终端、网络控制器、调制器以及服务单元等,当待定位目标为发送终端或接入终端时,其可以但不限于为用户设备(User Equipment,UE)、用户单元、用户站、移动站、移动台、远方站、远程终端、移动设备、用户终端、无线通信设备、用户代理或用户装置等。
在一实施例中,目标参照物的类型在此处并未限定。例如,目标参照物可以但不限于为待定位目标所在的室内空间中的柱子、墙角、墙线、门、窗等。
在一实施例中,目标参照物的数量在此处并未限定。例如,当确定一个目标参照物时,则获取的参考图像中只需显示一个目标参照物即可,类似地,当确定两个目标参照物时,则获取的参考图像中需要同时显示出该两个目标参照物,也就是说,获取到的参考图像中需要将所有确定的目标参照物均显示出来。
在一实施例中,参考图像的获取方式,或者说参考图像的呈现形式可以为多种,此处并未限定。例如,可以通过拍照的方式获取参考图像,此时的参考图像即为拍摄的照片,具体地,在拍摄时,将待定位目标和所选定的目标参照物包含在相机、手机等拍摄设备的拍摄取 景里,那么所拍摄得到的照片内就能够显示待定位目标和所选定的目标参照物,其中,可以但不限于采用图像处理、模式识别等技术处理所拍摄的照片,提取照片内的目标参照物的第一位置信息。
在一实施例中,第一位置信息的呈现形式可以为多种,此处并未限定。例如,第一位置信息可以为目标参照物在参考图像中的物理坐标信息,此处的物理坐标信息指的是目标参照物在世界坐标系下的坐标信息,也即目标参照物在世界坐标系下的绝对坐标;又如,第一位置信息可以为目标参照物在参考图像中的像素坐标信息,此处的像素坐标信息指的是目标参照物在参考图像坐标系下的坐标信息,也即目标参照物在参考图像坐标系下的相对坐标,在确定参考图像在世界坐标系下的坐标信息的情况下,可以据此将目标参照物在参考图像中的像素坐标信息转化为目标参照物在参考图像中的物理坐标信息,也就是说,第一位置信息在具体应用场景中可以有多种呈现方式,本领域技术人员可以根据具体应用场景选择第一位置信息相应的呈现方式进行设置。
在一实施例中,相对位置信息的呈现形式可以为多种,此处并未限定。例如,相对位置信息可以为目标参照物与待定位目标之间的距离,此处的距离即为在参考图像中的目标参照物与待定位目标之间的相对距离;又如,相对位置信息可以为目标参照物对于目标投影的相对位置,此处的目标投影即为待定位目标在目标参照物中的投影;又如,相对位置信息可以为目标参照物与待定位目标之间的距离以及目标参照物对于目标投影的相对位置,也就是说,可以通过此两种相对位置关系的配合表征相对位置信息。
在一实施例中,可以但不限于采用图像处理、模式识别等技术确定相对位置关系。
如图2所示,本申请的一个实施例,对步骤S110中的“获取显示有待定位目标和至少一个目标参照物的参考图像”进行进一步说明,步骤S110包括但不限于步骤S111和S112。
步骤S111:确定与待定位目标处于同一平面内的至少一个目标参照物;
步骤S112:对待定位目标和至少一个目标参照物进行拍照,得到显示有待定位目标和目标参照物的参考图像。
本步骤中,通过确定与待定位目标处于同一平面内的至少一个目标参照物,以便于对待定位目标和至少一个目标参照物进行拍照,即由于待定位目标和目标参照物处于同一平面内,两者之间不存在平面上的差异,因此对上述两者进行拍照时,可以很容易地通过简易化的镜头拍摄将两者包含在拍摄范围内,从而在拍摄范围内拍出符合要求的照片,得到显示有待定位目标和目标参照物的参考图像。
在一实施例中,由于待定位目标和目标参照物处于同一平面内,因此待定位目标和目标参照物的物理坐标处于同一层级,也即两者位置不会出现空间维度不同的情况,这样便于在后续步骤中利用几何方式能够准确可靠地计算待定位目标在室内图像中的目标位置信息。
在一实施例中,待定位目标与目标参照物的相对空间关系还可以为更多种,本领域技术人员可以根据具体场景进行选择设置,此处并未限定。
如图3所示,本申请的一个实施例,对步骤S112进行进一步说明,步骤S112包括但不限于步骤S1121。
步骤S1121:对待定位目标和至少一个目标参照物进行正面拍照,得到显示有待定位目标和目标参照物的参考图像。
本步骤中,通过对待定位目标和至少一个目标参照物进行正面拍照,可以获得能够较为 直观地显示待定位目标和目标参照物的参考图像,也就是说,通过正面拍照能够将待定位目标和目标参照物的整体部分(包括可能不容易关注到的细节)较为完整地呈现出来,这样后续对参考图像进行解析以获取相关的位置信息参数,比如第一位置信息、相对位置信息时,保证具有很高的解析精确度,有利于从中获取到更为精确的位置信息参数。
在一实施例中,对待定位目标和目标参照物拍照的角度及方式还可以有更多种,例如从各个侧面、设定的某个角度等进行拍照,或者,本领域技术人员也可以根据具体应用场景来选择对待定位目标和目标参照物拍照的角度及方式,此处并未限定。
如图4所示,本申请的一个实施例,对步骤S110中的“获取待定位目标在参考图像中相对于目标参照物的相对位置信息”进行进一步说明,步骤S110包括但不限于步骤S113和S114。
步骤S113:从参考图像中获取目标参照物的像素坐标信息和待定位目标的像素坐标信息;
步骤S114:根据目标参照物的像素坐标信息和待定位目标的像素坐标信息,确定目标参照物对于待定位目标的相对位置信息。
本步骤中,通过从参考图像中分别获取目标参照物的像素坐标信息和待定位目标的像素坐标信息,从而能够根据目标参照物的像素坐标信息和待定位目标的像素坐标信息之间的差异以确定相对位置信息,由于目标参照物的像素坐标信息和待定位目标的像素坐标信息能够被准确地所确定,也就是说,获取到的像素坐标信息的准确率较高,因此基于像素坐标信息而最终确定的相对位置信息的准确率较高。
步骤S120:获取目标参照物在预生成的室内图像中的第二位置信息。
本步骤中,由于室内图像为室内空间所对应的图像,例如可以但不限于为CAD图、Pro/Engineer图以及其他二维或三维图等,也就是说,为室内空间的实际呈现的图像,且室内图像为预先生成好的,因此可以从室内图像中获取目标参照物在预生成的室内图像中的第二位置信息,以便于在后续步骤中进一步地将第二位置信息与第一位置信息进行比较。
在一实施例中,室内图像的生成或绘制方式可以为多种,此处不限定。例如,如图5所示,室内图像可以但不限于基于如下步骤S200和步骤S300生成。
步骤S200:获取室内空间的室内位置参数。
步骤S300:根据室内位置参数生成与室内空间对应的室内图像。
本步骤中,由于室内位置参数为与室内空间相关联的位置参数,因此通过获取室内空间内的室内位置参数,可以确定室内空间的整体分布,从而能够根据室内位置参数准确地生成与室内空间对应的室内图像,以便于根据生成的室内图像确定目标参照物在室内图像中的第二位置信息。
在一实施例中,可以但不限于采用二维或三维的CAD绘图软件执行步骤S300以生成符合要求的室内图像,或者,采用与CAD绘图软件类似的、具有相似功能的其他绘图软件进行绘制,此处并未限制。
在一实施例中,室内位置参数可以为多种,此处不限定。例如,室内位置参数包括如下至少之一:
墙线位置参数;
门位置参数;
立柱位置参数;
墙壁位置参数;
在一实施例中,室内位置参数可以为不同的多个,不同的室内位置参数体现了室内空间的不同构成方面,对于具体场景中的位置参数,可以根据实际情况选择其中的一个或多个作为整体的室内位置参数,例如,选择室内墙线、墙角、立柱、门、窗等大尺度物体的位置参数以完成室内图像的绘制,此处并未限制。
在一实施例中,第二位置信息的呈现形式可以为多种,此处并未限定。例如,第二位置信息可以为目标参照物在室内图像中的物理坐标信息,此处的物理坐标信息指的是目标参照物在世界坐标系下的坐标信息,也即目标参照物在世界坐标系下的绝对坐标;又如,第二位置信息可以为目标参照物在室内图像中的像素坐标信息,此处的像素坐标信息指的是目标参照物在室内图像坐标系下的坐标信息,也即目标参照物在室内图像坐标系下的相对坐标,在确定室内图像在世界坐标系下的坐标信息的情况下,可以据此将目标参照物在室内图像中的像素坐标信息转化为目标参照物在室内图像中的物理坐标信息,也就是说,第二位置信息在具体应用场景中可以有多种呈现方式,本领域技术人员可以根据具体应用场景选择第二位置信息相应的呈现方式进行设置。
本申请的一个实施例,步骤S110之前还包括但不限于步骤S140。
步骤S140:在室内图像中选择至少一个目标参照物。
本步骤中,由于室内图像为预先生成的,因此可以同样地预先在室内图像中选择所需求的至少一个目标参照物,以便于在后续获取参考图像时可以不用再重新或另外选择目标参照物,因此能够减小获取参考图像时的工作量,即有利于更有效可靠地获取到符合要求的参考图像。
步骤S130:根据第一位置信息、相对位置信息和第二位置信息,确定待定位目标在室内图像中的目标位置信息。
本步骤中,无需使用其他专用定位设备,也无需考虑室内环境的布置影响,仅通过获取显示有待定位目标和至少一个目标参照物的参考图像以及预生成的室内图像即可实现室内定位,能够大大减小室内定位的工作量,也就是说,将参考图像与室内图像进行匹配,从而通过获取目标参照物在参考图像中的第一位置信息以及目标参照物在室内图像中的第二位置信息,以得到参考图像中的目标参照物与室内图像中的目标参照物之间的参照映射信息,以便于基于该参照映射信息和已确定的待定位目标在参考图像中相对于目标参照物的相对位置信息,准确地获取到待定位目标在室内图像中的目标位置信息。因此,本申请实施例能够有效减小定位目标的工作量,提高定位结果精度,从而可以弥补相关方法中的技术空白。
在一实施例中,目标位置信息的呈现形式可以为多种,此处并未限定。例如,目标位置信息可以为待定位目标在室内图像中的物理坐标信息,此处的物理坐标信息指的是待定位目标在世界坐标系下的坐标信息,也即待定位目标在世界坐标系下的绝对坐标;又如,目标位置信息可以为待定位目标在室内图像中的像素坐标信息,此处的像素坐标信息指的是待定位目标在室内图像坐标系下的坐标信息,也即待定位目标在室内图像坐标系下的相对坐标,在确定室内图像在世界坐标系下的坐标信息的情况下,可以据此将待定位目标在室内图像中的像素坐标信息转化为待定位目标在室内图像中的物理坐标信息,以便于能够更加准确地确定待定位目标的实际位置,有利于对其进行可能需要的维修、更换,也就是说,目标位置信息在具体应用场景中可以有多种呈现方式,本领域技术人员可以根据具体应用场景选择目标位置信息相应的呈现方式进行设置。
本申请的一个实施例,对步骤S130进行进一步说明,步骤S130包括但不限于步骤S131。
步骤S131:根据第一位置信息、相对位置信息和第二位置信息,采用几何计算方式确定待定位目标在室内图像中的目标位置信息。
本步骤中,由于第一位置信息、相对位置信息和第二位置信息分别指示了待定位目标、待定位目标与目标参照物之间以及目标参照物的几何位置,因此可以采用几何计算方式以进一步通过计算几何位置确定待定位目标在室内图像中的目标位置信息。
在一实施例中,几何计算方式的具体手段不限定,本领域技术人员可以根据实际应用场景进行选择计算。例如,将各个位置信息输入到预先设定好的几何计算程序中,通过几何计算程序输出待定位目标在室内图像中的目标位置信息的结果;又如,由外部操作人员根据获取到的位置信息设定对应的几何计算方式等。
如图6所示,本申请的一个实施例,对步骤S131进行进一步说明,步骤S131包括但不限于步骤S1311至S1312。
步骤S1311:根据第一位置信息和第二位置信息,确定参考图像中的目标参照物与室内图像中的目标参照物之间的参照映射信息;
步骤S1312:根据第一位置信息、第二位置信息、参照映射信息和相对位置信息,采用几何计算方式确定待定位目标在室内图像中的目标位置信息。
本步骤中,由于第一位置信息和第二位置信息体现了目标参照物的不同的区别位置,因此通过第一位置信息和第二位置信息可以确定参考图像中的目标参照物与室内图像中的目标参照物之间的参照映射信息,进而根据第一位置信息、第二位置信息、参照映射信息和相对位置信息,采用几何计算方式可以准确地确定待定位目标在室内图像中的目标位置信息。
在一实施例中,参照映射信息包括如下至少之一:
比例尺关系;
投影关系。
其中,比例尺关系表征参考图像与室内图像之间进行换算的比例尺,投影关系表征参考图像与室内图像之间进行换算的投影比例,可以理解地是,参照映射信息还可以为更多种,即本领域技术人员可以参照如上所示的参照映射信息的设置方式设置其他的参照映射信息,此处并未限定。
如图7所示,本申请的一个实施例,对步骤S1312进行进一步说明,步骤S1312包括但不限于步骤S13121至S13122。
步骤S13121:根据第一位置信息确定目标参照物的第一位置坐标,以及根据相对位置信息确定与目标参照物对应的相对位置参数,以及根据第二位置信息确定目标参照物的第二位置坐标;
步骤S13122:采用几何计算方式对第一位置坐标、第二位置坐标、参照映射信息和相对位置参数计算,得到待定位目标在室内图像中的目标位置信息。
本步骤中,通过确定目标参照物的第一位置坐标、与目标参照物对应的相对位置参数以及目标参照物的第二位置坐标,可以得到目标参照物的实际位置,从而能够基于目标参照物的实际位置、参照映射信息和相对位置参数,准确地计算得到待定位目标在室内图像中的目标位置信息。
在一实施例中,第一位置坐标可以为目标参照物在参考图像中的物理坐标,此处的物理 坐标指的是目标参照物在世界坐标系下的坐标,也即目标参照物在世界坐标系下的绝对坐标;又如,第一位置坐标可以为目标参照物在参考图像中的像素坐标,此处的像素坐标指的是目标参照物在参考图像坐标系下的坐标,也即目标参照物在参考图像坐标系下的相对坐标,在确定参考图像在世界坐标系下的坐标的情况下,可以据此将目标参照物在参考图像中的像素坐标转化为目标参照物在参考图像中的物理坐标,也就是说,第一位置坐标在具体应用场景中可以有多种呈现方式,本领域技术人员可以根据具体应用场景选择第一位置坐标相应的呈现方式进行设置。
在一实施例中,相对位置参数的呈现形式可以为多种,此处并未限定。例如,相对位置参数可以为目标参照物与待定位目标之间的距离参数,也可以为目标参照物对于目标投影的相对位置参数等。
在一实施例中,第二位置参数的呈现形式可以为多种,此处并未限定。例如,第二位置参数可以为目标参照物在室内图像中的物理坐标,此处的物理坐标指的是目标参照物在世界坐标系下的坐标,也即目标参照物在世界坐标系下的绝对坐标;又如,第二位置参数可以为目标参照物在室内图像中的像素坐标,此处的像素坐标指的是目标参照物在室内图像坐标系下的坐标,也即目标参照物在室内图像坐标系下的相对坐标,在确定室内图像在世界坐标系下的坐标信息的情况下,可以据此将目标参照物在室内图像中的像素坐标转化为目标参照物在室内图像中的物理坐标,也就是说,第二位置参数在具体应用场景中可以有多种呈现方式,本领域技术人员可以根据具体应用场景选择第二位置参数相应的呈现方式进行设置。
以下给出一种具体示例以说明上述各实施例的工作原理及流程。
示例一:
需要说明的是,为了清晰阐述本申请实施例的方法,该示例中的待定位目标设定为真实室内定位环境下的待定位基站,同时不失一般性,将待定位基站假设成一质点。
如图8所示,在一个室内房间中,AB为墙线,待定位基站X处于墙线AB上。
首先,利用室内CAD图选择性地提取室内墙线、立柱、墙壁等位置参数,绘制室内地图。
然后,选定X所在的墙线AB作为参照物,利用像机正对墙线方向对X进行拍照,拍照时确保照片里需要包含X的同时还完整包括X所在的墙线AB,同理,由于门1和门2不作为参照物,因此照片里不用一定包括门1和门2。
然后,处理所拍摄的照片,提取照片内墙线AB的信息,将该信息与第一步中所得到的室内地图进行匹配,得到照片中的参照物与室内地图中的参照物之间的映射关系。此处的匹配,包括对照片中的参照物信息和室内地图中的参照物信息进行匹配。
然后,确定照片中的X与参照物的相对位置参数,并获取照片中AX、XB的像素长度,定义为X在墙线AB中的相对位置参数;
然后,确定X在室内地图中的实际位置,即根据照片中X与参照物的相对位置参数、照片中的参照物与室内地图中的参照物之间的映射关系,获取X在室内地图中的实际位置。
在一种实施方式中,记A(x1,y1)、B(x2,y2),则根据几何知识可以得到待定标基站X在室内地图中的实际位置为:
以下给出另一种具体示例以说明上述各实施例的工作原理及流程。
示例二:
需要说明的是,为了清晰阐述本申请实施例的方法,该示例中的待定位目标设定为真实室内定位环境下的待定位基站,同时不失一般性,将待定位基站假设成一质点。
如图9所示,在一个室内房间中,待定标基站X位于墙面某点,A、C为门1的门框上的两个顶角点,B为门2的门框上的顶角点。
首先,利用室内CAD图选择性地提取室内墙线、门、立柱、墙壁等位置参数,绘制室内地图。
然后,选定X所在墙面、门1和门2两个门框作为参照物,利用像机正对墙线方向对X进行拍照,拍照时确保照片里需要包含X的同时还完整包括所有的参照物,即包括墙面、门1和门2。
然后,处理所拍摄的照片,提取照片内墙面及各个门框信息,将该信息与第一步中所得到的室内地图进行匹配,得到照片中的参照物与室内地图中的参照物之间的比例尺关系。此处的匹配,包括对照片中的参照物信息和室内地图中的参照物信息进行匹配。其中,确定比例尺关系包括:
测量照片中AC的像素点长度,同时根据室内地图获取AC的真实长度m,则比例尺关系为f:m/像素点。
然后,确定照片中X与参照物的相对位置参数,即获取照片中XA、XB、XC的像素长度,则相对位置参数为g:XA、XB、XC(像素点);
然后,确定X在室内地图中的实际位置,即根据照片中X与参照物的相对位置关系g、照片中的参照物与室内地图中的比例尺关系f,确定XA、XB、XC在室内地图中的实际长度,表示为f·g(m),也就是说,通过室内地图获取A、B、C的真实位置坐标结合几何知识,可以确定待定标基站X在室内地图中的实际位置。
如图10所示,本申请的一个实施例,还包括但不限于步骤S150至S180。
步骤S150:从不同角度重新获取显示有待定位目标和目标参照物的参考图像,从参考图像中重新获取目标参照物的第一位置信息,以及目标参照物对于待定位目标的相对位置信息;
步骤S160:从预生成的室内图像中重新获取目标参照物的第二位置信息;
步骤S170:根据第一位置信息、相对位置信息和第二位置信息,重新确定待定位目标在室内图像中的目标位置信息;
步骤S180:根据得到的多个目标位置信息获取待定位目标在室内图像中的定位位置信息。
本步骤中,考虑到单次采样计算所得到的待定位目标在室内图像中的目标位置信息可能存在一定的误差,因此为了减小其误差,通过从不同角度重新获取显示有待定位目标和目标参照物的参考图像,可以获取到不同的参考图像,由于室内图像是预先生成且确定的,因此基于获取到的不同的参考图像与室内图像可以进行重新计算待定位目标在室内图像中的目标位置信息,从而能够获取到多个待定位目标在室内图像中的目标位置信息,以便于根据得到的多个目标位置信息获取待定位目标在室内图像中的定位位置信息,这样获取到的待定位目标在室内图像中的定位位置信息相对更加准确,能够降低因测量次数较少而可能带来的测量误差。
在一实施例中,从不同角度重新获取显示有待定位目标和目标参照物的参考图像,可以 但不限于为:如步骤S1121所示的对待定位目标和目标参照物进行正面拍照,或者对待定位目标和目标参照物进行侧面拍照,或者对待定位目标和目标参照物按照其他预设角度进行拍照等,此处并未限定。
在一实施例中,除了从不同角度可以获取到不同的参考图像之外,还可以但不限于通过更换目标参照物的类型、数量等来获取不同的参考图像,或者由本领域技术人员根据具体场景选择获取不同的参考图像的方式,此处并未限制。
In an embodiment, the number of times the target position information of the target to be located in the indoor image is determined is not limited; in general, it should be as large as the acceptable workload allows, so as to obtain an accurate measurement and calculation result.
In an embodiment, the specific implementation of steps S150 to S170 is similar to that of steps S110 to S130; the only difference is the angle from which the reference image is acquired, which does not constitute a limiting distinction between the two positioning procedures. Therefore, the specific implementation of steps S150 to S170 may refer to that of steps S110 to S130; since the specific implementation of steps S110 to S130 has already been described in detail in the foregoing embodiments, it is not repeated here for steps S150 to S170 to avoid redundancy.
As shown in Fig. 11, in an embodiment of the present application, step S180 is further described. Step S180 further includes, but is not limited to, step S181.
Step S181: obtaining, according to the plurality of pieces of target position information obtained, the average value of the plurality of pieces of target position information, to obtain the positioning position information of the target to be located in the indoor image.
In this step, when a plurality of pieces of target position information have been obtained, mean calculation is used: by taking the average of the plurality of pieces of target position information, the positioning position information of the target to be located in the indoor image under multiple measurements and calculations is obtained. In other words, taking the average of the plurality of pieces of target position information as the final positioning position information of the target to be located in the indoor image can reduce the positioning error for the target to be located and improve the positioning accuracy.
In an embodiment, besides mean calculation, the positioning position information of the target to be located in the indoor image may also be obtained by, for example but not limited to, variance calculation, standard-deviation calculation, probability-distribution calculation and the like, which is not limited here.
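As a minimal Python sketch of the mean-based fusion described in step S181 (the function name and sample values are illustrative assumptions):

def fuse_estimates(estimates):
    """Average several (x, y) target-position estimates, each obtained from a
    photograph taken at a different angle, into one positioning result."""
    n = len(estimates)
    return (sum(p[0] for p in estimates) / n, sum(p[1] for p in estimates) / n)


# Three single-shot estimates of the same base station to be located.
print(fuse_estimates([(3.02, 0.98), (2.97, 1.01), (3.00, 1.03)]))  # approx. (3.00, 1.01)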
A specific example is given below to illustrate the working principle and flow of the above embodiments.
Example 3:
It should be noted that, in order to describe the method of the embodiments of the present application clearly, the target to be located in this example is a base station to be located in a real indoor positioning environment, and, without loss of generality, the base station is treated as a point.
As shown in Fig. 8, in an indoor room, AB is a wall line and the base station X to be located lies on the wall line AB.
First, position parameters of indoor wall lines, columns, walls and the like are extracted from the indoor CAD drawing to draw the indoor map.
Then, the wall line AB on which X lies is selected as the reference object, and a photograph of X is taken with the camera facing the wall line; the photograph must contain X and also completely contain the wall line AB on which X lies.
Then, the photograph is processed, the information of the wall line AB in the photograph is extracted, and this information is matched against the indoor map obtained in the first step to obtain the mapping relationship between the reference object in the photograph and the reference object in the indoor map. The matching here includes matching the reference-object information in the photograph against the reference-object information in the indoor map.
Then, the relative position parameter between X and the reference object in the photograph is determined: the pixel lengths of AX and XB in the photograph are obtained and defined as the relative position parameter of X on the wall line AB.
Then, the actual position of X in the indoor map is determined: according to the relative position parameter between X and the reference object in the photograph and the mapping relationship between the reference object in the photograph and the reference object in the indoor map, the actual position of X in the indoor map is obtained.
In one implementation, let A(x1, y1) and B(x2, y2); then, by elementary geometry, the actual position of the base station X to be located in the indoor map is X = ((XB·x1 + AX·x2)/(AX + XB), (XB·y1 + AX·y2)/(AX + XB)), where AX and XB denote the pixel lengths measured in the photograph.
Then, X is photographed from different angles and the above steps of this example are repeated, so that the actual position of X in the indoor map is obtained for each angle; the actual positions of X corresponding to the multiple angles are averaged to obtain the average position of the base station X in the indoor map, and this average position is taken as the final positioning position of the base station X in the indoor map, i.e. X_final = (X_1 + X_2 + … + X_N)/N, where X_1, …, X_N are the positions obtained from the N photographing angles.
Another specific example is given below to illustrate the working principle and flow of the above embodiments.
Example 4:
It should be noted that, in order to describe the method of the embodiments of the present application clearly, the target to be located in this example is a base station to be located in a real indoor positioning environment, and, without loss of generality, the base station is treated as a point.
As shown in Fig. 9, in an indoor room, the base station X to be located lies at a point on a wall, and A, C and B are corner points on the two door frames.
First, position parameters of indoor wall lines, doors, columns, walls and the like are extracted from the indoor CAD drawing to draw the indoor map.
Then, the wall on which X lies and the two door frames are selected as reference objects, and a photograph of X is taken with the camera facing the wall; the photograph must contain X and also completely contain all the reference objects.
Then, the photograph is processed, the information of the wall and the door frames in the photograph is extracted, and this information is matched against the indoor map obtained in the first step to obtain the scale relationship between the reference objects in the photograph and the reference objects in the indoor map. The matching here includes matching the reference-object information in the photograph against the reference-object information in the indoor map. Determining the scale relationship includes:
measuring the pixel length of AC in the photograph and obtaining the real length m of AC from the indoor map; the scale relationship f is then m divided by that pixel length, i.e. metres per pixel.
Then, the relative position parameter between X and the reference objects in the photograph is determined: the pixel lengths of XA, XB and XC in the photograph are obtained, giving the relative position parameter g, namely the pixel lengths XA, XB and XC.
Then, the actual position of X in the indoor map is determined: according to the relative position relationship g between X and the reference objects in the photograph and the scale relationship f between the photograph and the indoor map, the actual lengths of XA, XB and XC in the indoor map are determined as f·g (in metres). In other words, using the real position coordinates of A, B and C obtained from the indoor map together with geometric knowledge, the actual position of the base station X to be located in the indoor map can be determined.
Then, X is photographed from different angles and the above steps of this example are repeated, so that the actual position of X in the indoor map is obtained for each angle; the actual positions of X corresponding to the multiple angles are averaged to obtain the average position of the base station X in the indoor map, and this average position is taken as the final positioning position of the base station X in the indoor map.
Another example is given below to illustrate the overall working principle and flow of the embodiments of the present application.
As shown in Fig. 12, the workflow of the embodiments of the present application is carried out according to the following steps C100 to C600:
Step C100: extracting indoor contour parameters from the indoor CAD drawing to draw the indoor image, so that the drawn indoor image can be used for subsequent comparison and calculation;
Step C200: selecting the target reference object and taking a photograph, ensuring that the photograph contains the complete target reference object, so that the photograph shows both the target to be located and the target reference object and the position information of the target to be located and of the target reference object can be extracted from the photograph accurately and reliably;
Step C300: processing the photograph taken, and matching the position information of the target reference object obtained from this processing against the indoor image to obtain the mapping relationship between the target reference object in the photograph and the target reference object in the indoor image, so that the parameters distinguishing the target reference object in the indoor image from the target reference object in the photograph can then be determined on the basis of the mapping relationship;
Step C400: determining the relative position relationship between the target to be located and the target reference object in the photograph, so that conversion calculation can then be performed on the basis of this relative position relationship;
Step C500: determining the actual position of the target to be located in the indoor image, thereby obtaining the position information of the target to be located in the indoor image for a single positioning pass;
Step C600: changing the photographing angle and returning to step C200, thereby obtaining statistical position information of the target to be located in the indoor image over multiple positioning passes. Compared with the position information calculated in a single positioning pass, multiple passes can effectively reduce the error that a single pass may contain and improve the positioning accuracy. A sketch of this overall loop is given after these steps.
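The loop of steps C100 to C600 can be summarised in a short Python orchestration sketch. The per-photograph estimation is stubbed out here, and all names and values are illustrative assumptions rather than a definitive implementation:

from statistics import mean


def estimate_from_photo(photo, indoor_image):
    """Steps C200-C500 for one photograph: match the target reference object
    against the indoor image and convert the relative position of the target
    to be located into indoor-image coordinates. Stubbed for illustration."""
    return photo["estimate"]  # placeholder single-shot (x, y) result


def locate_target(photos, indoor_image):
    """Step C600: one estimate per photographing angle, then the average."""
    estimates = [estimate_from_photo(p, indoor_image) for p in photos]
    return (mean(e[0] for e in estimates), mean(e[1] for e in estimates))


photos = [
    {"angle": 0, "estimate": (3.02, 0.98)},
    {"angle": 30, "estimate": (2.97, 1.01)},
    {"angle": -30, "estimate": (3.00, 1.03)},
]
print(locate_target(photos, indoor_image=None))  # approx. (3.00, 1.01)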
In addition, as shown in Fig. 13, an embodiment of the present application further discloses an electronic device 100, including: at least one processor 110; and at least one memory 120 for storing at least one program; when the at least one program is executed by the at least one processor 110, the positioning method of any of the foregoing embodiments is implemented.
In addition, an embodiment of the present application further discloses a computer-readable storage medium in which computer-executable instructions are stored, the computer-executable instructions being used to perform the positioning method of any of the foregoing embodiments.
Furthermore, an embodiment of the present application further discloses a computer program product, including a computer program or computer instructions, the computer program or the computer instructions being stored in a computer-readable storage medium; a processor of a computer device reads the computer program or the computer instructions from the computer-readable storage medium, and the processor executes the computer program or the computer instructions, so that the computer device performs the positioning method of any of the foregoing embodiments.
A person of ordinary skill in the art can understand that all or some of the steps and systems in the methods disclosed above can be implemented as software, firmware, hardware or an appropriate combination thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to a person of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. In addition, it is well known to a person of ordinary skill in the art that communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Claims (19)

  1. A positioning method, comprising:
    acquiring a reference image showing a target to be located and at least one target reference object, acquiring first position information of the target reference object in the reference image, and acquiring relative position information of the target to be located with respect to the target reference object in the reference image;
    acquiring second position information of the target reference object in a pre-generated indoor image; and
    determining, according to the first position information, the relative position information and the second position information, target position information of the target to be located in the indoor image.
  2. The positioning method according to claim 1, wherein the determining, according to the first position information, the relative position information and the second position information, target position information of the target to be located in the indoor image comprises:
    determining, by geometric calculation according to the first position information, the relative position information and the second position information, the target position information of the target to be located in the indoor image.
  3. The positioning method according to claim 2, wherein the determining, by geometric calculation according to the first position information, the relative position information and the second position information, the target position information of the target to be located in the indoor image comprises:
    determining, according to the first position information and the second position information, reference mapping information between the target reference object in the reference image and the target reference object in the indoor image; and
    determining, by geometric calculation according to the first position information, the second position information, the reference mapping information and the relative position information, the target position information of the target to be located in the indoor image.
  4. The positioning method according to claim 3, wherein the determining, by geometric calculation according to the first position information, the second position information, the reference mapping information and the relative position information, the target position information of the target to be located in the indoor image comprises:
    determining first position coordinates of the target reference object according to the first position information, determining a relative position parameter corresponding to the target reference object according to the relative position information, and determining second position coordinates of the target reference object according to the second position information; and
    performing geometric calculation on the first position coordinates, the second position coordinates, the reference mapping information and the relative position parameter to obtain the target position information of the target to be located in the indoor image.
  5. The positioning method according to claim 1, wherein the positioning method further comprises:
    re-acquiring, from a different angle, a reference image showing the target to be located and the target reference object, and re-acquiring from the reference image first position information of the target reference object and relative position information of the target reference object with respect to the target to be located;
    re-acquiring second position information of the target reference object from the pre-generated indoor image;
    re-determining, according to the first position information, the relative position information and the second position information, target position information of the target to be located in the indoor image; and
    obtaining, according to the plurality of pieces of target position information obtained, positioning position information of the target to be located in the indoor image.
  6. The positioning method according to claim 5, wherein the obtaining, according to the plurality of pieces of target position information obtained, positioning position information of the target to be located in the indoor image comprises:
    obtaining, according to the plurality of pieces of target position information obtained, an average value of the plurality of pieces of target position information, to obtain the positioning position information of the target to be located in the indoor image.
  7. The positioning method according to claim 1, wherein the acquiring a reference image showing a target to be located and at least one target reference object comprises:
    determining at least one target reference object located in the same plane as the target to be located; and
    photographing the target to be located and the at least one target reference object to obtain a reference image showing the target to be located and the target reference object.
  8. The positioning method according to claim 7, wherein the photographing the target to be located and the at least one target reference object to obtain a reference image showing the target to be located and the target reference object comprises:
    photographing the target to be located and the at least one target reference object from the front to obtain a reference image showing the target to be located and the target reference object.
  9. The positioning method according to claim 1, wherein the acquiring relative position information of the target to be located with respect to the target reference object in the reference image comprises:
    acquiring pixel coordinate information of the target reference object and pixel coordinate information of the target to be located from the reference image; and
    determining, according to the pixel coordinate information of the target reference object and the pixel coordinate information of the target to be located, the relative position information of the target reference object with respect to the target to be located.
  10. The positioning method according to claim 1 or 9, wherein the relative position information comprises at least one of the following:
    a distance between the target reference object and the target to be located;
    a relative position of the target reference object with respect to a target projection, wherein the target projection is a projection of the target to be located onto the target reference object.
  11. The positioning method according to claim 1, wherein the first position information comprises at least one of the following:
    physical coordinate information of the target reference object in the reference image;
    pixel coordinate information of the target reference object in the reference image.
  12. The positioning method according to claim 1, wherein the second position information comprises at least one of the following:
    physical coordinate information of the target reference object in the indoor image;
    pixel coordinate information of the target reference object in the indoor image.
  13. The positioning method according to claim 3 or 4, wherein the reference mapping information comprises at least one of the following:
    a scale relationship;
    a projection relationship.
  14. The positioning method according to claim 1, wherein before the acquiring a reference image showing a target to be located and at least one target reference object, the method further comprises:
    selecting at least one target reference object in the indoor image.
  15. The positioning method according to claim 1, wherein the indoor image is generated on the basis of the following steps:
    acquiring indoor position parameters of an indoor space; and
    generating an indoor image according to the indoor position parameters, wherein the indoor image corresponds to the indoor space.
  16. The positioning method according to claim 15, wherein the indoor position parameters comprise at least one of the following:
    a wall-line position parameter;
    a door position parameter;
    a column position parameter;
    a wall position parameter;
    a window position parameter.
  17. An electronic device, comprising:
    at least one processor; and
    at least one memory for storing at least one program;
    wherein, when the at least one program is executed by the at least one processor, the positioning method according to any one of claims 1 to 16 is implemented.
  18. A computer-readable storage medium in which a processor-executable program is stored, wherein the processor-executable program, when executed by a processor, is used to implement the positioning method according to any one of claims 1 to 16.
  19. A computer program product, comprising a computer program or computer instructions, wherein the computer program or the computer instructions are stored in a computer-readable storage medium, a processor of a computer device reads the computer program or the computer instructions from the computer-readable storage medium, and the processor executes the computer program or the computer instructions, so that the computer device performs the positioning method according to any one of claims 1 to 16.
PCT/CN2023/073002 2022-05-31 2023-01-18 Positioning method, electronic device, storage medium and program product WO2023231425A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210608266.9A CN115984366A (zh) 2022-05-31 2022-05-31 Positioning method, electronic device, storage medium and program product
CN202210608266.9 2022-05-31

Publications (1)

Publication Number Publication Date
WO2023231425A1 true WO2023231425A1 (zh) 2023-12-07

Family

ID=85958711

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/073002 WO2023231425A1 (zh) 2022-05-31 2023-01-18 Positioning method, electronic device, storage medium and program product

Country Status (2)

Country Link
CN (1) CN115984366A (zh)
WO (1) WO2023231425A1 (zh)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019125227A (ja) * 2018-01-18 2019-07-25 光禾感知科技股份有限公司 Indoor positioning method and system, and device for creating an indoor map therefor
CN110443850A (zh) * 2019-08-05 2019-11-12 珠海优特电力科技股份有限公司 Positioning method and apparatus for target object, storage medium, and electronic apparatus
CN113804100A (zh) * 2020-06-11 2021-12-17 华为技术有限公司 Method, apparatus, device and storage medium for determining spatial coordinates of a target object
CN112348909A (zh) * 2020-10-26 2021-02-09 北京市商汤科技开发有限公司 Target positioning method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN115984366A (zh) 2023-04-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23814624

Country of ref document: EP

Kind code of ref document: A1