WO2023231425A1 - Positioning method, electronic device, storage medium and program product - Google Patents
Positioning method, electronic device, storage medium and program product
- Publication number
- WO2023231425A1 (PCT/CN2023/073002)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- position information
- reference object
- indoor
- image
- Prior art date
- 2022-05-31
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Definitions
- The present application relates to the field of positioning technology, and in particular to a positioning method, an electronic device, a computer storage medium, and a computer program product.
- Indoor positioning based on 5G base stations has received increasing attention.
- Regardless of the specific positioning technology used, indoor positioning requires an accurate mapping of the position coordinates of the positioning base stations.
- Small errors in the base station location lead to large positioning errors.
- Indoor positioning base stations are often installed on ceilings, wall lines, columns, and other locations that are not easy to reach, which makes direct measurement difficult; a base station may also be blocked by other objects, which makes measurement for positioning even harder.
- The layout of items in an indoor scene may also be updated, which requires frequent correction of the base station positions.
- Embodiments of the present application provide a positioning method, an electronic device, a computer storage medium, and a computer program product, which can effectively reduce the workload of positioning a target and improve the accuracy of the positioning result.
- Embodiments of the present application provide a positioning method.
- The positioning method includes: acquiring a reference image showing a target to be located and at least one target reference object; acquiring first position information of the target reference object in the reference image, and acquiring relative position information of the target to be located relative to the target reference object in the reference image; acquiring second position information of the target reference object in a pre-generated indoor image; and determining target position information of the target to be located in the indoor image according to the first position information, the relative position information, and the second position information.
- Embodiments of the present application also provide an electronic device, including: at least one processor; and at least one memory for storing at least one program. When at least one of the programs is executed by at least one of the processors, the positioning method described above is implemented.
- Embodiments of the present application also provide a computer-readable storage medium in which a processor-executable program is stored; when the processor-executable program is executed by a processor, it implements the positioning method described above.
- Embodiments of the present application also provide a computer program product: a computer program or computer instructions are stored in a computer-readable storage medium, a processor of a computer device reads the computer program or the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the positioning method described above.
- Indoor positioning can be achieved simply by obtaining a reference image showing the target to be located and at least one target reference object, together with a pre-generated indoor image, which greatly reduces the workload of indoor positioning: the reference image is matched with the indoor image to obtain the first position information of the target reference object in the reference image and the second position information of the target reference object in the indoor image.
- In this way, the embodiments of the present application can effectively reduce the workload of positioning the target and improve the accuracy of the positioning result, filling the technical gap in related methods.
- Figure 1 is a flow chart of a positioning method provided by an embodiment of the present application.
- Figure 2 is a flow chart for obtaining a reference image showing a target to be located and at least one target reference object in a positioning method provided by an embodiment of the present application;
- Figure 3 is a flow chart of obtaining a reference image showing a target to be located and a target reference object in a positioning method provided by an embodiment of the present application;
- Figure 4 is a flow chart for obtaining the relative position information of the target to be located relative to the target reference object in the reference image in the positioning method provided by one embodiment of the present application;
- Figure 5 is a flow chart for generating indoor images in the positioning method provided by an embodiment of the present application.
- Figure 6 is a flow chart of using geometric calculation to determine the target position information of the target to be located in the indoor image in the positioning method provided by one embodiment of the present application;
- Figure 7 is a flow chart of using geometric calculation to determine the target position information of a target to be located in an indoor image in a positioning method provided by another embodiment of the present application;
- Figure 8 is a schematic diagram of the application scenario of the positioning method provided by an embodiment of the present application.
- Figure 9 is a schematic diagram of an application scenario of a positioning method provided by another embodiment of the present application.
- Figure 10 is a flow chart of a positioning method provided by another embodiment of the present application.
- Figure 11 is a flow chart for obtaining positioning position information of a target to be positioned in an indoor image in a positioning method provided by an embodiment of the present application;
- Figure 12 is a schematic flowchart of the execution of a positioning method provided by an embodiment of the present application.
- Figure 13 is a schematic diagram of an electronic device provided by an embodiment of the present application.
- The positioning method of one embodiment includes: acquiring a reference image showing a target to be located and at least one target reference object; acquiring first position information of the target reference object in the reference image and relative position information of the target to be located relative to the target reference object in the reference image; acquiring second position information of the target reference object in a pre-generated indoor image; and determining target position information of the target to be located in the indoor image based on the first position information, the relative position information, and the second position information.
- With this method, there is no need to use special positioning equipment, nor to consider the influence of the indoor environment layout.
- Indoor positioning can be achieved simply by obtaining a reference image showing the target to be located and at least one target reference object, together with a pre-generated indoor image, which greatly reduces the workload of indoor positioning.
- The reference image is matched with the indoor image to obtain the first position information of the target reference object in the reference image and the second position information of the target reference object in the indoor image, and thereby the reference mapping information between the target reference object in the reference image and the target reference object in the indoor image. Based on the reference mapping information and the determined relative position of the target to be located relative to the target reference object in the reference image, the target position information of the target to be located in the indoor image can be obtained accurately. Therefore, the embodiments of the present application can effectively reduce the workload of positioning the target and improve the accuracy of the positioning result.
- Figure 1 is a flow chart of a positioning method provided by an embodiment of the present application.
- The positioning method may include, but is not limited to, steps S110 to S130.
- Step S110 Obtain a reference image showing the target to be located and at least one target reference object, obtain the first position information of the target reference object in the reference image, and obtain the relative position information of the target to be located in the reference image relative to the target reference object.
- In this step, a reference image showing the target to be located and at least one target reference object is obtained, so that the first position information of the target reference object in the reference image and the relative position information of the target to be located relative to the target reference object can both be determined from the reference image. That is to say, different position information related to the target reference object is obtained, so that in subsequent steps the indoor location of the target to be located can be further determined based on this information.
- The target to be located may be, but is not limited to, a base station, transmitting terminal, access terminal, network controller, modulator, service unit, or the like that is to be calibrated.
- When the target to be located is a transmitting terminal or an access terminal, it may be, but is not limited to, user equipment (UE), a user unit, a user station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user apparatus.
- The type of the target reference object is not limited here.
- The target reference object may be, but is not limited to, a pillar, corner, wall line, door, or window in the indoor space where the target to be located is located.
- The number of target reference objects is also not limited here. For example, when one target reference object is determined, only that target reference object needs to be shown in the acquired reference image; similarly, when two target reference objects are determined, both need to be shown simultaneously in the acquired reference image. That is to say, all determined target reference objects need to be shown in the acquired reference image.
- The reference image may be acquired and presented in various ways, which are not limited here.
- For example, the reference image can be obtained by taking a photo; in this case the reference image is the photo taken.
- When the target to be located and the selected target reference object are both included in the shooting scene of a camera, mobile phone, or other shooting device, the photo taken will show the target to be located and the selected target reference object.
- The photo taken can then be processed, for example but not limited to by using image processing, pattern recognition, and similar technologies, to extract the first position information of the target reference object from the photo.
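- As one illustration of this kind of processing, the following sketch locates a reference object in a photo by normalized template matching with OpenCV; the file names, the template, and the choice of template matching itself are assumptions made for illustration and are not prescribed by the present application.

```python
import cv2

# Assumed inputs: a photo of the scene and a cropped template of the target
# reference object (e.g., a door-frame corner). Both file names are hypothetical.
photo = cv2.imread("reference_photo.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("reference_object_template.jpg", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation over the photo; the best match gives the
# top-left corner of the region most similar to the template.
scores = cv2.matchTemplate(photo, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_top_left = cv2.minMaxLoc(scores)

h, w = template.shape
center_u = best_top_left[0] + w // 2
center_v = best_top_left[1] + h // 2

# Pixel coordinates of the reference object in the photo: one possible form
# of the "first position information" described above.
print(f"reference object at pixel ({center_u}, {center_v}), score {best_score:.2f}")
```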
- The first position information may be presented in multiple forms, which are not limited here.
- For example, the first position information may be the physical coordinate information of the target reference object, that is, the coordinate information of the target reference object in the world coordinate system, in other words its absolute coordinates in the world coordinate system.
- As another example, the first position information may be the pixel coordinate information of the target reference object in the reference image, that is, the coordinate information of the target reference object in the reference image coordinate system, in other words its relative coordinates in that coordinate system.
- Where needed, the pixel coordinate information of the target reference object in the reference image can be converted into its physical coordinate information in the world coordinate system.
- That is to say, the first position information can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select and set the corresponding presentation mode of the first position information according to the specific application scenario.
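- As a minimal sketch of the pixel-to-physical conversion mentioned above, the example below assumes the relevant points lie in a single plane and that the mapping can be approximated by a simple similarity transform (a scale in metres per pixel plus a translation); the numeric values are illustrative only and are not taken from the present application.

```python
import numpy as np

def pixels_to_world(pixel_xy, scale_m_per_px, world_origin_xy):
    """Convert pixel coordinates to planar world coordinates under the
    assumption of a scale-plus-translation mapping (no rotation, no tilt)."""
    return np.asarray(world_origin_xy, dtype=float) + scale_m_per_px * np.asarray(pixel_xy, dtype=float)

# Hypothetical values: 0.004 m per pixel, with the image origin mapped to
# the world point (2.0 m, 5.0 m).
world_xy = pixels_to_world((640, 180), 0.004, (2.0, 5.0))
print(world_xy)  # -> [4.56 5.72]
```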
- The relative position information may likewise be presented in multiple forms, which are not limited here.
- For example, the relative position information can be the distance between the target reference object and the target to be located, that is, their relative distance in the reference image.
- As another example, the relative position information can be the relative position of the target reference object with respect to the target projection, where the target projection is the projection of the target to be located onto the target reference object.
- The relative position information can also include both the distance between the target reference object and the target to be located and the relative position of the target reference object with respect to the target projection; that is to say, the relative position information can be represented by a combination of these two relative position relationships.
- The relative position relationship may be determined using, but not limited to, image processing, pattern recognition, and similar technologies.
- Step S110 includes but is not limited to steps S111 and S112.
- Step S111 Determine at least one target reference object in the same plane as the target to be located;
- Step S112 Take photos of the target to be located and at least one target reference object to obtain a reference image showing the target to be located and the target reference object.
- In this step, determining at least one target reference object in the same plane as the target to be located makes it convenient to photograph the target to be located together with the at least one target reference object. Since the target to be located and the target reference object are in the same plane, there is no plane difference between them, so when they are photographed they can easily both be included in the shooting range, and a photo that meets the requirements can be taken in a single shot, that is, a reference image showing the target to be located and the target reference object is obtained.
- In addition, the physical coordinates of the target to be located and of the target reference object are then on the same level, meaning that the two positions do not differ in an extra spatial dimension. This makes it possible, in subsequent steps, to calculate the target position information of the target to be located in the indoor image accurately and reliably using geometric methods.
- Of course, the spatial relationship between the target to be located and the target reference object can be more varied, and those skilled in the art can select and set it according to the specific scenario; it is not limited here.
- Step S112 includes but is not limited to step S1121.
- Step S1121 Take a frontal photo of the target to be located and at least one target reference object to obtain a reference image showing the target to be located and the target reference object.
- In this way, a reference image that displays the target to be located and the target reference object relatively intuitively can be obtained. That is to say, a frontal photo can present the target to be located and the target reference object more completely (including details that might otherwise be easy to overlook), so that when the reference image is subsequently parsed to obtain the relevant position information parameters, such as the first position information and the relative position information, a high parsing accuracy is ensured, which is conducive to obtaining more accurate position information parameters.
- There are also other angles and ways of photographing the target to be located and the target reference object, such as photographing from all sides or from a set angle; those skilled in the art can select the photographing angle and method according to the specific scene, and this is not limited here.
- Step S110 includes but is not limited to steps S113 and S114.
- Step S113 Obtain the pixel coordinate information of the target reference object and the pixel coordinate information of the target to be located from the reference image;
- Step S114 Determine the relative position information of the target reference object to the target to be located based on the pixel coordinate information of the target reference object and the pixel coordinate information of the target to be located.
- In this step, by obtaining the pixel coordinate information of the target reference object and the pixel coordinate information of the target to be located from the reference image, the relative position information can be determined from the difference between the two sets of pixel coordinates. Since the pixel coordinate information of the target reference object and that of the target to be located can both be determined accurately, that is, the accuracy of the obtained pixel coordinate information is high, the relative position information finally determined from the pixel coordinate information also has a higher accuracy.
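- A minimal sketch of this step is given below; it assumes the pixel coordinates of the reference objects and of the target have already been extracted from the reference image, and the names and coordinate values are illustrative only.

```python
import math

def relative_position(ref_xy, target_xy):
    """Relative position of a reference object with respect to the target,
    expressed as a pixel offset and a pixel distance."""
    dx = target_xy[0] - ref_xy[0]
    dy = target_xy[1] - ref_xy[1]
    return (dx, dy), math.hypot(dx, dy)

# Hypothetical pixel coordinates read from the reference image.
reference_objects = {"A": (120, 340), "B": (760, 355), "C": (450, 90)}
target_px = (455, 300)

for name, ref_px in reference_objects.items():
    offset, dist_px = relative_position(ref_px, target_px)
    print(f"X{name}: offset {offset}, distance {dist_px:.1f} px")
```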
- Step S120 Obtain the second position information of the target reference object in the pre-generated indoor image.
- The indoor image is an image corresponding to the indoor space; it can be, for example but not limited to, a CAD drawing, a Pro/Engineer drawing, or another two-dimensional or three-dimensional drawing, that is, an image of how the indoor space is actually laid out.
- Because the indoor image is pre-generated, the second position information of the target reference object can be obtained from the indoor image, so that in subsequent steps the second position information can be further compared with the first position information.
- The indoor image can be generated or drawn in various ways, which are not limited here.
- For example, the indoor image may be, but is not limited to being, generated based on the following steps S200 and S300.
- Step S200 Obtain indoor position parameters of the indoor space.
- Step S300 Generate an indoor image corresponding to the indoor space according to the indoor location parameters.
- The indoor position parameters are position parameters associated with the indoor space; from them the overall layout of the indoor space can be determined, so that an indoor image corresponding to the indoor space can be generated accurately based on the indoor position parameters.
- The generated indoor image is then used to determine the second position information of the target reference object in the indoor image.
- Two-dimensional or three-dimensional CAD drawing software may be used, but is not limited to being used, to perform step S300 and generate an indoor image that meets the requirements; other drawing software with functions similar to CAD drawing software can also be used, which is not limited here.
- The indoor position parameters include at least one of the following: wall line position parameters, door position parameters, column position parameters, wall position parameters, and window position parameters.
- For example, the position parameters of indoor wall lines, corners, columns, doors, windows, and other large-scale objects can be selected to complete the drawing of the indoor image; this is not limited here.
- The second position information may likewise be presented in multiple forms, which are not limited here.
- For example, the second position information may be the physical coordinate information of the target reference object in the indoor image, that is, the coordinate information of the target reference object in the world coordinate system, in other words its absolute coordinates in the world coordinate system.
- As another example, the second position information may be the pixel coordinate information of the target reference object in the indoor image, that is, the coordinate information of the target reference object in the indoor image coordinate system, in other words its relative coordinates in that coordinate system.
- Where needed, the pixel coordinate information of the target reference object in the indoor image can be converted into its physical coordinate information.
- That is to say, the second position information can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select and set the corresponding presentation mode of the second position information according to the specific application scenario.
- Before step S110, the positioning method also includes, but is not limited to, step S140.
- Step S140 Select at least one target reference object in the indoor image.
- Because the indoor image is generated in advance, at least one required target reference object can be selected in the indoor image in advance, so that when the reference image is subsequently obtained, the target reference object does not need to be selected again or selected additionally. This reduces the workload of obtaining the reference image and is conducive to obtaining a reference image that meets the requirements more efficiently and reliably.
- Step S130 Determine the target position information of the target to be located in the indoor image based on the first position information, relative position information and second position information.
- In this step, indoor positioning can be achieved simply by obtaining a reference image showing the target to be located and at least one target reference object, together with the pre-generated indoor image, which greatly reduces the workload of indoor positioning. The reference image is matched with the indoor image to obtain the first position information of the target reference object in the reference image and the second position information of the target reference object in the indoor image, and thereby the reference mapping information between the target reference object in the reference image and the target reference object in the indoor image. Based on the reference mapping information and the determined relative position of the target to be located relative to the target reference object in the reference image, the target position information of the target to be located in the indoor image can be obtained accurately. Therefore, the embodiments of the present application can effectively reduce the workload of positioning the target and improve the accuracy of the positioning result, filling the technical gap in related methods.
- The target position information may be presented in a variety of forms, which are not limited here.
- For example, the target position information can be the physical coordinate information of the target to be located, that is, its coordinate information in the world coordinate system, in other words its absolute coordinates in the world coordinate system.
- As another example, the target position information can be the pixel coordinate information of the target to be located in the indoor image, which can, where needed, be converted into its physical coordinate information, so as to determine the actual location of the target to be located more precisely, which is conducive to any repair or replacement that may be necessary.
- That is to say, the target position information can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select and set the corresponding presentation mode of the target position information according to the specific application scenario.
- Step S130 includes, but is not limited to, step S131.
- Step S131 Based on the first position information, relative position information and second position information, use geometric calculation to determine the target position information of the target to be located in the indoor image.
- Since the first position information, the relative position information, and the second position information respectively indicate the position of the target reference object in the reference image, the geometric relationship between the target to be located and the target reference object, and the position of the target reference object in the indoor image, geometric calculation can be used to further determine the target position information of the target to be located in the indoor image from these geometric positions.
- The specific means of geometric calculation is not limited, and those skilled in the art can select and calculate according to the actual application scenario.
- For example, each piece of position information can be input into a preset geometric calculation program, and the target position information of the target to be located in the indoor image is output by the program; as another example, an external operator may perform the corresponding geometric calculation using the acquired position information, and so on.
- Step S131 includes, but is not limited to, steps S1311 and S1312.
- Step S1311 Determine reference mapping information between the target reference object in the reference image and the target reference object in the indoor image based on the first position information and the second position information;
- Step S1312 Based on the first position information, the second position information, the reference mapping information and the relative position information, use geometric calculation to determine the target position information of the target to be located in the indoor image.
- Because the first position information and the second position information reflect the different positions of the target reference object in the reference image and in the indoor image, the reference mapping information between the target reference object in the reference image and the target reference object in the indoor image can be determined from the first position information and the second position information; then, based on the first position information, the second position information, the reference mapping information, and the relative position information, the target position information of the target to be located in the indoor image can be accurately determined using geometric calculation.
- The reference mapping information includes at least one of the following:
- a scale relationship, which represents the scale used for conversion between the reference image and the indoor image;
- a projection relationship, which represents the projection ratio used for conversion between the reference image and the indoor image.
- The reference mapping information can also be of other types; that is, those skilled in the art can refer to the reference mapping information shown above to set other reference mapping information, which is not limited here.
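- As one minimal sketch of how a scale relationship could be estimated, the example below assumes two reference points are visible both in the reference image (as pixel coordinates) and in the indoor image (as metric coordinates); the coordinate values are illustrative and are not taken from the present application.

```python
import math

def scale_m_per_px(ref_px_a, ref_px_b, indoor_m_a, indoor_m_b):
    """Scale relationship f (metres per pixel) from one pair of reference
    objects seen both in the reference image and in the indoor image."""
    pixel_dist = math.dist(ref_px_a, ref_px_b)
    metric_dist = math.dist(indoor_m_a, indoor_m_b)
    return metric_dist / pixel_dist

# Hypothetical correspondences: door-frame vertices A and B.
f = scale_m_per_px((120, 340), (760, 355), (1.50, 4.00), (4.70, 4.05))
print(f"f = {f:.4f} m/px")  # -> roughly 0.0050 m/px
```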
- Step S1312 includes but is not limited to steps S13121 to S13122.
- Step S13121 Determine the first position coordinates of the target reference object according to the first position information, determine the relative position parameters corresponding to the target reference object according to the relative position information, and determine the second position coordinates of the target reference object according to the second position information;
- Step S13122 Calculate the first position coordinates, the second position coordinates, the reference mapping information and the relative position parameters using geometric calculation methods to obtain the target position information of the target to be located in the indoor image.
- In this way, the actual position of the target reference object is obtained, so that the target position information of the target to be located in the indoor image can be accurately calculated based on this actual position, the reference mapping information, and the relative position parameters.
- For example, the first position coordinates may be the physical coordinates of the target reference object, that is, its coordinates in the world coordinate system, in other words its absolute coordinates in the world coordinate system.
- As another example, the first position coordinates may be the pixel coordinates of the target reference object in the reference image, that is, its coordinates in the reference image coordinate system, in other words its relative coordinates in that coordinate system; where needed, the pixel coordinates of the target reference object in the reference image can be converted into its physical coordinates.
- That is to say, the first position coordinates can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select and set the corresponding presentation mode of the first position coordinates according to the specific application scenario.
- The relative position parameters may be presented in various forms, which are not limited here.
- For example, the relative position parameter may be the distance parameter between the target reference object and the target to be located, the relative position parameter of the target reference object with respect to the target projection, and so on.
- The second position coordinates may likewise be presented in a variety of forms, which are not limited here.
- For example, the second position coordinates may be the physical coordinates of the target reference object in the indoor image, that is, its coordinates in the world coordinate system, in other words its absolute coordinates in the world coordinate system.
- As another example, the second position coordinates may be the pixel coordinates of the target reference object in the indoor image, that is, its coordinates in the indoor image coordinate system, in other words its relative coordinates in that coordinate system.
- Where needed, the pixel coordinates of the target reference object in the indoor image can be converted into its physical coordinates. That is to say, the second position coordinates can be presented in a variety of ways in specific application scenarios, and those skilled in the art can select and set the corresponding presentation mode of the second position coordinates according to the specific application scenario.
- In one example, the target to be located is a base station to be positioned in a real indoor positioning environment; without loss of generality, the base station to be positioned is treated as a point.
- In this example, AB is a wall line, and the base station X to be positioned lies on the wall line AB.
- Indoor CAD drawings are used to selectively extract position parameters such as indoor wall lines, columns, and walls, so as to draw an indoor map.
- The matching includes matching the reference object information in the photo with the reference object information in the indoor map.
- In another example, the target to be located is again a base station to be positioned in a real indoor positioning environment; without loss of generality, the base station to be positioned is treated as a point.
- The base station X to be calibrated is located at a certain point on a wall.
- A and C are two vertices of the door frame of door 1, and B is a vertex of the door frame of door 2.
- Indoor CAD drawings are used to selectively extract position parameters such as indoor wall lines, doors, columns, and walls, so as to draw an indoor map.
- The matching includes matching the reference object information in the photo with the reference object information in the indoor map, from which the scale relationship f is determined. The calculation further includes:
- obtaining the relative position parameters between X and the reference objects in the photo, that is, obtaining the pixel lengths of XA, XB, and XC in the photo, so that the relative position parameters are g: the pixel lengths of XA, XB, and XC (in pixels);
- determining the actual position of X in the indoor map, that is, determining the lengths corresponding to XA, XB, and XC in the indoor map from the relative position relationship g and the scale relationship f, so that each actual length is expressed as f × g (in metres). That is to say, by obtaining the real position coordinates of A, B, and C from the indoor map and combining them with geometric knowledge, the actual position of the base station X to be calibrated in the indoor map can be determined.
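- The following sketch illustrates the final geometric step of this example under stated assumptions: the real coordinates of A, B, and C are read from the indoor map, the pixel lengths XA, XB, XC measured in the photo are scaled by f into metric distances, and the position of X is then recovered by a least-squares trilateration. All numeric values, and the choice of trilateration as the "geometric knowledge", are illustrative only.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position from anchor points and the distances to them
    (standard linearisation obtained by subtracting circle equations)."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

# Hypothetical data: real coordinates of A, B, C from the indoor map (m),
# pixel lengths g measured in the photo, and scale relationship f (m/px).
anchors = [(1.5, 4.0), (4.7, 4.0), (1.5, 1.0)]
g_px = np.array([424.0, 453.0, 424.0])
f = 0.005
distances_m = f * g_px          # actual lengths f x g, as described above
X = trilaterate(anchors, distances_m)
print(f"estimated position of base station X: {X}")  # about (3.0, 2.5)
```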
- The positioning method of one embodiment of the present application also includes, but is not limited to, steps S150 to S180.
- Step S150 Reacquire reference images showing the target to be located and the target reference object from different angles, and reacquire the first position information of the target reference object and the relative position information of the target reference object to the target to be located from the reference image;
- Step S160 Re-obtain the second position information of the target reference object from the pre-generated indoor image
- Step S170 Re-determine the target position information of the target to be located in the indoor image based on the first position information, relative position information and second position information;
- Step S180 Obtain the positioning position information of the target to be positioned in the indoor image based on the obtained plurality of target position information.
- In these steps, reference images showing the target to be located and the target reference object are reacquired from different angles, so that different reference images are obtained. Since the indoor image is pre-generated and fixed, the target position information of the target to be located in the indoor image can be recalculated based on the different reference images and the indoor image.
- The positioning position information in the indoor image obtained in this way is relatively more accurate and reduces the measurement error that may be caused by a small number of measurements.
- The ways of reacquiring reference images showing the target to be located and the target reference object from different angles include, but are not limited to: taking frontal photos of the target to be located and the target reference object as in step S1121, taking side photos of them, or photographing them from other preset angles; this is not limited here.
- In addition to obtaining different reference images from different angles, different reference images can also be obtained, but are not limited to being obtained, by changing the type, number, and so on of the target reference objects, or in other ways chosen by those skilled in the art according to the specific scenario; how the different reference images are obtained is not restricted here.
- The number of times the target position information of the target to be located in the indoor image is determined is not limited. Generally speaking, it can be determined as many times as the workload allows, so that the calculated result is as accurate as possible.
- The specific implementation of steps S150 to S170 can refer to that of steps S110 to S130. Since the specific implementation of steps S110 to S130 has been described in detail in the foregoing embodiments, it is not repeated here for steps S150 to S170.
- Step S180 also includes but is not limited to step S181.
- Step S181 Obtain the average of the multiple pieces of target position information obtained, to obtain the positioning position information of the target to be located in the indoor image.
- In this step a mean calculation is used: by averaging the multiple pieces of target position information, the positioning position information of the target to be located in the indoor image under multiple measurements is obtained.
- That is to say, using the average of the multiple pieces of target position information as the final positioning position information of the target to be located in the indoor image reduces the positioning error of the target to be located and improves its positioning accuracy.
- The positioning position information of the target to be located in the indoor image can also be obtained by, but not limited to, variance calculation, standard deviation calculation, probability distribution calculation, and so on, which is not limited here.
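- A minimal sketch of step S181 is given below; it assumes several independent position estimates have already been computed from photos taken at different angles, and the coordinate values are illustrative only.

```python
import numpy as np

# Hypothetical target position estimates (metres) obtained from photos taken
# at different angles, as in steps S150 to S170.
estimates = np.array([
    [3.02, 2.48],
    [2.97, 2.53],
    [3.05, 2.51],
    [2.99, 2.47],
])

# Step S181: the mean over all estimates is used as the final positioning
# position information; the standard deviation gives a rough error measure.
position = estimates.mean(axis=0)
spread = estimates.std(axis=0)
print(f"positioning position: {position}, spread: {spread}")
```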
- Step C100 Use indoor CAD drawings to extract indoor contour parameters and draw an indoor image, so that the drawn indoor image can be used in the subsequent comparison and calculation;
- Step C200 Select the target reference object and take a photo, ensuring that the photo contains the complete target reference object, so that the photo shows the target to be located and the target reference object at the same time and information about both can be extracted from the photo accurately and reliably;
- Step C300 Process the photo taken, match the position information of the target reference object obtained from the processing with the indoor image, and obtain the mapping relationship between the target reference object in the photo and the target reference object in the indoor image, so that the distinguishing parameters of the target reference object in the indoor image and in the photo can be further determined based on the mapping relationship;
- Step C400 Determine the relative position relationship between the target to be located and the target reference object in the photo, so that conversion calculations can be further performed based on this relative position relationship;
- Step C500 Determine the actual position of the target to be located in the indoor image, thereby obtaining the position information of the target to be located in the indoor image for a single positioning;
- Step C600 Change the camera angle and return to step C200, so as to obtain statistical position information of the target to be located in the indoor image over multiple positionings. Compared with the position information calculated from a single positioning, the position information calculated from multiple positionings can effectively reduce the errors that may occur in a single positioning and improve the positioning accuracy. A sketch that strings these steps together is given after this list.
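- The sketch below strings steps C100 to C600 together into one hypothetical driver under the same assumptions as the earlier examples: the reference objects have known coordinates in the indoor map, each photo yields the pixel lengths from the target to those reference objects, the scale relationship f converts them to metres, and the per-photo estimates are averaged. The function and variable names are invented for illustration, and the trilaterate helper is the same as in the earlier sketch.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares intersection of circles (same helper as in the earlier sketch)."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

def locate_target(anchor_coords_m, per_photo_pixel_lengths, scale_m_per_px):
    """Steps C300 to C600 in miniature: each photo gives pixel lengths from the
    target to the reference objects; each set is converted by the scale
    relationship and trilaterated, and the estimates are averaged (step S181)."""
    estimates = [
        trilaterate(anchor_coords_m, scale_m_per_px * np.asarray(g_px, dtype=float))
        for g_px in per_photo_pixel_lengths
    ]
    return np.mean(estimates, axis=0)

# Hypothetical data: reference objects A, B, C in the indoor map (m), and the
# pixel lengths XA, XB, XC measured in photos taken from two camera angles.
anchors = [(1.5, 4.0), (4.7, 4.0), (1.5, 1.0)]
measurements_px = [[424.0, 453.0, 424.0], [426.0, 451.0, 421.0]]
print(locate_target(anchors, measurements_px, scale_m_per_px=0.005))
```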
- One embodiment of the present application also discloses an electronic device 100, including: at least one processor 110; and at least one memory 120 for storing at least one program. When at least one of the programs is executed by at least one of the processors 110, the positioning method of any of the previous embodiments is implemented.
- An embodiment of the present application also discloses a computer-readable storage medium in which computer-executable instructions are stored; the computer-executable instructions are used to execute the positioning method of any of the previous embodiments.
- An embodiment of the present application also discloses a computer program product, which includes a computer program or computer instructions. The computer program or computer instructions are stored in a computer-readable storage medium. A processor of a computer device reads the computer program or the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the positioning method of any of the previous embodiments.
- Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
- Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Claims (19)
- 1. A positioning method, comprising: acquiring a reference image showing a target to be located and at least one target reference object; acquiring first position information of the target reference object in the reference image, and acquiring relative position information of the target to be located relative to the target reference object in the reference image; acquiring second position information of the target reference object in a pre-generated indoor image; and determining target position information of the target to be located in the indoor image according to the first position information, the relative position information, and the second position information.
- 2. The positioning method according to claim 1, wherein determining the target position information of the target to be located in the indoor image according to the first position information, the relative position information, and the second position information comprises: determining the target position information of the target to be located in the indoor image by geometric calculation according to the first position information, the relative position information, and the second position information.
- 3. The positioning method according to claim 2, wherein determining the target position information of the target to be located in the indoor image by geometric calculation according to the first position information, the relative position information, and the second position information comprises: determining reference mapping information between the target reference object in the reference image and the target reference object in the indoor image according to the first position information and the second position information; and determining the target position information of the target to be located in the indoor image by geometric calculation according to the first position information, the second position information, the reference mapping information, and the relative position information.
- 4. The positioning method according to claim 3, wherein determining the target position information of the target to be located in the indoor image by geometric calculation according to the first position information, the second position information, the reference mapping information, and the relative position information comprises: determining first position coordinates of the target reference object according to the first position information, determining a relative position parameter corresponding to the target reference object according to the relative position information, and determining second position coordinates of the target reference object according to the second position information; and calculating the first position coordinates, the second position coordinates, the reference mapping information, and the relative position parameter by geometric calculation to obtain the target position information of the target to be located in the indoor image.
- 5. The positioning method according to claim 1, further comprising: reacquiring, from different angles, a reference image showing the target to be located and the target reference object, and reacquiring, from the reference image, the first position information of the target reference object and the relative position information of the target reference object with respect to the target to be located; reacquiring the second position information of the target reference object from the pre-generated indoor image; re-determining the target position information of the target to be located in the indoor image according to the first position information, the relative position information, and the second position information; and acquiring positioning position information of the target to be located in the indoor image according to the plurality of pieces of obtained target position information.
- 6. The positioning method according to claim 5, wherein acquiring the positioning position information of the target to be located in the indoor image according to the plurality of pieces of obtained target position information comprises: obtaining an average value of the plurality of pieces of target position information according to the plurality of pieces of obtained target position information, to obtain the positioning position information of the target to be located in the indoor image.
- 7. The positioning method according to claim 1, wherein acquiring the reference image showing the target to be located and the at least one target reference object comprises: determining at least one target reference object located in the same plane as the target to be located; and photographing the target to be located and the at least one target reference object to obtain a reference image showing the target to be located and the target reference object.
- 8. The positioning method according to claim 7, wherein photographing the target to be located and the at least one target reference object to obtain the reference image showing the target to be located and the target reference object comprises: taking a frontal photograph of the target to be located and the at least one target reference object to obtain a reference image showing the target to be located and the target reference object.
- 9. The positioning method according to claim 1, wherein acquiring the relative position information of the target to be located relative to the target reference object in the reference image comprises: acquiring pixel coordinate information of the target reference object and pixel coordinate information of the target to be located from the reference image; and determining the relative position information of the target reference object with respect to the target to be located according to the pixel coordinate information of the target reference object and the pixel coordinate information of the target to be located.
- 10. The positioning method according to claim 1 or 9, wherein the relative position information comprises at least one of the following: a distance between the target reference object and the target to be located; and a relative position of the target reference object with respect to a target projection, wherein the target projection is a projection of the target to be located onto the target reference object.
- 11. The positioning method according to claim 1, wherein the first position information comprises at least one of the following: physical coordinate information of the target reference object in the reference image; and pixel coordinate information of the target reference object in the reference image.
- 12. The positioning method according to claim 1, wherein the second position information comprises at least one of the following: physical coordinate information of the target reference object in the indoor image; and pixel coordinate information of the target reference object in the indoor image.
- 13. The positioning method according to claim 3 or 4, wherein the reference mapping information comprises at least one of the following: a scale relationship; and a projection relationship.
- 14. The positioning method according to claim 1, wherein before acquiring the reference image showing the target to be located and the at least one target reference object, the method further comprises: selecting at least one target reference object in the indoor image.
- 15. The positioning method according to claim 1, wherein the indoor image is generated based on the following steps: acquiring indoor position parameters of an indoor space; and generating an indoor image according to the indoor position parameters, wherein the indoor image corresponds to the indoor space.
- 16. The positioning method according to claim 15, wherein the indoor position parameters comprise at least one of the following: wall line position parameters; door position parameters; column position parameters; wall position parameters; and window position parameters.
- 17. An electronic device, comprising: at least one processor; and at least one memory for storing at least one program; wherein, when at least one of the programs is executed by at least one of the processors, the positioning method according to any one of claims 1 to 16 is implemented.
- 18. A computer-readable storage medium in which a processor-executable program is stored, wherein when the processor-executable program is executed by a processor, it is used to implement the positioning method according to any one of claims 1 to 16.
- 19. A computer program product, comprising a computer program or computer instructions, wherein the computer program or the computer instructions are stored in a computer-readable storage medium, a processor of a computer device reads the computer program or the computer instructions from the computer-readable storage medium, and the processor executes the computer program or the computer instructions, so that the computer device performs the positioning method according to any one of claims 1 to 16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP23814624.5A EP4502947A4 (en) | 2022-05-31 | 2023-01-18 | POSITIONING METHOD, ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210608266.9A CN115984366A (zh) | 2022-05-31 | 2022-05-31 | Positioning method, electronic device, storage medium and program product
CN202210608266.9 | 2022-05-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023231425A1 true WO2023231425A1 (zh) | 2023-12-07 |
Family
ID=85958711
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/073002 WO2023231425A1 (zh) | 2022-05-31 | 2023-01-18 | 定位方法、电子设备、存储介质及程序产品 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4502947A4 (zh) |
CN (1) | CN115984366A (zh) |
WO (1) | WO2023231425A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119094998A (zh) * | 2024-09-21 | 2024-12-06 | 南京思德展示科技股份有限公司 | Indoor positioning method and apparatus for a smart exhibition hall, and electronic device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11765323B2 (en) * | 2017-05-26 | 2023-09-19 | Calumino Pty Ltd. | Apparatus and method of location determination in a thermal imaging system |
FI129042B (en) * | 2017-12-15 | 2021-05-31 | Oy Mapvision Ltd | Computer vision system with a computer-generated virtual reference object |
- 2022-05-31: CN application CN202210608266.9A (publication CN115984366A, active, pending)
- 2023-01-18: EP application EP23814624.5A (publication EP4502947A4, active, pending)
- 2023-01-18: PCT application PCT/CN2023/073002 (publication WO2023231425A1, active, application filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019125227A (ja) * | 2018-01-18 | 2019-07-25 | 光禾感知科技股▲ふん▼有限公司 | Indoor positioning method and system, and device for creating an indoor map therefor |
CN110443850A (zh) * | 2019-08-05 | 2019-11-12 | 珠海优特电力科技股份有限公司 | Positioning method and apparatus for a target object, storage medium, and electronic apparatus |
CN113804100A (zh) * | 2020-06-11 | 2021-12-17 | 华为技术有限公司 | Method, apparatus, device and storage medium for determining spatial coordinates of a target object |
CN112348909A (zh) * | 2020-10-26 | 2021-02-09 | 北京市商汤科技开发有限公司 | Target positioning method, apparatus, device and storage medium |
Non-Patent Citations (1)
Title |
---|
See also references of EP4502947A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP4502947A1 (en) | 2025-02-05 |
CN115984366A (zh) | 2023-04-18 |
EP4502947A4 (en) | 2025-07-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23814624; Country of ref document: EP; Kind code of ref document: A1 |
 | WWE | Wipo information: entry into national phase | Ref document number: 2023814624; Country of ref document: EP |
 | ENP | Entry into the national phase | Ref document number: 2023814624; Country of ref document: EP; Effective date: 20241030 |
 | WWE | Wipo information: entry into national phase | Ref document number: 18867408; Country of ref document: US |
 | NENP | Non-entry into the national phase | Ref country code: DE |