CN117934598A - Desktop-level rigid body positioning equipment and method based on optical positioning technology - Google Patents


Info

Publication number
CN117934598A
CN117934598A
Authority
CN
China
Prior art keywords
positioning
rigid body
optical
fisheye camera
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410324107.5A
Other languages
Chinese (zh)
Other versions
CN117934598B (en)
Inventor
高飞
王英建
温向勇
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority claimed from CN202410324107.5A
Publication of CN117934598A
Application granted
Publication of CN117934598B
Legal status: Active

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a desktop-level rigid body positioning device and method based on optical positioning technology. The method addresses the high cost, heavy weight and limited precision of existing optical positioning, makes spatial positioning of desktop-level equipment feasible, and provides a positioning foundation for virtual reality and augmented reality, robotics, and the game and entertainment industries. Aimed at the requirements of desktop applications, the device adopts a compact design, has a lower production cost, and is better suited to miniaturized products and the consumer market.

Description

Desktop-level rigid body positioning equipment and method based on optical positioning technology
Technical Field
The invention belongs to the technical field of optical positioning and computer vision, and particularly relates to a desktop-level rigid body positioning device and method based on optical positioning technology.
Background
In the field of rigid-body spatial positioning, there currently exist large optical motion-capture systems using infrared light and positioning schemes using ultra-wideband (UWB) ranging, but neither can satisfy the dual requirements of desktop-level rigid body positioning: miniaturization and high precision. Consequently, no typical product yet exists in the field of desktop-level rigid body positioning.
Three-dimensional spatial positioning of rigid bodies is a fundamental technology in many fields. Current spatial positioning techniques fall roughly into two categories: optical positioning and distance-based positioning. Optical positioning, as the name implies, captures and positions objects by optical principles: the position of a marker fixed on a human body or object is captured through an optical lens to recover the motion pose. Optical positioning relies on a set of precise, complex optical cameras; multiple high-speed cameras track target feature points from different angles and complete the spatial positioning through computer-vision principles. Optical positioning techniques can be categorized as passive or active, the distinction lying in the marker. In the active type, the marker emits light itself and may even be self-coded, so the lens observes the marker by its own emitted light within the field of view and records and captures its motion trajectory. In passive optical motion capture, a lamp panel around the lens emits light of a specific wavelength that illuminates markers with a special reflective treatment, so the lens can capture and record the markers' motion trajectories in its field of view.
The current mainstream optical positioning technology mainly uses infrared light, because infrared occupies a sparsely populated part of the spectrum and is not easily affected by other light sources. Moreover, optical positioning is mainly applied to motion-capture systems and the spatial positioning of large mobile robots, where raising the infrared power effectively increases the reflected intensity and forms distinct light spots in the lens's field of view. Mainstream motion-capture systems, such as the infrared optical systems of Vicon [1] in the United Kingdom and Nokov [2] in China, use passive positioning, because this approach requires no active power supply and any number of marker points can be added to improve accuracy and robustness.
However, this mainstream "infrared light + passive positioning" mode is unsuitable for desktop-level rigid body positioning, because desktop-level devices are typically used in spaces of less than 5 square meters, where the heavy, bulky equipment of an infrared motion-capture system cannot be installed. Moreover, mainstream motion-capture systems are very expensive: a complete positioning setup from China's Nokov costs tens or even hundreds of thousands of RMB, and a system from the UK's Vicon can reach hundreds of thousands or even millions; applied to ordinary desktop-level equipment, the cost-performance ratio would be extremely poor.
In addition to optical positioning techniques, there are spatial-positioning schemes based on distance measurement. Such schemes typically use electromagnetic signals in a specific frequency band to measure the distance between the object to be positioned and a base station, then compute the object's spatial position from the multiple distances obtained by multiple base stations. Positioning schemes based on this principle are known as ranging-based positioning technologies, for example positioning systems based on Ultra-Wideband (UWB). Although the equipment used is light, easy to deploy and inexpensive, UWB precision is limited to roughly ten to several tens of centimeters, which cannot meet the high-precision positioning requirements of desktop-level equipment, so it too is unsuitable for desktop-level rigid body positioning.
Therefore, existing positioning schemes still suffer from high cost, heavy weight and limited precision, and a positioning platform that provides stability, accuracy and robustness for desktop-level equipment is needed.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention aims to provide a desktop-level rigid body positioning device and method based on optical positioning technology, comprising both a hardware device and a software algorithm design. The invention makes spatial positioning of desktop-level equipment feasible and provides a positioning basis for virtual reality and augmented reality, robotics, and the game and entertainment industries, thereby solving the technical problems of high cost, heavy weight and limited precision in optical positioning in the related art.
The technical scheme of the invention is as follows:
In a first aspect, the invention provides a desktop-level rigid body positioning device based on optical positioning technology, comprising a positioning box, a frosted black glass-fiber platform, a transparent acrylic platform for placing an unmanned aerial vehicle, and frosted black polyvinyl chloride struts connecting the frosted black glass-fiber platform and the transparent acrylic platform;
The positioning box comprises three identical fisheye camera modules, a computing-unit module, a power-supply module, and a data-and-cable hiding box; the computing-unit module is electrically connected to the three fisheye camera modules through USB cables, which serve the dual functions of power supply and data transmission; the power-supply module supplies power to the computing-unit module; the data-and-cable hiding box is arranged on one side of the positioning box and accommodates the data and connecting cables; the three fisheye camera modules of the positioning box are fixedly mounted below the frosted black glass-fiber platform, the four corners of which are threaded onto four frosted black polyvinyl chloride struts, and the four struts are vertically and fixedly connected to the four corners of the transparent acrylic platform.
Specifically, the fisheye camera module consists of a camera lens, an optical filter and an imaging module; for identifying the light-emitting element of the object equipment to be positioned.
Further, the camera lens is a 185-degree fisheye lens, which can observe the required positioning range.
Further, the optical filter is an ultraviolet filter used to filter out everyday visible-light bands, leaving only light in the 390-400 nm band after filtering.
Further, the imaging module is a CMOS imaging chip covering a wide spectral range, with a required resolution greater than 1080P.
Further, as to the position arrangement of the fisheye camera modules: the three-dimensional spatial position of the rigid body is determined by triangulation, and the modules are arranged in a scalene triangle, with no two sides equal, to prevent the occurrence of singular points that would affect the positioning accuracy of the system.
In a second aspect, the invention also provides a desktop-level rigid body positioning method based on optical positioning technology, comprising the following steps:
(1) Multi-object positioning initialization:
Because the ultraviolet detection points are anonymous, when multiple objects to be positioned are put in, there is no way to determine which object corresponds to which ultraviolet detection point in the camera images; it is therefore necessary to input the relative positions of the objects to be positioned. Specifically, if N objects are put in, the relative position of each of them must be input:
{ p_i | i = 1, …, N }
where p_i represents the relative position of the i-th object among all objects; this requirement is met by arranging marks on the transparent acrylic platform of the positioning box and requiring the user to place each object to be positioned on a mark. Correspondingly, the three fisheye camera modules installed in the positioning box detect the ultraviolet light emitted by the objects and obtain 3 groups of azimuth measurements:
{ b_i^m | i = 1, …, N }, m ∈ {1, 2, 3}
where b_i^m denotes the i-th azimuth measurement obtained by the m-th fisheye camera module; the absolute positions c_m (m ∈ {1, 2, 3}) of the fisheye camera modules have already been obtained through the prior design of the positioning box and extrinsic calibration.
Multi-object positioning initialization is then carried out: taking the relative positions and azimuth measurements as input, the data association and the positions are solved, iterating continuously until convergence;
(2) Multi-target pixel tracking:
Because each fisheye camera module detects multiple light spots, the spots must be correctly associated across fisheye camera modules before a correct position calculation can be performed; therefore, after multi-object positioning initialization succeeds, the multiple light spots detected by each fisheye camera module are tracked. Specifically, based on the spot-detection result of the previous frame, a K × K data matrix D is computed, where K is the number of light spots, the number of spots being the same in the two frames; the element D_{ij} of the data matrix represents the Euclidean distance between the i-th spot of the current frame and the j-th spot of the previous frame. Based on this data matrix, the data association between the spots of the two consecutive frames is performed using a classical bipartite-graph matching algorithm;
(3) Position calculation:
When the detection results of the three fisheye camera modules for one rigid body are correctly associated, the spatial three-dimensional position calculation is performed to obtain the spatial three-dimensional position of the object by the following expression:
p_i* = argmin_p Σ_{m=1}^{3} || (I − b_i^m (b_i^m)^T)(p − c_m) ||²
where p_i* is the absolute position of the object to be positioned, c_m is the absolute position of the m-th fisheye camera module, and b_i^m is the azimuth observation of the i-th object to be calibrated; the globally optimal solution is finally obtained through a closed-form solution.
Specifically, the multi-object positioning initialization in the step (1) specifically includes the following substeps:
(1.1) initializing the absolute position of the object to be positioned as the relative position of the object to be positioned;
(1.2) back calculating expected azimuth measurement of each object to be positioned by using the absolute position of the fisheye camera module and the absolute position of the object to be positioned;
(1.3) calculating the difference between the expected azimuth measurements and the actual azimuth measurements, and establishing a matching with the Hungarian algorithm;
(1.4) calculating the position of the object by using the actual azimuth measurement on the matching, and updating the absolute position of the object to be positioned according to the position;
(1.5) iterating steps (1.2)-(1.4) until the difference between the expected and actual azimuth measurements falls below a set threshold, then ending the iteration and returning the absolute positions of the objects to be positioned.
In a third aspect of the present invention, there is provided an electronic apparatus comprising:
One or more processors;
A memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the desktop-level rigid body positioning method based on optical positioning technology.
In a fourth aspect of the present invention, a computer-readable storage medium has stored thereon computer instructions which, when executed by a processor, implement the steps of the desktop-level rigid body positioning method based on optical positioning technology.
The beneficial effects of the invention are as follows:
The optical positioning system adopts advanced optical sensing technology and precise algorithms, and offers higher positioning precision and stronger environmental adaptability than traditional infrared motion-capture systems and UWB positioning systems. By using specially designed optical elements and advanced image-processing algorithms, the system captures and analyzes the optical characteristics of an object's surface more accurately, achieves high-precision three-dimensional spatial positioning, and remains stable even in environments with complex lighting or occlusion. Aimed at the requirements of desktop applications, the device provided by the invention is more compact in design, lower in production cost, and better suited to miniaturized products and the consumer market.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a top view of the positioning box of the present invention;
FIG. 2 is a front view of the positioning box of the present invention;
FIG. 3 is a side view of the positioning box of the present invention;
FIG. 4 is an exploded view of a module of the positioning box of the present invention;
FIG. 5 is a fisheye camera module diagram of the invention;
FIG. 6 is a graph of the "wavelength-pass ratio" of the filter of the present invention;
Fig. 7 is a flow chart of a positioning algorithm of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
1. Hardware device pertaining to the present invention
The invention provides a desktop-level rigid body positioning device based on optical positioning technology, comprising a positioning box, a frosted black glass-fiber platform, a transparent acrylic platform for placing an unmanned aerial vehicle, and frosted black polyvinyl chloride struts connecting the two platforms. The positioning box comprises three identical fisheye camera modules, a computing-unit module, a power-supply module, and a data-and-cable hiding box. The three fisheye camera modules of the positioning box are fixedly mounted below the frosted black glass-fiber platform; the four corners of the platform are threaded onto four frosted black polyvinyl chloride struts, which are vertically and fixedly connected to the four corners of the transparent acrylic platform. As shown in FIGS. 1, 2, 3 and 4, the top view, front view, side view and exploded module view of the invention are shown in sequence.
Each fisheye camera module must be specially selected: the invention uses ultraviolet light as the detection light, so fisheye camera modules capable of detecting the ultraviolet band are chosen. Specifically, as shown in FIG. 5, selecting a fisheye camera module involves choosing the lens, the optical filter, and the imaging module. The lens is a 185-degree fisheye lens; using a fisheye lens effectively enlarges the positioning range of the invention and reduces the positioning blind zone. The filter is an ultraviolet filter that effectively filters out everyday visible-light bands, leaving only light in the 390-400 nm band, which reduces interference and thereby improves the robustness and accuracy of the system's positioning. The filter's wavelength-transmittance curve is shown in FIG. 6. The imaging module is a CMOS imaging chip covering a wide spectral range; it has no special requirements other than high resolution, for example above 1080P.
Position arrangement of the fisheye camera modules: the invention determines the three-dimensional spatial position of the rigid body by triangulation. When the triangle formed by the three fisheye camera modules has two sides of equal length, i.e. forms an isosceles triangle, the special case of singular points can appear during triangulation and affect the positioning accuracy of the system. The invention therefore deliberately arranges the fisheye camera modules in a scalene triangle, with no two sides equal.
The power-supply module supplies power to the computing-unit module, and the computing-unit module is connected to the three fisheye camera modules through USB cables, which serve the dual role of power supply and data transmission.
The data-and-cable hiding box is arranged on one side of the positioning box and is used to house the data and connecting cables.
In addition, the invention also designs the transparent acrylic platform so that the user only needs to place the object to be positioned on the platform; the ultraviolet light emitted by the object's light-emitting element passes through the transparent acrylic plate and is recognized by the fisheye camera modules below.
2. Software design method
The invention comprises a desktop-level rigid body positioning method based on an optical positioning technology. The flow chart of the method is shown in fig. 7:
the positioning software design method comprises the following steps:
(1) Multi-object positioning initialization:
Since the detection points of the ultraviolet light are anonymous, when multiple objects to be positioned are put in, there is no way to determine which object an ultraviolet detection point in the camera image corresponds to. The algorithm of the invention therefore requires the relative positions of the objects to be positioned as input. Specifically, if N objects are put in, the relative position of each of them must be input:
{ p_i | i = 1, …, N }
where p_i represents the relative position of the i-th object among all objects; this requirement is met by placing marks on the transparent acrylic platform of the positioning box and requiring the user to place each object to be positioned on one of these marks. Correspondingly, the three fisheye camera modules installed in the positioning box detect the ultraviolet light emitted by the objects and obtain 3 groups of azimuth measurements:
{ b_i^m | i = 1, …, N }, m ∈ {1, 2, 3}
where b_i^m represents the i-th azimuth measurement obtained by the m-th fisheye camera module. In addition, the invention obtains the absolute positions c_m (m ∈ {1, 2, 3}) of the fisheye camera modules in advance through the design of the positioning box and extrinsic calibration.
In the multi-object positioning initialization, the invention takes the relative positions and azimuth measurements as input, solves the data association and the positions, and iterates until convergence. The input is the absolute positions { c_m } of the fisheye camera modules, the relative positions { p_i } of the objects to be positioned, and the actual bearing measurements { b_i^m }; the output is the absolute positions { p_i* } of the objects to be positioned. The specific flow is as follows:
1. Initializing the absolute position of an object to be positioned as the relative position of the object to be positioned;
2. Back calculating expected azimuth measurement of each object to be positioned by utilizing the absolute position of the fisheye camera module and the absolute position of the object to be positioned;
3. calculating the difference between the expected azimuth measurements and the actual azimuth measurements, and establishing a matching with the Hungarian algorithm;
4. calculating the position of the object by using the matched actual azimuth measurement, and updating the absolute position of the object to be positioned according to the position;
5. repeating steps 2-4 until the difference between the expected and actual azimuth measurements falls below a set threshold, then ending the iteration and returning the absolute positions of the objects to be positioned.
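The five-step initialization flow above can be sketched in code. The sketch below is illustrative only: function and variable names such as `triangulate` and `initialize` are ours, not from the patent; it assumes unit-vector bearing measurements, three cameras with calibrated absolute positions, and uses SciPy's `linear_sum_assignment` as the Hungarian solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def triangulate(cams, bearings):
    """Closed-form least-squares intersection of one bearing ray per camera."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(cams, bearings):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray direction
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

def expected_bearing(cam, obj):
    v = obj - cam
    return v / np.linalg.norm(v)

def initialize(cams, rel_pos, meas, iters=50, tol=1e-6):
    """cams: (3, 3) camera absolute positions; rel_pos: (N, 3) marked relative
    positions; meas[m]: (N, 3) unit bearings seen by camera m (unknown order)."""
    est = rel_pos.copy()                              # step 1: seed with relative positions
    for _ in range(iters):
        assoc = []
        for m, cam in enumerate(cams):
            exp = np.array([expected_bearing(cam, p) for p in est])   # step 2
            cost = np.linalg.norm(exp[:, None, :] - meas[m][None, :, :], axis=2)
            _, col = linear_sum_assignment(cost)      # step 3: Hungarian matching
            assoc.append(meas[m][col])                # measurement assigned to object i
        new = np.array([triangulate(cams, [assoc[m][i] for m in range(3)])
                        for i in range(len(est))])    # step 4: update positions
        done = np.max(np.linalg.norm(new - est, axis=1)) < tol        # step 5
        est = new
        if done:
            break
    return est
```

With noise-free measurements and a reasonable seed, the Hungarian step recovers the correct association on the first pass and the triangulation then converges in one or two iterations.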
(2) Multi-target pixel tracking:
Since each fisheye camera module detects multiple light spots, the spots must be correctly associated across fisheye camera modules before a correct position calculation can be performed. Therefore, after multi-object positioning initialization succeeds, the invention also tracks the multiple light spots detected by each fisheye camera module. Specifically, since the frequency of the fisheye camera modules is 50 Hz and the objects move slowly, the detected spot positions can be assumed not to change greatly between two consecutive frames, so pixel tracking can be performed with a multi-object tracking method. On the basis of the spot-detection result of the previous frame, a K × K data matrix D is first computed, where K is the number of light spots; in theory, the number of spots in the two frames should be the same. The element D_{ij} of the data matrix represents the Euclidean distance between the i-th spot of the current frame and the j-th spot of the previous frame. Based on this data matrix, the data association between the spots of the two consecutive frames is performed with the Hungarian algorithm for bipartite-graph matching.
(3) Position calculation:
When the detection results of the three fisheye camera modules for one rigid body are correctly associated, a correct, high-precision spatial three-dimensional position calculation can be performed. Specifically, with c_m the absolute position of the m-th fisheye camera module and b_i^m the azimuth observation of the i-th object to be calibrated, the spatial three-dimensional position of the object is obtained by the following expression:
p_i* = argmin_p Σ_{m=1}^{3} || (I − b_i^m (b_i^m)^T)(p − c_m) ||²
where p_i* is the absolute position of the object to be positioned; this problem can be solved in closed form to obtain the globally optimal solution.
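As a reading aid, the closed-form solution mentioned above can be derived as follows. This derivation is our reconstruction of a standard bearing-only least-squares argument (the original formula images are lost from this text), with b_i^m assumed to be unit vectors:

```latex
% Camera m sees the object along the unit bearing b_i^m from c_m, so the
% residual is the component of (p - c_m) orthogonal to that ray:
p_i^{*} \;=\; \arg\min_{p} \sum_{m=1}^{3}
  \bigl\| \bigl( I - b_i^m (b_i^m)^{\top} \bigr) (p - c_m) \bigr\|^2 .
% Setting the gradient to zero (projection matrices are symmetric and
% idempotent) yields the linear normal equations
\Bigl[ \sum_{m=1}^{3} \bigl( I - b_i^m (b_i^m)^{\top} \bigr) \Bigr] p_i^{*}
  \;=\; \sum_{m=1}^{3} \bigl( I - b_i^m (b_i^m)^{\top} \bigr) c_m ,
% a 3x3 linear system; since the objective is a convex quadratic in p,
% its solution is the global optimum.
```

The 3×3 system is solvable whenever the three bearing rays are not all parallel, which the scalene camera arrangement is designed to guarantee.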
The invention also provides an electronic device comprising:
One or more processors;
A memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the desktop-level rigid body positioning method based on optical positioning technology.
The invention also provides a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the desktop-level rigid body positioning method based on optical positioning technology.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the application that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the application pertains.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof.

Claims (10)

1. A desktop-level rigid body positioning device based on optical positioning technology, characterized by comprising a positioning box, a frosted black glass-fiber platform, a transparent acrylic platform for placing an unmanned aerial vehicle, and frosted black polyvinyl chloride struts connecting the frosted black glass-fiber platform and the transparent acrylic platform;
The positioning box comprises three identical fisheye camera modules, a computing-unit module, a power-supply module, and a data-and-cable hiding box; the computing-unit module is electrically connected to the three fisheye camera modules through USB cables, which serve the dual functions of power supply and data transmission; the power-supply module supplies power to the computing-unit module; the data-and-cable hiding box is arranged on one side of the positioning box and accommodates the data and connecting cables; the three fisheye camera modules of the positioning box are fixedly mounted below the frosted black glass-fiber platform, the four corners of which are threaded onto four frosted black polyvinyl chloride struts, and the four struts are vertically and fixedly connected to the four corners of the transparent acrylic platform.
2. The desktop-level rigid body positioning device based on optical positioning technology according to claim 1, wherein each fisheye camera module consists of a camera lens, an optical filter and an imaging module, and is used to identify the light-emitting element of the object device to be positioned.
3. The desktop-level rigid body positioning device based on optical positioning technology according to claim 2, wherein the camera lens is a 185-degree fisheye lens capable of observing the required positioning range.
4. The desktop-level rigid body positioning device based on optical positioning technology according to claim 2, wherein the optical filter is an ultraviolet filter for filtering out everyday visible-light bands, leaving only light in the 390-400 nm band after filtering.
5. The desktop-level rigid body positioning device based on optical positioning technology according to claim 2, wherein the imaging module is a CMOS imaging chip covering a wide spectral range, with a required resolution greater than 1080P.
6. The desktop-level rigid body positioning device based on optical positioning technology according to claim 1, wherein, as to the position arrangement of the fisheye camera modules, the three-dimensional spatial position of the rigid body is determined by triangulation and the modules are arranged in a scalene triangle, with no two sides equal, to prevent the occurrence of singular points that would affect the positioning accuracy of the system.
7. A positioning method using the desktop-level rigid body positioning device based on optical positioning technology according to any one of claims 1-6, comprising the following steps:
(1) Multi-object positioning initialization:
Because the ultraviolet detection points are anonymous, when multiple objects to be positioned are put in, there is no way to determine which object corresponds to which ultraviolet detection point in the camera images; it is therefore necessary to input the relative positions of the objects to be positioned; specifically, if N objects are put in, the relative position of each of them must be input: { p_i | i = 1, …, N }, where p_i represents the relative position of the i-th object among all objects; this requirement is met by arranging marks on the transparent acrylic platform of the positioning box and requiring the user to place each object to be positioned on a mark; correspondingly, the three fisheye camera modules installed in the positioning box detect the ultraviolet light emitted by the objects and obtain 3 groups of azimuth measurements: { b_i^m | i = 1, …, N }, m ∈ {1, 2, 3};
wherein b_i^m denotes the i-th azimuth measurement obtained by the m-th fisheye camera module, and the absolute positions c_m (m ∈ {1, 2, 3}) of the fisheye camera modules have been obtained through the prior design of the positioning box and extrinsic calibration;
Multi-object positioning initialization is then carried out, taking the relative positions and azimuth measurements as input and iterating the two stages of data association and position solving until convergence;
(2) Multi-target pixel tracking:
Because each fisheye camera module detects multiple light spots, the spots must be correctly associated across fisheye camera modules before a correct position calculation can be performed. Therefore, after multi-object positioning initialization succeeds, the multiple light spots detected by each fisheye camera module are tracked. Specifically, based on the spot detection result of the previous frame, an $n \times n$ data matrix $D$ is computed, where $n$ is the number of light spots (the spot count is the same in both frames). Element $D_{ij}$ of the data matrix represents the Euclidean distance between the $i$-th spot of the current frame and the $j$-th spot of the previous frame. Based on this data matrix, data association of spots between consecutive frames is performed using a classical bipartite-graph matching algorithm;
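The distance-matrix construction and frame-to-frame matching of step (2) can be sketched in Python as follows. This is an illustrative sketch only: all function and variable names are our own, and a brute-force permutation search stands in for a production implementation of the Hungarian (bipartite-matching) algorithm, which is acceptable for the handful of spots seen per camera.

```python
import math
from itertools import permutations

def track_spots(prev_spots, curr_spots):
    """Associate light spots across two frames by minimising total distance.

    prev_spots, curr_spots: lists of (x, y) pixel coordinates, same length n.
    Returns `match` where match[i] = j means current spot i corresponds to
    previous spot j. Brute force over permutations is exact and fast enough
    for a few spots; a real system would use the Hungarian algorithm (O(n^3)).
    """
    n = len(curr_spots)
    assert n == len(prev_spots), "spot count must match between frames"
    # Data matrix D: D[i][j] = Euclidean distance from current spot i
    # to previous spot j (the matrix described in the claim).
    D = [[math.dist(curr_spots[i], prev_spots[j]) for j in range(n)]
         for i in range(n)]
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(D[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm)

# Example: three spots that each moved slightly between frames.
prev = [(100.0, 100.0), (200.0, 150.0), (50.0, 300.0)]
curr = [(201.0, 151.0), (51.0, 299.0), (101.0, 102.0)]
print(track_spots(prev, curr))  # each current spot maps to its nearest ancestor
```

For larger spot counts, `scipy.optimize.linear_sum_assignment` solves the same assignment problem in polynomial time.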
(3) Position calculation:
when the detection results of the three fisheye camera modules for one rigid body are correctly associated, the spatial three-dimensional position of the object is calculated by minimizing the following expression:

$\hat{p}_i = \arg\min_{p} \sum_{m=1}^{3} \left\| \left(I - b_i^{m} (b_i^{m})^{\top}\right)\left(p - s_m\right) \right\|^2$

where $\hat{p}_i$ is the absolute position of the $i$-th object to be positioned, $s_m$ is the absolute position of the $m$-th fisheye camera module, and $b_i^{m}$ is the azimuth observation of the $i$-th object to be positioned; the globally optimal solution is finally obtained in closed form.
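The closed-form least-squares triangulation from bearing measurements can be sketched as below. This is a reconstruction under our own assumptions (bearings as unit vectors, normal-equation form, and all names are illustrative, not the patent's notation): each bearing ray contributes a projection term, and the normal equations $A\,p = r$ with $A = \sum_m (I - b_m b_m^{\top})$ and $r = \sum_m (I - b_m b_m^{\top})\, s_m$ yield the position directly.

```python
import math

def solve3(A, r):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [r[i]] for i, row in enumerate(A)]
    for c in range(3):
        p = max(range(c, 3), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        for k in range(3):
            if k != c:
                f = M[k][c] / M[c][c]
                M[k] = [a - f * b for a, b in zip(M[k], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def triangulate(cameras, bearings):
    """Closed-form least-squares intersection of bearing rays.

    cameras:  camera positions s_m (3-vectors).
    bearings: unit bearing vectors b_m pointing from s_m toward the object.
    Minimises sum_m ||(I - b_m b_m^T)(p - s_m)||^2 via its normal equations.
    """
    A = [[0.0] * 3 for _ in range(3)]
    r = [0.0] * 3
    for s, b in zip(cameras, bearings):
        for i in range(3):
            for j in range(3):
                P = (1.0 if i == j else 0.0) - b[i] * b[j]  # (I - b b^T)_ij
                A[i][j] += P
                r[i] += P * s[j]
    return solve3(A, r)

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Demo: three cameras at the vertices of a triangle, looking at a known point.
true_p = [0.1, 0.2, 0.3]
cams = [[0.3, 0.0, 0.5], [-0.15, 0.26, 0.5], [-0.15, -0.26, 0.5]]
bearings = [unit([t - c for t, c in zip(true_p, s)]) for s in cams]
print([round(x, 6) for x in triangulate(cams, bearings)])  # recovers true_p
```

With noise-free bearings the recovered position is exact up to floating-point error; with noisy measurements the same formula gives the least-squares optimum.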
8. The desktop-level rigid body positioning method based on optical positioning technology according to claim 7, wherein the multi-object positioning initialization in step (1) specifically comprises the following sub-steps:
(1.1) initializing the absolute position of the object to be positioned as the relative position of the object to be positioned;
(1.2) back-calculating the expected azimuth measurement of each object to be positioned from the absolute positions of the fisheye camera modules and of the objects to be positioned;
(1.3) calculating the differences between the expected and actual azimuth measurements, and establishing a matching according to the Hungarian algorithm;
(1.4) calculating the position of the object by using the actual azimuth measurement on the matching, and updating the absolute position of the object to be positioned according to the position;
(1.5) iterating until the difference between the expected and actual azimuth measurements falls below a set threshold, and returning the absolute positions of the objects to be positioned when the iteration ends.
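Sub-steps (1.2)-(1.3), back-calculating expected bearings and associating them with the measured ones, might look like this in Python. All names are our own illustrative assumptions, the angular-difference cost is one reasonable choice, and a brute-force permutation search again stands in for the Hungarian algorithm named in the claim.

```python
import math
from itertools import permutations

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def expected_bearings(cam, objects):
    """Sub-step (1.2): back-calculate the bearing each object would produce
    at camera position `cam`, given current absolute position estimates."""
    return [unit([o[k] - cam[k] for k in range(3)]) for o in objects]

def associate(expected, measured):
    """Sub-step (1.3): match measured bearings to expected ones by minimum
    total angular difference. Returns perm where perm[i] is the index of
    the measured bearing assigned to object i."""
    n = len(expected)

    def ang(a, b):
        d = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
        return math.acos(d)

    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(ang(expected[i], measured[perm[i]]) for i in range(n))
        if cost < best:
            best, best_perm = cost, perm
    return list(best_perm)

# Demo: two objects whose measurements arrive in shuffled order.
cam = [0.0, 0.0, 0.5]
objs = [[0.1, 0.0, 0.0], [-0.1, 0.1, 0.0]]
exp = expected_bearings(cam, objs)
meas = [exp[1], exp[0]]          # shuffled, as anonymous detections would be
print(associate(exp, meas))      # recovers the shuffle
```

In the full loop of sub-steps (1.1)-(1.5), the matching produced here would feed the position solver, whose output updates the absolute position estimates for the next iteration.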
9. An electronic device, comprising:
One or more processors;
A memory for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the desktop-level rigid body positioning method based on optical positioning technology according to any one of claims 7-8.
10. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the desktop-level rigid body positioning method based on optical positioning technology according to any one of claims 7-8.
CN202410324107.5A 2024-03-21 2024-03-21 Desktop-level rigid body positioning equipment and method based on optical positioning technology Active CN117934598B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410324107.5A CN117934598B (en) 2024-03-21 2024-03-21 Desktop-level rigid body positioning equipment and method based on optical positioning technology

Publications (2)

Publication Number Publication Date
CN117934598A true CN117934598A (en) 2024-04-26
CN117934598B CN117934598B (en) 2024-06-11

Family

ID=90763426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410324107.5A Active CN117934598B (en) 2024-03-21 2024-03-21 Desktop-level rigid body positioning equipment and method based on optical positioning technology

Country Status (1)

Country Link
CN (1) CN117934598B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008151471A1 (en) * 2007-06-15 2008-12-18 Tsinghua University A robust precise eye positioning method in complicated background image
US20140315570A1 (en) * 2013-04-22 2014-10-23 Alcatel-Lucent Usa Inc. Localization systems and methods
WO2021063127A1 (en) * 2019-09-30 2021-04-08 深圳市瑞立视多媒体科技有限公司 Pose positioning method and related equipment of active rigid body in multi-camera environment
CN113643378A (en) * 2019-09-30 2021-11-12 深圳市瑞立视多媒体科技有限公司 Active rigid body pose positioning method in multi-camera environment and related equipment
KR20210094450A (en) * 2020-01-20 2021-07-29 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. Method and apparatus for positioning vehicle, electronic device and storage medium
CN111311656A (en) * 2020-02-21 2020-06-19 辽宁石油化工大学 Moving target detection method and device suitable for vehicle-mounted fisheye camera
US20220309835A1 (en) * 2021-03-26 2022-09-29 Harbin Institute Of Technology, Weihai Multi-target detection and tracking method, system, storage medium and application
CN114708309A (en) * 2022-02-22 2022-07-05 广东工业大学 Vision indoor positioning method and system based on building plan prior information
CN117274378A (en) * 2023-09-15 2023-12-22 东北电力大学 Indoor positioning system and method based on AI vision fusion three-dimensional scene
CN117576219A (en) * 2023-10-21 2024-02-20 东北石油大学 Camera calibration equipment and calibration method for single shot image of large wide-angle fish-eye lens

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yingjian Wang, Xiangyong Wen, Longji Yin, Chao Xu, Yanjun Cao, Fei Gao: "Certifiably Optimal Mutual Localization with Anonymous Bearing Measurements", IEEE Robotics and Automation Letters, 12 July 2022 *
Xu Chenghao (许成浩): "Research on static indoor scene localization and mapping based on multi-camera vision and motor coupling", Information Science and Technology Series, 15 March 2024 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant