CN116342662A - Tracking and positioning method, device, equipment and medium based on multi-camera

Tracking and positioning method, device, equipment and medium based on multi-camera

Info

Publication number
CN116342662A
Authority
CN
China
Prior art keywords
camera
tracked object
information
cameras
image information
Prior art date
Legal status
Granted
Application number
CN202310323947.5A
Other languages
Chinese (zh)
Other versions
CN116342662B (en)
Inventor
王侃
何元会
周烽
李体雷
田承林
刘昊扬
戴若犂
Current Assignee
BEIJING NOITOM TECHNOLOGY Ltd
Original Assignee
BEIJING NOITOM TECHNOLOGY Ltd
Priority date
Filing date
Publication date
Application filed by BEIJING NOITOM TECHNOLOGY Ltd filed Critical BEIJING NOITOM TECHNOLOGY Ltd
Priority to CN202310323947.5A
Publication of CN116342662A
Application granted
Publication of CN116342662B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/20 — Analysis of motion
    • G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/292 — Multi-camera tracking
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a tracking and positioning method, device, equipment and medium based on a multi-view camera. The tracking and positioning method based on the multi-view camera comprises the following steps: acquiring first preset relative positional relationship information among a plurality of cameras in the multi-view camera, wherein the plurality of cameras are connected through a fixing frame with a preset fixed structure so that the relative positional relationship among the plurality of cameras is fixed; controlling the plurality of cameras to simultaneously acquire image information of a tracked object to obtain at least two pieces of image information of the tracked object, wherein the tracked object is an object provided with a preset target rigid body; obtaining topological structure data of the target rigid body; and determining position information of the tracked object based on the first preset relative positional relationship information, the at least two pieces of image information and the topological structure data. In this way, scan-field calibration is not required when tracking and positioning are performed with the multi-view camera, the plurality of cameras in the multi-view camera do not need to be reinstalled and recalibrated each time, and installation and maintenance costs are reduced.

Description

Tracking and positioning method, device, equipment and medium based on multi-camera
Technical Field
The disclosure relates to the technical field of cameras, and in particular relates to a tracking and positioning method, device, equipment and medium based on a multi-camera.
Background
Binocular or multi-view tracking is a technique that uses cameras mounted at multiple fixed positions and orientations to track and locate objects in the region visible to the cameras. Binocular or multi-view tracking generally includes two steps: calibration of the binocular or multi-view camera, and tracking and positioning of objects by the binocular or multi-view camera. Calibration acquires the assembly relationship between the cameras in the binocular or multi-view camera, and objects are then tracked and positioned based on that assembly relationship.
However, in existing binocular or multi-view tracking and positioning schemes, each camera of the binocular or multi-view camera is erected in whatever structural space the site provides and can only be fixed using existing on-site conditions. Because the position and orientation of each camera relative to the other cameras differ every time the cameras are installed, scan-field calibration must be performed on the installed binocular or multi-view camera after the cameras are installed in the tracking area. In addition, during use, if the installation position or orientation of one or more cameras in the binocular or multi-view camera changes, the tracking accuracy decreases or normal tracking and positioning even becomes impossible, and the field must be re-scanned. Furthermore, if the site is changed, each camera in the binocular or multi-view camera must be installed and calibrated again at the new site, so the field must be re-scanned at every installation, which leads to high installation difficulty, high maintenance cost and other problems.
Disclosure of Invention
In order to solve the technical problems, the present disclosure provides a tracking and positioning method, device, equipment and medium based on a multi-camera.
A first aspect of an embodiment of the present disclosure provides a tracking positioning method based on a multi-view camera, including:
acquiring first preset relative position relation information among a plurality of cameras in the multi-view camera, wherein the plurality of cameras are respectively connected through a fixing frame with a preset fixing structure so as to fix the relative position relation among the plurality of cameras;
controlling a plurality of cameras to acquire image information of a tracked object at the same time to obtain at least two image information of the tracked object, wherein the tracked object is an object with a preset target rigid body;
obtaining topological structure data of a target rigid body;
and determining the position information of the tracked object based on the first preset relative position relation information, the at least two image information and the topological structure data so as to track and position the tracked object.
A second aspect of embodiments of the present disclosure provides a tracking and positioning device based on a multi-camera, including:
the first acquisition module is used for acquiring first preset relative position relation information among a plurality of cameras in the multi-camera, wherein the plurality of cameras in the multi-camera are respectively connected through a fixing frame with a preset fixing structure so as to fix the relative position relation among the plurality of cameras in the multi-camera;
The second acquisition module is used for controlling the cameras to acquire image information of the tracked object at the same time to obtain at least two image information of the tracked object, wherein the tracked object is an object with a preset target rigid body;
the third acquisition module is used for acquiring topological structure data of the target rigid body;
the first positioning module is used for determining the position information of the tracked object based on the first preset relative position relation information, at least two pieces of image information and topology structure data so as to track and position the tracked object.
A third aspect of the disclosed embodiments provides an electronic device, comprising:
a processor;
a memory for storing executable instructions;
the processor is configured to read the executable instructions from the memory, and execute the executable instructions to implement the tracking positioning method based on the multi-camera provided in the first aspect.
A fourth aspect of embodiments of the present disclosure provides a computer-readable storage medium storing a computer program, which when executed by a processor, causes the processor to implement the method for tracking positioning based on a multi-camera provided in the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
According to the tracking and positioning method, device, equipment and medium based on the multi-view camera provided by the embodiments of the present disclosure, first preset relative positional relationship information among a plurality of cameras in the multi-view camera can be acquired, the plurality of cameras being connected through a fixing frame with a preset fixed structure so that the relative positional relationship among them is fixed. The plurality of cameras are controlled to simultaneously acquire image information of a tracked object to obtain at least two pieces of image information of the tracked object, the tracked object being an object provided with a preset target rigid body, and topological structure data of the target rigid body are obtained at the same time. The position information of the tracked object is then determined based on the first preset relative positional relationship information, the at least two pieces of image information and the topological structure data, so that the tracked object is tracked and positioned. Because the relative positional relationship among the plurality of cameras in the multi-view camera is fixed by the fixing frame of the preset fixed structure, the field does not need to be re-scanned and the multi-view camera does not need to be recalibrated each time the site is changed, and the cameras in the multi-view camera do not need to be reinstalled each time tracking and positioning are performed, which reduces installation and maintenance costs.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a tracking and positioning method based on a multi-camera provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present disclosure;
FIG. 3 is a schematic view of a structure between a plurality of cameras in a multi-view camera according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a scan field calibration provided in an embodiment of the present disclosure;
FIG. 5 is a flow chart of another method of tracking and locating based on a multi-view camera provided by an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a tracking and positioning device based on a multi-camera according to an embodiment of the disclosure;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising" and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It should be noted that references to "a", "an" and "a plurality of" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
In general, in existing binocular or multi-view tracking and positioning schemes, each camera of the binocular or multi-view camera is erected in whatever structural space the site provides and can only be fixed using existing on-site conditions. Because the position and orientation of each camera relative to the other cameras differ every time the cameras are installed, scan-field calibration must be performed on the installed binocular or multi-view camera after the cameras are installed in the tracking area. In addition, during use, if the installation position or orientation of one or more cameras in the binocular or multi-view camera changes, the tracking accuracy decreases or normal tracking and positioning even becomes impossible, and the field must be re-scanned. Furthermore, if the site is changed, each camera in the binocular or multi-view camera must be installed and calibrated again at the new site, so the field must be re-scanned at every installation, which leads to high installation difficulty, high maintenance cost and other problems. In view of these problems, embodiments of the present disclosure provide a tracking and positioning method based on a multi-view camera, which is described below with reference to specific embodiments.
Fig. 1 is a flowchart of a tracking and positioning method based on a multi-view camera provided by an embodiment of the present disclosure. The method may be executed by a tracking and positioning device based on a multi-view camera, which may be implemented in software and/or hardware and may be configured in an electronic device such as a server or a terminal, the terminal specifically including a mobile phone, a computer, a tablet computer, or the like. In addition, the method may be applied to the application scenario shown in fig. 2, which includes at least one group of multi-view cameras, such as the multi-view cameras 21, 22 and 23 shown in fig. 2, a tracked object 24 and an electronic device 25. It can be understood that the tracking and positioning method based on the multi-view camera provided in the embodiments of the present disclosure may also be applied to other scenarios.
In fig. 2, the multi-camera 21, the multi-camera 22, the multi-camera 23 and the electronic device 25 are in the same lan, and the multi-camera 21, the multi-camera 22 and the multi-camera 23 can collect image information of the tracked object 24 and send the image information to the electronic device 25, so that the electronic device 25 performs tracking positioning on the tracked object 24 based on the image information.
The tracking and positioning method based on the multi-view camera shown in fig. 1 is described below for the case where there is only one group of multi-view cameras in fig. 2; the method may be performed, for example, by the electronic device 25 in fig. 2. As shown in fig. 1, the tracking and positioning method based on the multi-view camera provided in this embodiment includes the following steps.
S110, acquiring first preset relative position relation information among a plurality of cameras in the multi-camera, wherein the plurality of cameras are respectively connected through a fixing frame with a preset fixing structure, so that the relative position relation among the plurality of cameras is fixed.
In the embodiment of the disclosure, when the electronic device performs tracking and positioning on the tracked object, first preset relative positional relationship information between a plurality of cameras in the multi-camera is obtained, wherein the plurality of cameras are respectively connected through a fixing frame with a preset fixing structure, so that the relative positional relationship between the plurality of cameras is fixed.
Fig. 3 is a schematic structural diagram of a plurality of cameras in a multi-view camera according to an embodiment of the present disclosure.
As shown in fig. 3, a group of multi-view cameras includes three cameras, namely a camera 31, a camera 32 and a camera 33. The cameras 31, 32 and 33 are connected through a fixing frame with a preset fixed structure, so that the relative positions and orientations of the camera 31, the camera 32 and the camera 33 are fixed and no longer change, forming a multi-view camera system with a stable structure. That is, a multi-view camera system with a constant structure is formed by fixing the transformation T1 between the camera 31 and the camera 32 and the transformation T2 between the camera 32 and the camera 33, where T1 and T2 represent the relative distance and orientation between the camera 31 and the camera 32 and between the camera 32 and the camera 33, respectively.
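For illustration only, the fixed relative poses T1 and T2 described above could be kept as homogeneous transforms relative to a reference camera. The minimal Python sketch below makes that concrete; all class, function and field names, as well as the numeric values, are assumptions rather than anything specified by the patent.

```python
import numpy as np

class FixedRig:
    """Hypothetical container for a multi-view camera whose relative poses are fixed by the frame."""
    def __init__(self):
        # 4x4 homogeneous transform of each camera relative to camera 31 (the reference camera).
        self.extrinsics = {"cam31": np.eye(4)}

    def add_camera(self, name, T_ref_from_cam):
        self.extrinsics[name] = T_ref_from_cam

def make_transform(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# T1 fixes camera 32 relative to camera 31; chaining T1 and T2 places camera 33.
T1 = make_transform(np.eye(3), np.array([0.5, 0.0, 0.0]))   # example values only
T2 = make_transform(np.eye(3), np.array([0.5, 0.0, 0.0]))
rig = FixedRig()
rig.add_camera("cam32", T1)
rig.add_camera("cam33", T1 @ T2)
```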
In some embodiments of the present disclosure, the electronic device may obtain the first preset relative positional relationship information among the plurality of cameras in the multi-view camera by calculation from the pre-stored internal parameters, external parameters and installation parameters of the plurality of cameras in the multi-view camera.
In other embodiments of the present disclosure, the first preset relative positional relationship information between the plurality of cameras in the multi-view camera is stored in the memory of the electronic device in advance, and the electronic device may directly perform the query based on the identifier of the multi-view camera to obtain the first preset relative positional relationship information between the plurality of cameras in the multi-view camera.
The first preset relative positional relationship information among the plurality of cameras can be obtained by performing scan-field calibration on the multi-view camera before the multi-view camera is used, and the first preset relative positional relationship information is stored in the electronic equipment.
Fig. 4 is a schematic structural diagram of scan-field calibration according to an embodiment of the present disclosure. The specific process of performing scan-field calibration before the multi-view camera is used is described in detail below with reference to fig. 4.
As shown in fig. 4, the schematic structural diagram of scan-field calibration includes a multi-view camera 41, a scan bar 42 and an electronic device 43. The scan bar 42 carries three reflective points, i.e., marker points, whose relative positions are known and fixed. During scan-field calibration, a calibrator holds the scan bar 42 and moves it so that the three reflective points cover the shooting areas of the cameras in the multi-view camera 41. Meanwhile, the electronic device 43 controls the cameras of the multi-view camera to shoot at the same frame rate, the image information shot by each camera is acquired and sent to the electronic device 43, and the electronic device 43 makes a preliminary judgment on the image information shot by each camera to determine whether it meets the requirements of scan-field calibration. When the requirements of scan-field calibration are met, the relative positional relationships among the cameras are calculated based on the image information shot by each camera, the calculated relative positional relationships are further calibrated according to the known positional relationships among the reflective points on the scan bar 42, and when the calibration result meets a preset condition, the calculated relative positional relationships among the cameras are stored as the first preset relative positional relationship information among the plurality of cameras in the multi-view camera 41.
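As a rough illustration of how such a wand-based calibration could be computed, the sketch below estimates the relative pose of one camera pair from synchronized marker detections and fixes the metric scale with the known distance between two scan-bar markers. It assumes OpenCV, assumes for simplicity that both cameras share the intrinsic matrix K, and all function and parameter names other than OpenCV's are hypothetical; the patent itself does not prescribe this algorithm.

```python
import cv2
import numpy as np

def calibrate_pair(pts_cam_a, pts_cam_b, K, bar_len_m, marker_rows=(0, 1)):
    """pts_cam_*: (N, 2) matched marker detections collected over many frames."""
    E, _ = cv2.findEssentialMat(pts_cam_a, pts_cam_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_cam_a, pts_cam_b, K)     # t is only up to scale

    # Triangulate two scan-bar markers detected in the same frame to recover metric scale.
    P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_b = K @ np.hstack([R, t])
    i, j = marker_rows                                           # row indices of those two markers
    X = cv2.triangulatePoints(P_a, P_b, pts_cam_a[[i, j]].T, pts_cam_b[[i, j]].T)
    X = (X[:3] / X[3]).T
    scale = bar_len_m / np.linalg.norm(X[0] - X[1])
    return R, t * scale   # stored as part of the first preset relative positional relationship
```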
S120, controlling a plurality of cameras to acquire image information of a tracked object at the same time, and obtaining at least two image information of the tracked object, wherein the tracked object is an object with a preset target rigid body.
In the embodiment of the disclosure, after acquiring first preset relative positional relationship information between a plurality of cameras in a multi-camera, the electronic device controls the plurality of cameras to simultaneously acquire image information of a tracked object, so as to obtain at least two image information of the tracked object, wherein the tracked object is an object with a preset target rigid body.
In an embodiment of the present disclosure, the target rigid body is a rigid body containing at least three reflection points by which the target rigid body is identified.
Further, at least one target rigid body may be preset on the tracked object.
In the embodiment of the present disclosure, all of the plurality of cameras in the multi-view camera may acquire image information of the tracked object at the same time, or at least two cameras among the plurality of cameras may acquire image information of the tracked object.
S130, obtaining topological structure data of the target rigid body.
In the embodiment of the disclosure, each target rigid body has a unique identifier corresponding to the target rigid body, and the electronic device can search from the memory according to the identifier of the target rigid body to obtain the topology structure data of the target rigid body.
Alternatively, the topology data of the target rigid body may be preset and stored in the electronic device, wherein the topology data may be a positional relationship of at least three reflection points of the target rigid body and positional coordinate information of each reflection point.
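Purely as an illustration of what such topological structure data might look like in memory (the concrete layout is not specified by the patent), a rigid body could be stored as marker coordinates in its own frame, keyed by its unique identifier, with the pairwise marker distances used for identification; the names and values below are assumptions.

```python
import itertools
import numpy as np

# Hypothetical topology data: reflective-point coordinates expressed in the rigid body's own
# frame, keyed by a unique rigid-body identifier.
RIGID_BODIES = {
    "rb_001": np.array([[0.00, 0.00, 0.00],
                        [0.08, 0.00, 0.00],
                        [0.03, 0.06, 0.00]]),   # at least three reflective points
}

def pairwise_distances(markers):
    """Inter-marker distances; their pattern is what makes a rigid body identifiable."""
    return {(i, j): float(np.linalg.norm(markers[i] - markers[j]))
            for i, j in itertools.combinations(range(len(markers)), 2)}
```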
And S140, determining the position information of the tracked object based on the first preset relative position relation information, at least two pieces of image information and topological structure data so as to track and position the tracked object.
In the embodiment of the disclosure, after obtaining the topology structure data of the tracked object, the electronic device determines the position information of the tracked object based on the first preset relative position relationship information, at least two pieces of image information and the topology structure data, so as to track and position the tracked object.
In some embodiments of the present disclosure, there is only one target rigid body on the tracked object. The electronic device may extract feature points from the at least two pieces of image information to obtain at least two feature points, where a feature point may be understood as a reflective point on the tracked object extracted from the image information captured by a camera. The three-dimensional positions of the tracked feature points are then calculated according to the cameras corresponding to the at least two feature points and the first preset relative positional relationship information among the plurality of cameras. Next, the position and attitude of the tracked object under the multi-view camera are determined according to the topological structure data of the target rigid body. A transformation matrix from the multi-view camera to a target coordinate system is then read, the position and attitude of the tracked object under the multi-view camera are transformed into the position and attitude under the target coordinate system, and that position and attitude are determined as the position information of the tracked object, so that the tracked object is tracked and positioned.
Further, the target coordinate system may be a world coordinate system calibrated in advance.
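A minimal sketch of the pipeline just described, assuming two cameras with a known relative pose: triangulate the matched marker detections, fit the rigid body's pose to its topology data with a least-squares (Kabsch/SVD) alignment, and move the result into the target coordinate system. The function names and the choice of Kabsch alignment are illustrative assumptions, not details taken from the patent.

```python
import cv2
import numpy as np

def triangulate(pts1, pts2, K1, K2, R12, t12):
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R12, t12.reshape(3, 1)])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (X[:3] / X[3]).T                      # (N, 3) marker positions in camera-1's frame

def rigid_body_pose(measured, topology):
    """Least-squares rotation/translation aligning the rigid body's topology to the measurement."""
    cm, ct = measured.mean(0), topology.mean(0)
    U, _, Vt = np.linalg.svd((measured - cm).T @ (topology - ct))
    R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    return R, cm - R @ ct

def to_target_frame(R_cam, t_cam, T_target_from_cam):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R_cam, t_cam
    return T_target_from_cam @ T                 # pose of the tracked object in the target frame
```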
In other embodiments of the present disclosure, there are at least two target rigid bodies on the tracked object. The electronic device may extract feature points from the at least two pieces of image information to obtain at least two feature points, match each feature point according to the camera corresponding to the feature point, the identifier of each target rigid body and the topological structure data corresponding to each target rigid body, and determine the target rigid body corresponding to each feature point. The three-dimensional positions of the tracked feature points are then calculated according to the camera corresponding to each feature point and the first preset relative positional relationship information among the plurality of cameras. Next, the position and attitude of the tracked object under the multi-view camera are determined according to the topological structure data of each target rigid body and the preset positional relationship between the at least two target rigid bodies. A transformation matrix from the multi-view camera to the target coordinate system is then read, the position and attitude of the tracked object under the multi-view camera are transformed into the position and attitude under the target coordinate system, and that position and attitude are determined as the position information of the tracked object, so that the tracked object is tracked and positioned.
In the embodiment of the present disclosure, first preset relative positional relationship information among a plurality of cameras in the multi-view camera can be acquired, the plurality of cameras being connected through a fixing frame with a preset fixed structure so that the relative positional relationship among them is fixed. The plurality of cameras are controlled to simultaneously acquire image information of the tracked object to obtain at least two pieces of image information of the tracked object, the tracked object being an object provided with a preset target rigid body, and topological structure data of the target rigid body are obtained at the same time. The position information of the tracked object is then determined based on the first preset relative positional relationship information, the at least two pieces of image information and the topological structure data, so that the tracked object is tracked and positioned.
On the basis of the above embodiments of the present disclosure, after controlling a plurality of cameras to collect image information of a tracked object at the same time to obtain at least two image information of the tracked object, the tracking positioning method based on the plurality of cameras may further include: and performing binarization processing on at least two pieces of image information to obtain at least two pieces of binarized image information of the tracked object.
In the embodiment of the present disclosure, binarization may be understood as adjusting the gray values in the image information, that is, setting the gray value of regions whose gray value is lower than a preset gray threshold to 0, and raising the gray value of regions whose gray value is higher than the preset gray threshold or setting it to the maximum gray value of 255.
In the embodiment of the present disclosure, binarization can be performed on the acquired at least two pieces of image information of the tracked object, so that the features in the acquired image information are clearer, the feature points of the tracked object are easier to identify and extract, and the accuracy of tracking and positioning the tracked object is improved.
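A minimal sketch of this binarization step using OpenCV; the threshold value is an arbitrary example, not one specified by the patent.

```python
import cv2

def binarize(gray_image, threshold=200):
    # Pixels below the preset gray threshold go to 0, pixels at or above it go to 255,
    # which makes the reflective markers stand out against the background.
    _, binary = cv2.threshold(gray_image, threshold, 255, cv2.THRESH_BINARY)
    return binary
```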
Further, determining the position information of the tracked object based on the first preset relative position relationship information, the at least two image information, and the topology structure data includes: extracting features of at least two pieces of binarized image information to obtain at least two feature points of a tracked object, and obtaining coordinates of the at least two feature points under the same camera coordinate system based on first preset relative position relation information; for each feature point in at least two feature points, converting the coordinates of the feature point under the same camera coordinate system into a target coordinate system to obtain a target coordinate corresponding to the feature point; position information of the tracked object is determined based on the topology data and the target coordinates.
Specifically, after obtaining the at least two pieces of binarized image information, the electronic device may input them into a preset machine learning model, which performs feature extraction on the at least two pieces of binarized image information to obtain at least two feature points of the tracked object. One camera is selected from the plurality of cameras as a reference camera, and according to the internal parameters and external parameters of each camera and the first preset relative positional relationship information, the at least two feature points are converted into the reference camera's coordinate system to obtain the coordinates of the at least two feature points under the same camera coordinate system. For each of the at least two feature points, the coordinates of the feature point under the same camera coordinate system are then converted into the target coordinate system to obtain the target coordinates corresponding to the feature point. The position and attitude of the tracked object are further determined based on the topological structure data and the target coordinates, and the position and attitude are determined as the position information of the tracked object.
In the embodiment of the present disclosure, feature extraction is performed on the at least two pieces of binarized image information, and the position information of the tracked object is then obtained from the extracted at least two feature points, the first preset relative positional relationship information and the topological structure data, which improves the accuracy of the obtained position information of the tracked object.
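To illustrate the coordinate unification described above, the sketch below carries points from an arbitrary camera's frame into the reference camera's frame via the fixed rig extrinsics and then into the target coordinate system. The transform names are assumptions; only the chaining of homogeneous transforms is intended to mirror the text.

```python
import numpy as np

def to_homogeneous(points):                       # (N, 3) -> (N, 4)
    return np.hstack([points, np.ones((len(points), 1))])

def unify_and_convert(points_in_cam_k, T_ref_from_k, T_target_from_ref):
    # First express the points in the reference camera's frame (the "same camera coordinate
    # system"), then convert them into the target coordinate system.
    pts_ref = (T_ref_from_k @ to_homogeneous(points_in_cam_k).T).T[:, :3]
    pts_target = (T_target_from_ref @ to_homogeneous(pts_ref).T).T[:, :3]
    return pts_target
```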
In an embodiment of the present disclosure, at least two groups of multi-view cameras are included, the at least two groups of multi-view cameras being located in the same local area network, and the tracking and positioning method based on the multi-view camera includes: tracking and positioning the tracked object based on the at least two groups of multi-view cameras.
In the embodiment of the present disclosure, each of the at least two groups of multi-view cameras may be a multi-view camera with the same structure, that is, each group contains the same number of cameras, and the relative positional relationships among the cameras in each group are also the same.
In the embodiment of the present disclosure, tracking and positioning the tracked object with at least two groups of multi-view cameras can improve the accuracy of the obtained position information of the tracked object, and the tracking area can be enlarged conveniently and quickly through the placement of the at least two groups of multi-view cameras relative to one another.
Fig. 5 is a flowchart of another tracking positioning method based on a multi-camera according to an embodiment of the disclosure, and as shown in fig. 5, the flowchart of the tracking positioning method based on the multi-camera includes the following steps.
S510, acquiring first preset relative position relation information among a plurality of cameras in the multi-view camera, wherein the plurality of cameras are respectively connected through a fixing frame with a preset fixed structure so that the relative positional relationship among the plurality of cameras is fixed; at least two groups of multi-view cameras are included, and the at least two groups of multi-view cameras are located in the same local area network.
In the embodiment of the present disclosure, the installation positions of the at least two groups of multi-view cameras satisfy a preset area coverage rate, so that the tracked object can be tracked and positioned in the target area based on the at least two groups of multi-view cameras. That is, the overlap rate between the shooting range of each group of multi-view cameras and the shooting range of each adjacent group satisfies the preset area coverage rate. The overlap rate between adjacent groups is obtained through an area coverage test, and while the area coverage test is performed, the relative positional relationship between the at least two groups of multi-view cameras can also be obtained and stored.
Performing the area coverage test on the at least two groups of multi-view cameras ensures connectivity between them, that is, the at least two groups of multi-view cameras are cascaded.
Further, the preset area coverage rate may be understood as an area coverage rate preset for tracking and positioning the tracked object in the target area based on at least two groups of multi-view cameras.
In the embodiment of the present disclosure, the specific implementation manner of obtaining the first preset relative positional relationship information between the plurality of cameras in the multi-view camera is similar to S110, and will not be described herein.
Based on the above embodiment, when the at least two groups of multi-view cameras can simultaneously acquire image information of the tracked object, the electronic device tracks and positions the tracked object through S520-S540; when the at least two groups of multi-view cameras cannot simultaneously acquire image information of the tracked object, the electronic device tracks and positions the tracked object through S550-S570.
S520, when at least two groups of multi-camera can acquire the image information of the tracked object at the same time, controlling the cameras to acquire the image information of the tracked object at the same time aiming at each camera in each group of multi-camera, so as to obtain at least two image information of the tracked object.
In the embodiment of the present disclosure, that the at least two groups of multi-view cameras can acquire image information of the tracked object at the same time may be understood as follows: for each group of the at least two groups of multi-view cameras, at least two cameras among the plurality of cameras in that group can acquire image information of the tracked object.
In the embodiment of the present disclosure, in S520, for each camera in each group of multiple cameras, the control camera simultaneously collects the image information of the tracked object, and the specific implementation of obtaining at least two image information of the tracked object is similar to S120, and will not be described herein.
S530, determining first position information of the tracked object based on the first preset relative position relation information, the at least two image information and the topological structure data for each of the at least two groups of multi-camera.
In the embodiment of the present disclosure, the specific implementation of determining the first position information of the tracked object based on the first preset relative position relationship information, the at least two image information, and the topology structure data for each of the at least two sets of the multi-camera in S530 is similar to S140, and will not be described herein.
S540, performing fusion calculation on the first position information corresponding to each of the at least two groups of multi-view cameras to obtain second position information of the tracked object, and determining the second position information as the position information of the tracked object.
In some embodiments of the present disclosure, after obtaining first position information corresponding to at least two sets of multi-camera, the electronic device performs mean calculation on the first position information corresponding to at least two sets of multi-camera, to obtain second position information of the tracked object, and determines the second position information as the position information of the tracked object.
In other embodiments of the present disclosure, after obtaining the first position information corresponding to the at least two sets of multi-camera, the electronic device performs a weighted calculation on the first position information based on the first position information corresponding to the at least two sets of multi-camera and a preset error of the at least two sets of multi-camera, to obtain the second position information of the tracked object, and determines the second position information as the position information of the tracked object.
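A small sketch of the two fusion strategies described above, assuming each camera group reports a 3-D position and, optionally, a scalar preset error; inverse-error weighting is an illustrative choice, not a scheme fixed by the patent.

```python
import numpy as np

def fuse_positions(first_positions, preset_errors=None):
    """first_positions: (G, 3) positions from G camera groups; preset_errors: (G,) or None."""
    p = np.asarray(first_positions, dtype=float)
    if preset_errors is None:
        return p.mean(axis=0)                                # simple mean fusion
    w = 1.0 / np.asarray(preset_errors, dtype=float)         # smaller preset error -> larger weight
    return (w[:, None] * p).sum(axis=0) / w.sum()            # weighted fusion
```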
On the basis of the embodiment of the present disclosure, after S540, the tracking and positioning method based on the multi-view camera may further include: updating second preset relative positional relationship information between the at least two groups of multi-view cameras. Updating the second preset relative positional relationship information between the at least two groups of multi-view cameras may specifically include: determining a pose offset between the at least two groups of multi-view cameras based on the position information of the tracked object; and updating the second preset relative positional relationship information based on the pose offset.
In the embodiment of the disclosure, after the electronic device acquires the second position information, the electronic device determines the second position information as the position information of the tracked object, and updates the second preset relative position relationship information between at least two groups of multi-view cameras based on the second position information.
Specifically, the electronic device calculates the pose offset between the at least two groups of multi-view cameras based on the position information of the tracked object, that is, the second position information, such as the position coordinates and orientation of the tracked object, updates the second preset relative positional relationship information with this pose offset, and stores it in a transformation relationship list maintained among the groups of multi-view cameras.
In the embodiment of the present disclosure, when the at least two groups of multi-view cameras can simultaneously acquire image information of the tracked object, the second preset relative positional relationship information is updated with the obtained position information of the tracked object, so that the relative positional relationship information between the at least two groups of multi-view cameras is corrected automatically. This ensures that the field does not need to be re-scanned after the position or orientation of one or more groups of multi-view cameras changes, and further ensures the accuracy of the obtained position information of the tracked object.
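As an illustration of the pose-offset update, when both camera groups observe the same tracked object, the object's pose measured by each group yields the transform between the groups, which can replace the stored second preset relative positional relationship. The sketch assumes 4x4 homogeneous poses and hypothetical names.

```python
import numpy as np

def update_group_transform(T_obj_in_group_a, T_obj_in_group_b):
    """Both arguments are 4x4 poses of the same tracked object, one per camera group.
    Returns the transform mapping group-b coordinates into group-a coordinates, which can
    overwrite the corresponding entry in the transformation relationship list."""
    return T_obj_in_group_a @ np.linalg.inv(T_obj_in_group_b)
```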
S550, when at least two groups of multi-camera can not acquire the image information of the tracked object at the same time, acquiring second preset relative position relation information between the at least two groups of multi-camera.
In the embodiment of the present disclosure, when at least two sets of multi-camera cannot simultaneously acquire image information of a tracked object, the electronic device may acquire second preset relative positional relationship information between the at least two sets of multi-camera from a pre-stored memory based on the identifiers of the at least two sets of multi-camera.
In some embodiments of the present disclosure, the second preset relative positional relationship information may be positional relationship information between at least two sets of multi-camera in the memory acquired and stored when the area coverage test is performed on at least two sets of multi-camera and the preset coverage is satisfied.
In other embodiments of the present disclosure, the second preset relative positional relationship information may also be positional relationship information between at least two sets of multi-camera determined according to the positional information of the tracked object and stored in the memory when the at least two sets of multi-camera track and position the tracked object for the first time.
S560, controlling each group of multi-camera in the at least two groups of multi-camera to collect the image information of the tracked object at the same time, so as to obtain at least two image information of the tracked object.
In the embodiment of the present disclosure, in S560, each of the at least two sets of multi-camera is controlled to simultaneously acquire image information of the tracked object, and a specific implementation manner of obtaining at least two image information of the tracked object is similar to S120, and is not described herein.
S570, determining the position information of the tracked object based on the first preset relative position relation information, the second preset relative position relation information, at least two image information and topology structure data.
In the embodiment of the present disclosure, when only one group of the at least two groups of multi-view cameras can acquire image information of the tracked object, and/or the at least two groups of multi-view cameras cannot all acquire image information of the tracked object, the electronic device may extract feature points of the tracked object from the image information acquired by each first multi-view camera, that is, each group that can acquire image information of the tracked object, to obtain at least two feature points corresponding to the first multi-view camera. Three-dimensional first position information of the tracked feature points is then calculated according to the cameras corresponding to the at least two feature points and the first preset relative positional relationship information among the plurality of cameras in the first multi-view camera. Three-dimensional second position information of the feature points of the tracked object corresponding to the second multi-view camera, that is, the group that cannot acquire image information of the tracked object, is then obtained according to the second preset relative positional relationship information. The position and attitude of the tracked object under the coordinate systems of the at least two groups of multi-view cameras are further determined according to the first position information, the second position information and the topological structure data of the target rigid body, and the position information of the tracked object is determined therefrom.
In the embodiment of the present disclosure, when the at least two groups of multi-view cameras cannot simultaneously acquire image information of the tracked object, the second preset relative positional relationship information between the at least two groups of multi-view cameras is acquired, each camera in the at least two groups of multi-view cameras is controlled to acquire image information of the tracked object at the same time to obtain at least two pieces of image information of the tracked object, and the position information of the tracked object is then determined based on the first preset relative positional relationship information, the second preset relative positional relationship information, the at least two pieces of image information and the topological structure data of the target rigid body, which improves the accuracy of the obtained position information of the tracked object.
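For illustration, when only one camera group can see the tracked object, the stored transform between groups (the second preset relative positional relationship) lets that measurement be expressed in the other group's coordinate system. A one-function sketch under that assumption, with hypothetical names:

```python
import numpy as np

def pose_in_other_group(T_obj_in_group_a, T_group_b_from_group_a):
    """Express the object's 4x4 pose, measured by group A, in group B's coordinate system
    using the stored second preset relative positional relationship between the groups."""
    return T_group_b_from_group_a @ T_obj_in_group_a
```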
Fig. 6 is a schematic structural diagram of a tracking and positioning device based on a multi-camera according to an embodiment of the disclosure.
In the embodiment of the present disclosure, the tracking and positioning device based on the multi-view camera may be disposed in an electronic device and may be understood as a set of functional modules in the electronic device. Specifically, the electronic device may be a server or a terminal, the terminal specifically including a mobile phone, a computer, a tablet computer, or the like, which is not limited herein.
As shown in fig. 6, the multi-camera based tracking positioning device 600 may include a first acquisition module 610, a second acquisition module 620, a third acquisition module 630, and a first positioning module 640.
The first obtaining module 610 may be configured to obtain first preset relative positional relationship information between a plurality of cameras in the multi-view camera, where the plurality of cameras in the multi-view camera are respectively connected through a fixing frame with a preset fixing structure, so that the relative positional relationship between the plurality of cameras in the multi-view camera is fixed.
The second obtaining module 620 may be configured to control a plurality of cameras to simultaneously collect image information of a tracked object, so as to obtain at least two image information of the tracked object, where the tracked object is an object with a target rigid body preset.
The third acquisition module 630 may be configured to acquire topology data of the target rigid body.
The first positioning module 640 may be configured to determine location information of the tracked object based on the first preset relative location relationship information, the at least two image information, and the topology data, so as to perform tracking positioning on the tracked object.
In the embodiment of the present disclosure, first preset relative positional relationship information among a plurality of cameras in the multi-view camera can be acquired, the plurality of cameras being connected through a fixing frame with a preset fixed structure so that the relative positional relationship among them is fixed. The plurality of cameras are controlled to simultaneously acquire image information of the tracked object to obtain at least two pieces of image information of the tracked object, the tracked object being an object provided with a preset target rigid body, and topological structure data of the target rigid body are obtained at the same time. The position information of the tracked object is then determined based on the first preset relative positional relationship information, the at least two pieces of image information and the topological structure data, so that the tracked object is tracked and positioned.
In some embodiments of the present disclosure, the multi-camera based tracking and positioning device 600 may further include an image processing module 650.
The image processing module 650 may be specifically configured to, after controlling a plurality of cameras to collect image information of a tracked object at the same time, obtain at least two image information of the tracked object, perform binarization processing on the at least two image information to obtain at least two binarized image information of the tracked object.
In some embodiments of the present disclosure, the first positioning module 640 may include a feature extraction unit 6401, a coordinate system conversion unit 6402, and a first determination unit 6403.
The feature extraction unit 6401 may be configured to perform feature extraction on at least two pieces of binarized image information to obtain at least two feature points of the tracked object, and obtain coordinates of the at least two feature points under the same camera coordinate system based on the first preset relative positional relationship information.
The coordinate system conversion unit 6402 may be configured to convert, for each of the at least two feature points, the coordinates of the feature point under the same camera coordinate system into the target coordinate system to obtain the target coordinates corresponding to the feature point.
The first determination unit 6403 may be configured to determine the position information of the tracked object based on the topological structure data and the target coordinates.
In some embodiments of the present disclosure, at least two sets of multi-view cameras are included, wherein the at least two sets of multi-view cameras are within the same local area network.
The multi-camera based tracking positioning device 600 may also include a second positioning module 660.
The second positioning module 660 may be configured to track and position the tracked object based on at least two sets of multi-view cameras.
In some embodiments of the present disclosure, the second positioning module 660 may include a first control unit 6601, a first positioning unit 6602, and a second determination unit 6603.
The first control unit 6601 may be configured to, when the at least two groups of multi-view cameras can acquire image information of the tracked object at the same time, control each camera in each group of multi-view cameras to acquire image information of the tracked object at the same time, so as to obtain at least two pieces of image information of the tracked object.
The first positioning unit 6602 can be used for determining, for each of at least two sets of multi-camera, first position information of the tracked object based on the first preset relative position relationship information, the at least two image information, and the topology structure data.
The second determining unit 6603 may be configured to perform merging calculation on the first position information corresponding to at least two sets of multi-camera respectively, obtain second position information of the tracked object, and determine the second position information as the position information of the tracked object.
In some embodiments of the present disclosure, the second positioning module 660 may include an information acquisition unit 6604, a second control unit 6605, and a second positioning unit 6606.
The information obtaining unit 6604 may be used for obtaining second preset relative positional relationship information between at least two multi-camera groups when the at least two multi-camera groups cannot obtain the image information of the tracked object at the same time.
The second control unit 6605 can control each of the at least two multi-camera groups to collect the image information of the tracked object at the same time, so as to obtain at least two image information of the tracked object.
The second positioning unit 6606 can be used for determining the position information of the tracked object based on the first preset relative position relation information, the second preset relative position relation information, at least two image information and topology structure data.
In some embodiments of the present disclosure, the multi-camera based tracking and locating device 600 may further include an information update module 670.
The information updating module 670 may be configured to update second preset relative positional relationship information between at least two sets of multi-view cameras.
The information update module 670 may include an offset determination unit 6701 and an information update unit 6702, among others.
The offset determination unit 6701 may be used to determine a pose offset between at least two sets of multi-view cameras based on position information of the tracked object.
The information updating unit 6702 may be configured to update the second preset relative positional relationship information based on the pose offset.
It should be noted that, the tracking positioning device 600 based on the multi-camera shown in fig. 6 may perform the steps in the above method embodiments, and implement the processes and effects in the above method embodiments, which are not described herein.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
In the embodiment of the present disclosure, the electronic device shown in fig. 7 may be a server or a terminal, where the terminal specifically includes a mobile phone, a computer, a tablet computer, or the like, which is not limited herein.
As shown in fig. 7, the electronic device may include a processor 710 and a memory 720 storing computer program instructions.
In particular, the processor 710 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present disclosure.
Memory 720 may include mass storage for information or instructions. By way of example, and not limitation, memory 720 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 720 may include removable or non-removable (or fixed) media, where appropriate. Memory 720 may be internal or external to the integrated gateway device, where appropriate. In a particular embodiment, the memory 720 is a non-volatile solid-state memory. In a particular embodiment, the memory 720 includes read-only memory (ROM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), an electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 710 reads and executes the computer program instructions stored in the memory 720 to perform the steps of the multi-camera based tracking positioning method provided by the embodiments of the present disclosure.
In one example, the electronic device may also include a transceiver 730 and a bus 740. As shown in fig. 7, the processor 710, the memory 720, and the transceiver 730 are connected and communicate with each other through a bus 740.
Bus 740 includes hardware, software, or both. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 740 may include one or more buses, where appropriate.
The embodiments of the present disclosure also provide a computer-readable storage medium, which may store a computer program that, when executed by a processor, causes the processor to implement the multi-camera-based tracking and positioning method provided by the embodiments of the present disclosure.
The storage medium may include, for example, the memory 720 storing computer program instructions that are executable by the processor 710 of the electronic device to perform the multi-camera-based tracking and positioning method provided by the embodiments of the present disclosure. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, such as a ROM, a random access memory (RAM), a compact disc ROM (CD-ROM), a magnetic tape, a floppy disk, or an optical data storage device.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A tracking and positioning method based on a multi-camera, comprising:
acquiring first preset relative position relation information among a plurality of cameras in the multi-view camera, wherein the plurality of cameras are respectively connected through a fixing frame with a preset fixing structure so as to fix the relative position relation among the plurality of cameras;
controlling the plurality of cameras to acquire image information of a tracked object at the same time to obtain at least two pieces of image information of the tracked object, wherein the tracked object is an object with a preset target rigid body;
obtaining topological structure data of the target rigid body;
and determining the position information of the tracked object based on the first preset relative position relation information, the at least two pieces of image information and the topological structure data so as to track and position the tracked object.
2. The method of claim 1, wherein after the controlling the plurality of cameras to acquire image information of the tracked object at the same time to obtain at least two pieces of image information of the tracked object, the method further comprises:
and performing binarization processing on the at least two pieces of image information to obtain at least two pieces of binarized image information of the tracked object.
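As an informal illustration of the binarization step in claim 2, the following sketch shows one possible implementation; the use of OpenCV, grayscale input, and a fixed brightness threshold are assumptions made only for illustration and are not specified by the claim.

```python
# Illustrative only: binarize the captured images so that bright marker
# points on the target rigid body stand out. The threshold value and the
# use of OpenCV are assumptions, not taken from the patent.
import cv2

def binarize_images(gray_images, threshold=200):
    """Return binarized versions of the captured grayscale images."""
    binarized = []
    for img in gray_images:
        # Pixels brighter than `threshold` become 255, the rest become 0.
        _, bw = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)
        binarized.append(bw)
    return binarized
```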
3. The method of claim 2, wherein the determining the position information of the tracked object based on the first preset relative position relation information, the at least two pieces of image information, and the topological structure data comprises:
extracting features from the at least two pieces of binarized image information to obtain at least two feature points of the tracked object, and obtaining coordinates of the at least two feature points under the same camera coordinate system based on the first preset relative position relation information;
for each feature point in the at least two feature points, converting coordinates of the feature point under the same camera coordinate system into a target coordinate system to obtain a target coordinate corresponding to the feature point;
and determining the position information of the tracked object based on the topological structure data and the target coordinates.
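The following sketch illustrates one way the per-group computation of claim 3 could be realized: blob centroids serve as the feature points, two views are triangulated into a common camera coordinate system using projection matrices derived from the first preset relative position relation information, and the resulting points are converted into the target coordinate system. The blob-centroid extraction, the OpenCV triangulation call, and the 4x4 homogeneous transform representation are illustrative assumptions, not requirements of the claim.

```python
# Illustrative sketch of the per-group positioning pipeline in claim 3.
import cv2
import numpy as np

def extract_feature_points(binarized_img):
    """Return the pixel centroids of connected bright regions (marker blobs)."""
    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(binarized_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(points, dtype=np.float64)

def triangulate(pts_cam1, pts_cam2, P1, P2):
    """Recover 3D points in the common camera frame from two views.
    P1 and P2 are 3x4 projection matrices built from the intrinsics and the
    first preset relative position relation between the two cameras."""
    homog = cv2.triangulatePoints(P1, P2, pts_cam1.T, pts_cam2.T)  # 4xN
    return (homog[:3] / homog[3]).T

def to_target_frame(points_cam, T_cam_to_target):
    """Convert 3D points from the common camera frame to the target frame."""
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_cam_to_target @ homog.T).T[:, :3]
```

Matching the converted target coordinates against the topological structure data of the target rigid body (for example, by comparing inter-point distances) would then yield the position of the tracked object.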
4. The method of claim 1, wherein at least two groups of multi-view cameras are provided and the at least two groups of multi-view cameras are within the same local area network, the method further comprising:
tracking and positioning the tracked object based on the at least two groups of multi-view cameras.
5. The method of claim 4, wherein the tracking and positioning the tracked object based on the at least two groups of multi-view cameras comprises:
when the at least two groups of multi-view cameras are able to acquire image information of the tracked object at the same time, controlling, for each group of multi-view cameras, the cameras in the group to acquire image information of the tracked object at the same time, to obtain at least two pieces of image information of the tracked object;
determining, for each of the at least two groups of multi-view cameras, first position information of the tracked object based on the first preset relative position relation information, the at least two pieces of image information, and the topological structure data;
and merging the first position information respectively corresponding to the at least two groups of multi-view cameras to obtain second position information of the tracked object, and determining the second position information as the position information of the tracked object.
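The merging calculation in claim 5 is not pinned to a particular formula; a simple, optionally weighted, average of the per-group estimates is one possible choice, sketched below purely for illustration.

```python
# Illustrative fusion of the first position information computed by each
# camera group into second position information (claim 5). A plain or
# weighted average is an assumption; the patent does not fix the formula.
import numpy as np

def fuse_group_positions(first_positions, weights=None):
    """Merge per-group position estimates (Nx3 array) into one position."""
    first_positions = np.asarray(first_positions, dtype=np.float64)
    if weights is None:
        return first_positions.mean(axis=0)
    w = np.asarray(weights, dtype=np.float64)
    return (first_positions * w[:, None]).sum(axis=0) / w.sum()
```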
6. The method of claim 4, wherein the tracking and positioning the tracked object based on the at least two groups of multi-view cameras comprises:
when the at least two groups of multi-view cameras cannot acquire image information of the tracked object at the same time, acquiring second preset relative position relation information between the at least two groups of multi-view cameras;
controlling each group of multi-view cameras in the at least two groups of multi-view cameras to acquire image information of the tracked object at the same time, to obtain at least two pieces of image information of the tracked object;
and determining the position information of the tracked object based on the first preset relative position relation information, the second preset relative position relation information, the at least two pieces of image information, and the topological structure data.
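For claim 6, a minimal sketch of how the second preset relative position relation information could be applied is given below, assuming it is represented as a 4x4 homogeneous transform between the reference frames of the two camera groups; that representation is an assumption made for illustration only.

```python
# Illustrative sketch for claim 6: when only one camera group sees the
# tracked object, its position in that group's frame is mapped into the
# reference group's frame via the second preset relative position relation,
# represented here (as an assumption) by a 4x4 transform T_groupB_to_groupA.
import numpy as np

def map_to_reference_group(position_in_groupB, T_groupB_to_groupA):
    """Express a 3D position from group B's frame in group A's frame."""
    p = np.append(np.asarray(position_in_groupB, dtype=np.float64), 1.0)
    return (T_groupB_to_groupA @ p)[:3]
```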
7. The method of claim 5, wherein the method further comprises:
updating second preset relative position relation information between the at least two groups of multi-view cameras;
wherein the updating the second preset relative position relation information between the at least two groups of multi-view cameras comprises:
determining a pose offset between the at least two groups of multi-view cameras based on the position information of the tracked object;
and updating the second preset relative position relation information based on the pose offset.
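One plausible realization of claim 7, sketched below, estimates the pose offset by rigidly aligning tracked-object positions reported by both camera groups for the same instants (an SVD-based Kabsch alignment) and then composes that offset onto the stored relationship; the alignment method, the 4x4 transform representation, and the composition order are assumptions rather than requirements of the claim.

```python
# Illustrative sketch for claim 7: estimate the pose offset between two
# camera groups and fold it into the stored second preset relative
# position relation information.
import numpy as np

def rigid_align(src, dst):
    """Return the 4x4 rigid transform that best maps `src` points onto `dst`."""
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cd - R @ cs
    return T

def update_second_relation(T_groupB_to_groupA, pts_groupA, pts_groupB):
    """Re-estimate the pose offset and update the stored relationship."""
    # Map group-B observations through the currently stored relationship.
    homog = np.hstack([np.asarray(pts_groupB, dtype=np.float64),
                       np.ones((len(pts_groupB), 1))])
    mapped = (T_groupB_to_groupA @ homog.T).T[:, :3]
    # The residual rigid transform between the mapped points and the group-A
    # observations is the pose offset; compose it onto the stored relation.
    T_offset = rigid_align(mapped, pts_groupA)
    return T_offset @ T_groupB_to_groupA
```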
8. A tracking and positioning device based on a multi-camera, comprising:
the first acquisition module is used for acquiring first preset relative position relation information among a plurality of cameras in the multi-camera, wherein the plurality of cameras in the multi-camera are respectively connected through a fixing frame with a preset fixing structure so as to fix the relative position relation among the plurality of cameras in the multi-camera;
the second acquisition module is used for controlling the plurality of cameras to acquire image information of a tracked object at the same time to obtain at least two pieces of image information of the tracked object, wherein the tracked object is an object with a preset target rigid body;
the third acquisition module is used for acquiring the topological structure data of the target rigid body;
and the first positioning module is used for determining the position information of the tracked object based on the first preset relative position relation information, the at least two pieces of image information and the topological structure data so as to track and position the tracked object.
9. An electronic device, comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the multi-camera-based tracking and positioning method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, causes the processor to implement the multi-camera-based tracking and positioning method of any one of claims 1 to 7.
CN202310323947.5A 2023-03-29 2023-03-29 Tracking and positioning method, device, equipment and medium based on multi-camera Active CN116342662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310323947.5A CN116342662B (en) 2023-03-29 2023-03-29 Tracking and positioning method, device, equipment and medium based on multi-camera

Publications (2)

Publication Number Publication Date
CN116342662A true CN116342662A (en) 2023-06-27
CN116342662B CN116342662B (en) 2023-12-05

Family

ID=86882068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310323947.5A Active CN116342662B (en) 2023-03-29 2023-03-29 Tracking and positioning method, device, equipment and medium based on multi-camera

Country Status (1)

Country Link
CN (1) CN116342662B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106714681A (en) * 2014-07-23 2017-05-24 凯内蒂科尔股份有限公司 Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
CN107289931A (en) * 2017-05-23 2017-10-24 北京小鸟看看科技有限公司 A kind of methods, devices and systems for positioning rigid body
CN108186117A (en) * 2018-02-28 2018-06-22 安徽大中润科技有限公司 A kind of distribution optical alignment tracking system and method
CN111354018A (en) * 2020-03-06 2020-06-30 合肥维尔慧渤科技有限公司 Object identification method, device and system based on image
CN112215955A (en) * 2020-09-27 2021-01-12 深圳市瑞立视多媒体科技有限公司 Rigid body mark point screening method, device, system, equipment and storage medium
US20210056715A1 (en) * 2019-08-20 2021-02-25 Boe Technology Group Co., Ltd. Object tracking method, object tracking device, electronic device and storage medium
KR20210023431A (en) * 2019-08-23 2021-03-04 한국기계연구원 Position tracking system using a plurality of cameras and method for position tracking using the same
WO2022007886A1 (en) * 2020-07-08 2022-01-13 深圳市瑞立视多媒体科技有限公司 Automatic camera calibration optimization method and related system and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Hui; YANG Yu; CHEN Yao; QI Xiaolong: "Research on binocular camera calibration based on fixed pose constraint", 《机床与液压》 (Machine Tool & Hydraulics), pages 109-113 *

Also Published As

Publication number Publication date
CN116342662B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
KR102022388B1 (en) Calibration system and method using real-world object information
CN110348297B (en) Detection method, system, terminal and storage medium for identifying stereo garage
CN112595323A (en) Robot and drawing establishing method and device thereof
CN107038443B (en) Method and device for positioning region of interest on circuit board
CN103155001A (en) Online reference generation and tracking for multi-user augmented reality
US9297653B2 (en) Location correction apparatus and method
CN105992259B (en) Positioning detection method and device
CN106303942B (en) Wireless network fingerprint signal processing method and device
JP2015035685A (en) Radio communication system, radio terminal and radio communication method
CN112949782A (en) Target detection method, device, equipment and storage medium
CN111325122A (en) License plate correction method, ETC antenna device and computer-readable storage medium
CN110910459A (en) Camera device calibration method and device and calibration equipment
CN111935641B (en) Indoor self-positioning realization method, intelligent mobile device and storage medium
CN116342662B (en) Tracking and positioning method, device, equipment and medium based on multi-camera
KR100962177B1 (en) Monitoring system and method using cctv nearby moving object
CN113988228B (en) Indoor monitoring method and system based on RFID and vision fusion
CN117295158B (en) WiFi positioning method, device, equipment and medium based on fingerprint matching
JP2019133592A (en) Traffic light recognition device
CN117269910A (en) Multi-radar calibration method and system based on track matching
CN111337950B (en) Data processing method, device, equipment and medium for improving landmark positioning precision
CN115861597A (en) Bridge modal real-time identification method and system based on laser and computer vision
CN112837343B (en) Low-altitude unmanned-machine prevention and control photoelectric early warning identification method and system based on camera array
CN108632740B (en) Positioning method and device of user equipment
CN115146745A (en) Method, device and equipment for correcting point cloud data coordinate point positions and storage medium
CN114661049A (en) Inspection method, inspection device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant