CN115588040A - System and method for counting and positioning coordinates based on full-view imaging points - Google Patents

Info

Publication number
CN115588040A
CN115588040A
Authority
CN
China
Prior art keywords
coordinate system
pixel
world
view
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211105529.0A
Other languages
Chinese (zh)
Inventor
赵�权
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Huanyu Zhongheng Technology Co ltd
Original Assignee
Sichuan Huanyu Zhongheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Huanyu Zhongheng Technology Co ltd filed Critical Sichuan Huanyu Zhongheng Technology Co ltd
Priority to CN202211105529.0A priority Critical patent/CN115588040A/en
Publication of CN115588040A publication Critical patent/CN115588040A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Algebra (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image recognition and detection, and discloses a system and a method for coordinate statistics and positioning based on full-view imaging points. A pixel coordinate system is established from a view image containing a calibration object, and the world longitude and latitude coordinates corresponding to every pixel in the full view are derived by combining the calibration object's actual world longitude and latitude coordinates. A recognition algorithm then identifies the target object in a view image containing it, obtains the identified target object's pixel coordinates in the pixel coordinate system, and thereby obtains the target object's world longitude and latitude coordinates. The invention uses recognition algorithms, tracking algorithms, database storage and related technologies to convert pixel coordinates in a two-dimensional view into three-dimensional world coordinate information for real-time positioning. On the hardware side only a camera is needed for image acquisition, with all subsequent processing performed by a software program, so there is no signal-interference problem: positioning accuracy is high and unaffected by interference, reliability is strong, and the applicable scenarios are wide.

Description

System and method for counting and positioning coordinates based on full-view imaging point
Technical Field
The invention relates to the technical field of image recognition and detection, in particular to a system and a method for counting and positioning coordinates based on a full-view imaging point.
Background
Traditional real-time positioning mostly relies on GPS. Because GPS works by transmitting radio signals, it is susceptible to interference from other signals such as electric waves, and the signals can be blocked or reflected by walls and similar obstacles, so direct indoor positioning is impossible. In addition, GPS introduces many errors during positioning, such as satellite clock error, ionospheric propagation delay, and receiver noise. As a result, in urban traffic among forests of high-rise buildings and in remote plant areas, GPS positioning performs poorly because signals are blocked or signal base stations are few: when a safety accident occurs, the location cannot be determined immediately for safety monitoring, which delays rescue, and GPS plays no particularly positive role in the subsequent visual assessment of the accident.
Disclosure of Invention
Based on the above problems, the invention provides a system and a method for coordinate statistics and positioning based on full-view imaging points. It combines surveying-and-mapping technology with view recognition technology: from the pixel coordinates and world coordinates of an imaged calibration object, a world-coordinate model covering every pixel in the full view is derived, so that the world coordinate position corresponding to each full-view pixel coordinate point is calibrated. A recognition algorithm then identifies the target object and obtains its pixel coordinates and the corresponding world coordinates, achieving the purpose of expressing three-dimensional spatial coordinates through two-dimensional imaging.
In order to realize the technical effects, the technical scheme adopted by the invention is as follows:
a full view imaging point coordinate based statistical positioning system, comprising:
The view collection module: used for acquiring view images of the region to be detected, the view images containing a calibration object and a target object;
The full-view pixel coordinate generation module: used for establishing a pixel coordinate system from the view images and obtaining, from a view image containing the calibration object, the calibration object's position coordinates in that pixel coordinate system;
The world coordinate analysis module: used for deriving the world longitude and latitude coordinates corresponding to every full-view pixel from the calibration object's position coordinates in the pixel coordinate system and its actual world longitude and latitude coordinates;
The target object identification and positioning module: used for identifying the target object in a view image containing it by means of a recognition algorithm, obtaining the identified target object's pixel coordinates in the pixel coordinate system, and then obtaining the target object's world longitude and latitude coordinates from the full-view pixel-to-world correspondence produced by the world coordinate analysis module.
Further, the system comprises a coordinate information database module in communication with both the world coordinate analysis module and the target object identification and positioning module. It stores the correspondence between full-view pixels and world longitude and latitude coordinates produced by the world coordinate analysis module, and, once the target object identification and positioning module has obtained a target object's pixel coordinates, returns the corresponding world longitude and latitude coordinates to that module.
By storing in the coordinate information database module the world coordinates derived for every full-view pixel from the calibration object's pixel and world positions, the recognition algorithm can, after identifying a target object, quickly look up the target object's world position from its pixel coordinates without repeating the complex calculation, which improves recognition efficiency.
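
As a minimal sketch of what such a coordinate information database could look like in code (the class name, the nearest-neighbour fallback, and all coordinate values below are illustrative assumptions, not details given in the patent):

```python
# Sketch (not from the patent text): the coordinate-information database
# as an in-memory mapping from integer pixel coordinates to world
# longitude/latitude, with a nearest-neighbour fallback for pixels
# that were never explicitly stored.
class CoordinateDatabase:
    def __init__(self):
        self._table = {}  # (u, v) -> (lon, lat)

    def store(self, u, v, lon, lat):
        self._table[(u, v)] = (lon, lat)

    def lookup(self, u, v):
        """Return stored world coordinates, or the nearest stored pixel's."""
        if (u, v) in self._table:
            return self._table[(u, v)]
        # fall back to the closest calibrated pixel
        nearest = min(self._table, key=lambda p: (p[0] - u) ** 2 + (p[1] - v) ** 2)
        return self._table[nearest]

db = CoordinateDatabase()
db.store(100, 200, 104.06, 30.67)   # hypothetical calibration entries
db.store(500, 200, 104.07, 30.67)
result = db.lookup(102, 201)        # close to the first entry
```

A production system would hold the full per-pixel table rather than falling back to the nearest neighbour, but the lookup-instead-of-recompute idea is the same.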
Further, the view images are images of the region to be measured, containing the calibration object or the target object, acquired by the acquisition device at the same height from the same or different viewing angles.
Acquiring the views at the same height, whether from the same or different viewing angles, avoids the problem of inconsistent internal parameters (such as focal length) of the acquisition device at different heights, and thus preserves the accuracy of the derived correspondence model between the pixel coordinate system and the world coordinate system.
Further, the world coordinate analysis module resolves pixel coordinates in the pixel coordinate system into world longitude and latitude coordinates as follows:
establishing Cartesian coordinate systems comprising a pixel coordinate system, an image coordinate system, a camera coordinate system and a world longitude and latitude coordinate system; the image coordinate system gives the metric coordinates of a point on the view image and lies in the same plane as the pixel coordinate system; the camera coordinate system is a spatial coordinate system with the camera at its origin; the world longitude and latitude coordinate system is the longitude and latitude coordinate system of the real world;
constructing a plane homography model from a pixel coordinate system to a world longitude and latitude coordinate system;
and substituting the calibration object data set into the constructed plane homography model, and solving the position coordinates of any pixel points in the view mapped into the world coordinate system through a calibration object sample equation set according to the coordinates in the calibration object pixel coordinate system and the positions in the corresponding world longitude and latitude coordinate system.
Further, the construction process of the planar homography model from the pixel coordinate system to the world longitude and latitude coordinate system in the world coordinate analysis module comprises the following steps:
converting the position coordinates of a pixel point (u, v) in the pixel coordinate system into the distance coordinates (x, y) of the image coordinate system, between which the following relationship holds:
\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]
where d_x, d_y are the physical sizes of one pixel along the x and y axes of the image coordinate system, in millimeters/pixel, and (u_0, v_0) is the origin of the image coordinate system, i.e., the offset of the image-coordinate origin expressed in the pixel coordinate system;
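
A small numeric sketch of this pixel-to-image conversion; the sensor parameters d_x, d_y, u_0, v_0 below are assumed example values, not figures from the patent:

```python
# Invert u = x/d_x + u_0 and v = y/d_y + v_0 to recover the metric
# image-plane coordinates of a pixel. Default parameters are an
# illustrative 5-micron pixel pitch and a 1920x1080 principal point.
def pixel_to_image(u, v, dx=0.005, dy=0.005, u0=960.0, v0=540.0):
    return (u - u0) * dx, (v - v0) * dy

# a pixel 200 px right of and 200 px below the principal point
x, y = pixel_to_image(1160, 740)
```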
for any point (x, y) in the image coordinate system, based on the pinhole imaging principle, the acquisition device's internal focal-length parameter f is introduced; after matrix conversion, the corresponding point (x_c, y_c, z_c) in the camera coordinate system satisfies:
\[
z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}
\]
the corresponding position coordinate (x, y) in the camera coordinate system is obtained from the matrix relation through the (x, y) coordinate c ,y c ,z c );
a camera coordinate system point (x_c, y_c, z_c) and a world coordinate system point (x_w, y_w, z_w) are converted into each other through a rotation matrix R and a translation vector T; the conversion is rigid and is expressed as:
\[
\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}
= R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T
\]
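
The rigid conversion can be sketched as follows; R and T here are arbitrary example values rather than calibrated ones:

```python
import numpy as np

# Rigid change of frame: [x_c, y_c, z_c]^T = R @ [x_w, y_w, z_w]^T + T.
def world_to_camera(p_w, R, T):
    return R @ p_w + T

theta = np.pi / 2  # example: 90-degree rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([0.0, 0.0, 5.0])  # example: camera 5 m above the world origin
p_c = world_to_camera(np.array([1.0, 0.0, 0.0]), R, T)
```

Because the transform is rigid, it preserves distances: only the orientation and origin of the frame change, never the shape of the scene.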
point q = [ u, v,1 ] in the imaging pixel coordinate system on the view image plane] T Mapping to a world coordinate System one Point Q = [ x ] w ,y w ,z w ] T The relationship between the two is:
q=s·H·Q
where s is the scale factor and H is the homography matrix.
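
Since q = s·H·Q maps world points to pixels, recovering a world position from a pixel amounts to applying the inverse of H and dividing out the scale factor by normalizing the homogeneous third component. A sketch with a made-up H:

```python
import numpy as np

# Invert q = s * H * Q: for a pixel (u, v), compute H^{-1} @ [u, v, 1]
# and divide by the third homogeneous component to remove the scale s.
def pixel_to_world(u, v, H):
    Q = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return Q[0] / Q[2], Q[1] / Q[2]

# illustrative homography (world x,y in arbitrary units -> 640x480 pixels)
H = np.array([[640.0,    0.0, -6400.0],
              [  0.0, -480.0,  9600.0],
              [  0.0,    0.0,     1.0]])
xw, yw = pixel_to_world(640, 0, H)
```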
In this way, the pixel, image and camera coordinate systems are linked to the world coordinate system, a planar homography model from the pixel coordinate system to the world longitude and latitude coordinate system is established theoretically, and the calibration object data set is substituted into the model to resolve the world coordinates corresponding to every full-view pixel. The position in the world coordinate system of any pixel in the view is obtained by solving the calibration-object sample equation system, without solving for each individual parameter of the theoretical model, thereby calibrating the world coordinate position corresponding to every full-view pixel coordinate point.
Furthermore, before the target object identification and positioning module identifies the target object, view recognition technology is applied in advance: target objects in the view images are labeled with an object-detection annotation tool, the collected target-object data set is trained, and the recognition algorithm automatically learns to anchor objects from the trained data set, yielding preset bounding boxes predicted adaptively from the data set's object bounding boxes.
Because the recognition algorithm identifies the target object and automatically anchors it to obtain these adaptively predicted preset bounding boxes, feature points can be extracted at positions where the real bounding box resembles the anchor box. Such feature points represent the target object well, which makes the conversion of their positions from the view's pixel coordinate system into the target object's three-dimensional world longitude and latitude position more accurate.
In order to realize the technical effect, the invention also provides a full-view imaging point coordinate based statistical positioning method, which comprises the following steps:
receiving a view image of a region to be detected containing a calibration object or a target object;
establishing a pixel coordinate system according to the view image, and obtaining the coordinate of each calibration object in the pixel coordinate system;
deducing world longitude and latitude coordinates corresponding to the full-view pixel points through the pixel coordinates of the calibration object in the pixel coordinate system and the actual world longitude and latitude coordinates of the calibration object;
and identifying the target object in the view image through an identification algorithm, acquiring the pixel coordinate of the identified target object in a pixel coordinate system, and acquiring the world longitude and latitude coordinates of the target object according to the world longitude and latitude coordinates corresponding to the full-view image pixel point acquired by the world coordinate analysis module.
Furthermore, during target object identification, view recognition technology labels the objects in the collected view data with an object-detection annotation tool and trains on the collected object data set; the recognition algorithm automatically learns to anchor objects from the trained data set, obtains preset bounding boxes predicted adaptively from the data set's object bounding boxes, and assigns each object a corresponding ID.
In order to achieve the above technical effects, the invention further provides an electronic device comprising a memory and a processor, where the memory stores the full-view imaging point coordinate based statistical positioning system and the processor can execute and realize the functions of each component module of that system.
In order to achieve the above technical effects, the present invention further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements functions of each component module in a full-view imaging point coordinate based statistical positioning system.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention combines surveying-and-mapping technology with view recognition technology: from the pixel coordinates and world coordinates of an imaged calibration object, a world-coordinate model covering every full-view pixel is derived, calibrating the world coordinate position corresponding to each full-view pixel coordinate point. A recognition algorithm then identifies the target object and obtains its pixel coordinates and the corresponding world coordinates, achieving the purpose of expressing three-dimensional spatial coordinates through two-dimensional imaging.
2. On the hardware side the invention only needs a camera for image acquisition; all subsequent processing is done by a software program. There is no signal-interference problem: positioning accuracy is high, the system is unaffected by interference, reliability is strong, and the range of application scenarios is wide.
Drawings
FIG. 1 is a schematic diagram illustrating a transformation from a pixel coordinate system to world longitude and latitude coordinates in an embodiment;
FIG. 2 is a schematic diagram illustrating a transformation from a pixel coordinate system to an image coordinate system in an embodiment;
FIG. 3 is a result diagram of real-time display on a map after a target object is identified by a full-view imaging point coordinate statistical positioning system in the embodiment;
FIG. 4 is a block diagram of an embodiment of a statistical positioning system based on full-view imaging point coordinates;
fig. 5 is a block diagram showing the components of the electronic apparatus according to the embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and the accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not used as limiting the present invention.
The embodiment is as follows:
referring to fig. 1-5, a full view imaging point coordinate based statistical positioning system includes:
View collection module 1: used for acquiring view images of the region to be measured, obtaining view images containing the calibration object and the target object. In this embodiment the view collection module 1 acquires views as follows: a calibration object is placed in the region to be measured and its world longitude and latitude position is recorded; multi-angle view images containing the calibration object are then captured. A view acquisition device is fixedly installed in the scene and captures view images of the region to be detected that contain the target object.
Full-view pixel coordinate generation module 2: used for establishing a pixel coordinate system from the view images and obtaining, from a view image containing the calibration object, the calibration object's position coordinates in that pixel coordinate system;
World coordinate analysis module 3: used for deriving the world longitude and latitude coordinates corresponding to every full-view pixel from the calibration object's position coordinates in the pixel coordinate system and its actual world longitude and latitude coordinates;
Target object identification and positioning module 4: used for identifying the target object in a view image containing it by means of a recognition algorithm, obtaining the identified target object's pixel coordinates in the pixel coordinate system, and then obtaining the target object's world longitude and latitude coordinates from the full-view pixel-to-world correspondence produced by the world coordinate analysis module 3. In this embodiment the view image of the target object is also captured by the view acquisition device installed in the scene. Before the target object identification and positioning module 4 identifies the target object, view recognition technology is applied in advance: target objects in the view images are labeled with an object-detection annotation tool, the collected target-object data set is trained, and the recognition algorithm automatically learns to anchor objects from the trained data set, yielding preset bounding boxes predicted adaptively from the data set's object bounding boxes.
In this embodiment the system for coordinate statistics and positioning based on full-view imaging points further comprises a coordinate information database module 5, in communication with both the world coordinate analysis module 3 and the target object identification and positioning module 4. It stores the correspondence between full-view pixels and world longitude and latitude coordinates produced by the world coordinate analysis module 3, and, once the target object identification and positioning module 4 has obtained a target object's pixel coordinates, returns the corresponding world longitude and latitude coordinates to module 4.
The view images in this embodiment are images of the region to be measured, containing the calibration object or the target object, acquired by the acquisition device at the same height from the same or different viewing angles. The acquisition device can therefore collect views of the region to be measured in two ways:
the first is that: the same acquisition equipment (such as a camera) is selected, and then the multi-view images are acquired at the same height position above the area to be measured. Particularly, for the view images containing the calibration objects, the calibration objects can be placed at fixed positions of the to-be-measured areas, then the same acquisition equipment is used for carrying out multi-view image acquisition on the to-be-measured areas at the same height, and the method is used for deducing world longitude and latitude coordinate systems corresponding to pixel points of a full view according to the pixel coordinate systems established by the view images containing the calibration objects.
Secondly, the following steps: the same acquisition equipment carries out the view collection of the region to be measured in fixed position, when the view image that contains the calibration object is gathered, need once only place the calibration object in a plurality of different positions, perhaps move the position of calibration object after shooing a view image, carry out the view image collection that contains the calibration object in a plurality of positions, also can derive the world longitude and latitude coordinate system that the pixel point of full view corresponds according to the pixel coordinate system that contains the view image establishment of calibration object.
In this embodiment, the world coordinate analysis module 3 resolves pixel coordinates in the pixel coordinate system into world longitude and latitude coordinates as follows:
establishing Cartesian coordinate systems comprising a pixel coordinate system, an image coordinate system, a camera coordinate system and a world longitude and latitude coordinate system; the image coordinate system gives the metric coordinates of a point on the view image and lies in the same plane as the pixel coordinate system; the camera coordinate system is a spatial coordinate system with the camera at its origin; the world longitude and latitude coordinate system is the longitude and latitude coordinate system of the real world;
constructing a plane homography model from a pixel coordinate system to a world longitude and latitude coordinate system;
and substituting the calibration object data set into the constructed plane homography model, and solving the position coordinates of any pixel points in the view mapped into the world coordinate system through a calibration object sample equation set according to the coordinates in the calibration object pixel coordinate system and the positions in the corresponding world longitude and latitude coordinate system.
In this embodiment, the process of constructing the planar homography model from the pixel coordinate system to the world longitude and latitude coordinate system in the world coordinate analysis module 3 includes the following steps:
converting the position coordinates of a pixel point (u, v) in the pixel coordinate system into the distance coordinates (x, y) of the image coordinate system, between which the following relationship holds:
\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\]
where d_x, d_y are the physical sizes of one pixel along the x and y axes of the image coordinate system, in millimeters/pixel, and (u_0, v_0) is the origin of the image coordinate system, i.e., the offset of the image-coordinate origin expressed in the pixel coordinate system;
for any point (x, y) in the image coordinate system, based on the pinhole imaging principle, the acquisition device's internal focal-length parameter f is introduced; after matrix conversion, the corresponding point (x_c, y_c, z_c) in the camera coordinate system satisfies:
\[
z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}
\]
from the (x, y) coordinates, this matrix relationship yields the corresponding position (x_c, y_c, z_c) in the camera coordinate system;
a camera coordinate system point (x_c, y_c, z_c) and a world coordinate system point (x_w, y_w, z_w) are converted into each other through a rotation matrix R and a translation vector T; the conversion is rigid and is expressed as:
\[
\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}
= R \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + T
\]
point q = [ u, v,1 ] in the imaging pixel coordinate system on the view image plane] T Mapping to a world coordinate System one Point Q = [ x ] w ,y w ,z w ] T The relationship between the two is:
q=s·H·Q
where s is the scale factor and H is the homography matrix.
To illustrate the positioning system of this embodiment more clearly, a road segment is randomly selected as the region to be measured and the procedure is described concretely.
The positioning method comprises the following steps:
receiving a view image of a region to be detected containing a calibration object or a target object;
establishing a pixel coordinate system according to the view image, and obtaining the coordinate of each calibration object in the pixel coordinate system;
deducing world longitude and latitude coordinates corresponding to the full-view pixel points through pixel coordinates of a calibration object in a pixel coordinate system and actual world longitude and latitude coordinates of the calibration object;
and identifying the target object in the view image through an identification algorithm, acquiring the pixel coordinates of the identified target object in a pixel coordinate system, and acquiring the world longitude and latitude coordinates of the target object according to the world longitude and latitude coordinates corresponding to the full-view image pixel point acquired by the world coordinate analysis module 3.
The specific operation steps are as follows:
1) After the scene to be detected is selected, a calibration object is placed in it, and view images are captured from different viewing angles at the same height by the acquisition device to serve as the calibration object data set. The calibration object data are obtained through the acquisition device, and the relevant parameters of the calibration object data set, such as the longitude and latitude coordinates of the calibration objects in the scene, can be determined. In this embodiment the view images containing the calibration object are collected by an acquisition device carried on an unmanned aerial vehicle: after rising to a given height, the vehicle is moved horizontally to capture multi-view images of the region to be measured.
2) A real-time acquisition device is installed at a fixed position in the scene to capture view images containing target objects on the road in the same scene, forming the object data set. In this embodiment, the height at which this device captures the target-object view images is the same as the height at which the calibration object data set was collected.
3) The captured calibration object data set and the object data set collected in the scene are organized and transmitted to the view collection module 1.
4) The calibration object data set and the object data set collected in the scene are processed; the specific processing flow is as follows:
4.1) Four Cartesian coordinate systems are established for the data sets: a pixel coordinate system, an image coordinate system, a camera coordinate system, and a world longitude and latitude coordinate system. The image coordinate system gives the distance coordinates of pixel points within the view image; the camera coordinate system is a spatial rectangular coordinate system constructed with the camera position as its coordinate origin.
The coordinates of any point in these four coordinate systems are denoted (u, v), (x, y), (x_c, y_c, z_c), and (x_w, y_w, z_w), respectively; the relationship between the four coordinate systems is shown schematically in fig. 1.
The view image is a dot matrix formed by pixel points, and the position coordinates of a pixel point (u, v) must be converted into the distance coordinates (x, y) of the image coordinate system. That is, for any pixel point (u, v) in the pixel coordinate system, the imaging principle gives the following relationship between the two:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{d_x} & 0 & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
wherein d_x, d_y are the dimensions of each pixel point along the x-axis and y-axis of the image coordinate system, in millimeters per pixel, and (u_0, v_0) is the offset of the image coordinate origin in the pixel coordinate system. FIG. 2 is a schematic diagram of the pixel coordinate system and the image coordinate system;
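As a minimal illustration of the relation above, the pixel-to-image conversion can be sketched in Python (the values of d_x, d_y and the offset (u_0, v_0) below are hypothetical; in practice they come from the calibration of the acquisition equipment):

```python
def pixel_to_image(u, v, d_x, d_y, u0, v0):
    """Convert a pixel coordinate (u, v) into the distance coordinate
    (x, y) of the image coordinate system, inverting
    u = x / d_x + u0,  v = y / d_y + v0.
    d_x, d_y are the pixel sizes in millimeters per pixel; (u0, v0)
    is the offset of the image-coordinate origin, in pixels."""
    return (u - u0) * d_x, (v - v0) * d_y

# e.g. a 1920x1080 sensor with 0.005 mm pixels and a centered origin
x, y = pixel_to_image(1060, 540, 0.005, 0.005, 960, 540)
```

This is an illustrative sketch rather than the patent's own program; it only makes the matrix relation concrete.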
4.2) Starting from a point (x, y) in the image coordinate system and based on the pinhole imaging principle, the internal parameter of the acquisition equipment, the focal length f, is introduced; matrix conversion relates (x, y) to the corresponding point (x_c, y_c, z_c) in the camera coordinate system, with the following relationship:
$$z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}$$
Through the (x, y) coordinates, the corresponding position coordinates (x_c, y_c, z_c) in the camera coordinate system are obtained from the matrix relation.
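For concreteness, the pinhole relation can be sketched as a forward projection from the camera frame onto the image plane (the focal length value used in the example is illustrative only):

```python
import numpy as np

def camera_to_image(point_c, f):
    """Project a camera-frame point (x_c, y_c, z_c) onto the image
    plane with the pinhole model:
    z_c * [x, y, 1]^T = P @ [x_c, y_c, z_c, 1]^T,
    where P = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]]."""
    P = np.array([[f, 0, 0, 0],
                  [0, f, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
    h = P @ np.append(np.asarray(point_c, dtype=float), 1.0)
    return h[0] / h[2], h[1] / h[2]   # divide out z_c

# a point 10 units in front of a camera with f = 0.05
x, y = camera_to_image((1.0, 2.0, 10.0), 0.05)
```

Recovering (x_c, y_c, z_c) from (x, y) alone is under-determined (depth is lost); in the patent's pipeline the depth ambiguity is resolved by the planar homography of step 4.4.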
4.3) A camera coordinate system point (x_c, y_c, z_c) and a world coordinate system point (x_w, y_w, z_w) are converted into each other through a rotation matrix R and a translation matrix T; the conversion is a rigid transformation, expressed as:
$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
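The rigid transformation above can be sketched directly from the block matrix (the R and T used in any example would come from extrinsic calibration; here they are placeholders):

```python
import numpy as np

def world_to_camera(point_w, R, T):
    """Apply the rigid transform
    [x_c, y_c, z_c, 1]^T = [[R, T], [0, 1]] @ [x_w, y_w, z_w, 1]^T."""
    M = np.eye(4)
    M[:3, :3] = R          # 3x3 rotation block
    M[:3, 3] = T           # 3x1 translation block
    return (M @ np.append(np.asarray(point_w, dtype=float), 1.0))[:3]

def camera_to_world(point_c, R, T):
    """Invert the rigid transform: x_w = R^T @ (x_c - T)."""
    return R.T @ (np.asarray(point_c, dtype=float) - T)
```

Because R is orthonormal, the inverse needs no matrix inversion, only a transpose.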
4.4) In computer vision, a planar homography is a projective mapping from one plane to another. The mapping from a point in the pixel coordinate system to a point position in the world coordinate system is such a planar homography. Using the calibration objects, a calibrated model plane can be established; a point q = [u, v, 1]^T of the imaging pixel coordinate system on the model plane maps to a world coordinate system point Q = [x_w, y_w, z_w]^T, and the relationship between the two is:
q=s·H·Q
wherein s is a scale factor; the homography matrix H has 9 elements, but considering homogeneous coordinates it has 8 parameters to solve, requiring at least 4 points in the view. The scale factor s is a constant coefficient that keeps the scale unchanged during the conversion between the coordinate systems. From the collected calibration object data set, the coordinates of the calibration objects in the pixel coordinate system at different suitable viewing angles and their corresponding positions in the world coordinate system are obtained; by solving the system of equations formed by the calibration object samples, the world coordinate system position to which any pixel point in the view maps can be solved.
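The solving step described above can be sketched with a direct linear transform (DLT); this is a generic sketch under the convention q = s·H·Q of the text, not the patent's exact program. Given at least 4 calibration-object correspondences between the pixel plane and the world (ground) plane, H is recovered up to scale, after which its inverse maps any pixel point to plane coordinates:

```python
import numpy as np

def solve_homography(pixel_pts, plane_pts):
    """Estimate H (up to scale) such that s*[u, v, 1]^T = H @ [X, Y, 1]^T,
    from >= 4 correspondences, via the direct linear transform (DLT).
    plane_pts lie on the calibration (ground) plane, so only their
    (X, Y) components are used."""
    A = []
    for (u, v), (X, Y) in zip(pixel_pts, plane_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # H is the right-singular vector of A with the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the free scale so H[2, 2] = 1

def pixel_to_plane(H, u, v):
    """Map a pixel (u, v) back to plane coordinates via H^-1."""
    w = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return w[0] / w[2], w[1] / w[2]
```

With more than 4 calibration points the same SVD gives a least-squares solution, which is how an over-determined calibration sample set would be used.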
5) The view pixels are gridded, and a characteristic value is established for each pixel point in the view (in this embodiment, the pixel's position coordinates in the pixel coordinate system). Through the mapping conversion above, the world coordinate system positions corresponding to all pixel points of the full view can be obtained. (Because of the near-large, far-small imaging principle, the length of the world position represented by each pixel point of the view differs.)
6) A coordinate information database is established, and all pixel points acquired from the full view are stored in the database with a one-to-one correspondence between their characteristic values and the mapped world position coordinates.
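Steps 5) and 6) amount to precomputing a pixel-to-world lookup table over the gridded view. A minimal in-memory sketch follows (a real deployment would persist this in a database, and `pixel_to_world` would be the solved homography mapping; the identity mapping below is a stand-in for illustration):

```python
def build_coordinate_table(width, height, pixel_to_world):
    """Grid every pixel of the full view and store, keyed by the
    pixel's characteristic value (here simply its (u, v) position),
    the world position it maps to under pixel_to_world(u, v)."""
    return {(u, v): pixel_to_world(u, v)
            for v in range(height) for u in range(width)}

# identity mapping as a placeholder for the solved homography
table = build_coordinate_table(4, 3, lambda u, v: (float(u), float(v)))
```

Once built, positioning a detected object reduces to a constant-time dictionary lookup rather than a per-frame matrix solve.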
7) Using view recognition technology, the objects in the view acquisition data are labeled with a target detection labeling tool and the collected object data set is trained; the recognition algorithm automatically learns from the trained data set to anchor objects, preset bounding boxes are obtained from adaptive data set bounding box prediction, and a corresponding ID is assigned to each object.
8) The pixel coordinates returned by the recognition algorithm for an object in the view are used to look up the corresponding pixel characteristic value in the database, obtain the corresponding world coordinate system position, and display the object's position on a map. (In this embodiment, the theoretical pixel coordinate derivation is computed programmatically, mainly in the Python language; the real-time positioning display is a web page written in HTML with JavaScript; the map is based on the WGS-84 standard.) The result is displayed as shown in fig. 3.
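Step 8)'s lookup can be sketched as follows. The bottom-centre anchor point is an assumption for illustration (a common choice for objects standing on the ground plane); the embodiment only states that the pixel coordinates returned by the recognition algorithm are used:

```python
def locate_object(bbox, table):
    """Given a detector bounding box (u_min, v_min, u_max, v_max),
    take the bottom-centre pixel as the object's ground contact point
    and look up its precomputed world coordinate in the table built
    from the full view. Returns None if the pixel is not in the table."""
    u_min, v_min, u_max, v_max = bbox
    anchor = ((u_min + u_max) // 2, v_max)
    return table.get(anchor)
```

The returned world coordinate would then be handed to the web display layer for rendering on the WGS-84 map.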
It should be noted that after the acquisition equipment (generally a camera) captures the region to be measured, the near-large, far-small imaging principle means that the world coordinate position corresponding to a pixel point far from the acquisition equipment is less precise than that of a pixel point near it. The world coordinates obtained according to this embodiment therefore represent areas within a certain radius around each coordinate point; in general, the farther a pixel point is from the acquisition equipment, the larger the corresponding radius, i.e., the lower the accuracy. However, provided that the positioning accuracy requirement is satisfied (the acquisition equipment is placed at a position such that the derived world coordinate accuracy meets the positioning requirement of the scene), the calculated world coordinate position is generally taken to represent the position of the target object.
As shown in fig. 5, this embodiment also provides an electronic device, which may include a processor 51 and a memory 52, the memory 52 comprising a target object identification module, a view pixel coordinate generation module, and a longitude and latitude coordinate generation module, with the memory 52 coupled to the processor 51. It is noted that this diagram is exemplary; other types of structures may be used in addition to or in place of it to implement data extraction, report generation, communication, or other functionality.
The electronic device may further include: an input unit 53, a display unit 54, and a power supply 55. It is to be noted that the electronic device does not necessarily comprise all components shown in fig. 5. Furthermore, the electronic device may also comprise components not shown in fig. 5, reference being made to the prior art.
The processor 51, sometimes also referred to as a controller or operation control, may include a microprocessor or other processor device and/or logic device; the processor 51 receives input and controls the operation of each component of the electronic device.
The memory 52 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable devices, and may store the configuration information of the processor 51, the instructions executed by the processor 51, the recorded table data, and other information. The processor 51 may execute a program stored in the memory 52 to realize information storage or processing, or the like. In one embodiment, a buffer memory, i.e., a buffer, is also included in the memory 52 to store the intermediate information.
The input unit 53 is for example used to provide the respective text report to the processor 51. The display unit 54 is used for displaying various results in the process, and the display unit 54 may be, for example, an LCD display, but the present invention is not limited thereto. The power supply 55 is used to provide power to the electronic device.
The embodiment of the invention also provides a storage medium storing computer readable instructions, wherein the computer readable instructions enable the electronic device to realize the functions of the modules in the system.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; to clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above is an embodiment of the present invention. The embodiments and specific parameters in the embodiments are only for the purpose of clearly illustrating the verification process of the invention and are not intended to limit the scope of the invention, which is defined by the claims, and all equivalent structural changes made by using the contents of the specification and the drawings of the present invention should be included in the present invention.

Claims (10)

1. A full view imaging point coordinate based statistical positioning system, comprising:
the view collection module: used for receiving view images of the region to be detected, the view images containing calibration objects and target objects;
a full-view pixel coordinate generation module: used for establishing a pixel coordinate system according to the view images and obtaining, from the view images containing calibration objects, the position coordinates of the calibration objects in the pixel coordinate system;
a world coordinate analysis module: used for deducing the world longitude and latitude coordinates corresponding to the full-view pixel points from the pixel coordinates of the calibration objects in the pixel coordinate system and the actual world longitude and latitude coordinates of the calibration objects;
the target object identification and positioning module: used for identifying the target object in the view images through a recognition algorithm, acquiring the pixel coordinates of the identified target object in the pixel coordinate system, and obtaining the world longitude and latitude coordinates of the target object according to the world longitude and latitude coordinates corresponding to the full-view image pixel points obtained by the world coordinate analysis module.
2. The full view imaging point coordinate based statistical positioning system of claim 1, further comprising a coordinate information database module in communication connection with the world coordinate analysis module and with the target object identification and positioning module, respectively; it is used for storing the correspondence data between the full-view image pixel points analyzed by the world coordinate analysis module and their world longitude and latitude coordinates, or for returning, after the target object identification and positioning module obtains the pixel coordinates of a target object, the world longitude and latitude coordinates corresponding to those pixel coordinates to the target object identification and positioning module.
3. The system according to claim 1, wherein the view images are images of the region to be measured containing the calibration object or the target object at the same or different viewing angles and acquired by the acquisition equipment at the same height.
4. The system according to claim 3, wherein the process of resolving the coordinates of the pixel points in the pixel coordinate system into world longitude and latitude coordinates by the world coordinate resolution module is as follows:
establishing a Cartesian coordinate system which comprises a pixel coordinate system, an image coordinate system, a camera coordinate system and a world longitude and latitude coordinate system; the image coordinate system is a distance coordinate of a certain point on the view image, and the image coordinate system and the pixel coordinate system are in the same plane; the camera coordinate system is a space coordinate system established by taking a camera as a central origin; the world longitude and latitude coordinate system is a longitude and latitude coordinate system of the real world;
constructing a plane homography model from a pixel coordinate system to a world longitude and latitude coordinate system;
and substituting the calibration object data set into the constructed plane homography model, and solving the position coordinates of any pixel points in the view mapped into the world coordinate system through a calibration object sample equation set according to the coordinates in the calibration object pixel coordinate system and the positions in the corresponding world longitude and latitude coordinate system.
5. The system according to claim 4, wherein the process of constructing the planar homography model from the pixel coordinate system to the world longitude and latitude coordinate system in the world coordinate analysis module comprises the following steps:
converting the position coordinates of the pixel points (u, v) in the pixel coordinate system into distance coordinates (x, y) of the image coordinate system, wherein the distance coordinates (x, y) have the following relationship:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{d_x} & 0 & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
wherein d_x, d_y are the dimensions of each pixel point on the x-axis and y-axis of the image coordinate system, in millimeters per pixel; (u_0, v_0) is the coordinate origin of the image coordinate system, namely the offset of the image coordinate origin in the pixel coordinate system;
according to any pixel point (x, y) in the image coordinate system, based on the pinhole imaging principle, the internal focal length parameter f of the acquisition equipment is introduced, and matrix conversion gives the corresponding point (x_c, y_c, z_c) in the camera coordinate system, with the following relationship:
$$z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}$$
through the (x, y) coordinates, the corresponding position coordinates (x_c, y_c, z_c) in the camera coordinate system are obtained from the matrix relation;
a camera coordinate system point (x_c, y_c, z_c) and a world coordinate system point (x_w, y_w, z_w) are converted into each other through a rotation matrix R and a translation matrix T; the conversion is a rigid transformation, expressed as:
$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
point q = [ u, v,1 ] in the imaging pixel coordinate system on the view image plane] T Mapping to a world coordinate System one Point Q = [ x ] w ,y w ,z w ] T The relationship between the two is:
q=s·H·Q
wherein s is a scale factor and H is a homography matrix.
6. The system according to claim 1, wherein before the target object is identified in the target object identification and positioning module, the target objects in the view images are labeled in advance with a target detection labeling tool using view recognition technology, the collected target object data set is trained, and the recognition algorithm automatically learns from the trained data set to anchor objects, obtaining preset bounding boxes for adaptive data set object bounding box prediction.
7. A full-view imaging point coordinate based statistical positioning method is characterized by comprising the following steps:
receiving a view image of a region to be detected containing a calibration object or a target object;
establishing a pixel coordinate system according to the view image, and obtaining the coordinate of each calibration object in the pixel coordinate system;
deducing world longitude and latitude coordinates corresponding to the full-view pixel points through pixel coordinates of a calibration object in a pixel coordinate system and actual world longitude and latitude coordinates of the calibration object;
and identifying the target object in the view image through an identification algorithm, acquiring the pixel coordinates of the identified target object in a pixel coordinate system, and acquiring the world longitude and latitude coordinates of the target object according to the world longitude and latitude coordinates corresponding to the full-view image pixel point acquired by the world coordinate analysis module.
8. The full-view imaging point coordinate based statistical positioning method according to claim 7, wherein: when the target object is identified, the objects in the view acquisition data are labeled with a target detection labeling tool using view recognition technology, the collected object data set is trained, the recognition algorithm automatically learns from the trained data set to anchor objects, preset bounding boxes for adaptive data set object bounding box prediction are obtained, and a corresponding ID is assigned to each object.
9. An electronic device comprising a memory and a processor, wherein the memory stores the system for statistical positioning based on full-view imaging point coordinates according to any one of claims 1 to 6, and the processor can execute and implement the functions of the various modules in the system for statistical positioning based on full-view imaging point coordinates.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the functions of the respective constituent modules of the full-view imaging point coordinate based statistical positioning system according to any one of claims 1 to 6.
CN202211105529.0A 2022-09-09 2022-09-09 System and method for counting and positioning coordinates based on full-view imaging points Pending CN115588040A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211105529.0A CN115588040A (en) 2022-09-09 2022-09-09 System and method for counting and positioning coordinates based on full-view imaging points


Publications (1)

Publication Number Publication Date
CN115588040A true CN115588040A (en) 2023-01-10

Family

ID=84772837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211105529.0A Pending CN115588040A (en) 2022-09-09 2022-09-09 System and method for counting and positioning coordinates based on full-view imaging points

Country Status (1)

Country Link
CN (1) CN115588040A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107481284A (en) * 2017-08-25 2017-12-15 京东方科技集团股份有限公司 Method, apparatus, terminal and the system of target tracking path accuracy measurement
CN109472829A (en) * 2018-09-04 2019-03-15 顺丰科技有限公司 A kind of object positioning method, device, equipment and storage medium
CN111461994A (en) * 2020-03-30 2020-07-28 苏州科达科技股份有限公司 Method for obtaining coordinate transformation matrix and positioning target in monitoring picture
CN113850126A (en) * 2021-08-20 2021-12-28 武汉卓目科技有限公司 Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle
CN114862973A (en) * 2022-07-11 2022-08-05 中铁电气化局集团有限公司 Space positioning method, device and equipment based on fixed point location and storage medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524017A (en) * 2023-03-13 2023-08-01 明创慧远科技集团有限公司 Underground detection, identification and positioning system for mine
CN116524017B (en) * 2023-03-13 2023-09-19 明创慧远科技集团有限公司 Underground detection, identification and positioning system for mine
CN116128981A (en) * 2023-04-19 2023-05-16 北京元客视界科技有限公司 Optical system calibration method, device and calibration system
CN116597150A (en) * 2023-07-14 2023-08-15 北京科技大学 Deep learning-based oblique photography model full-element singulation method and device
CN116597150B (en) * 2023-07-14 2023-09-22 北京科技大学 Deep learning-based oblique photography model full-element singulation method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination